Sample records for validated case finding

  1. 42 CFR 488.330 - Certification of compliance or noncompliance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... State survey agency may be followed by a Federal validation survey. (A) The State certifies the..., it is final, except in the case of a complaint or validation survey conducted by CMS, or CMS review... finding of noncompliance takes precedence over that of compliance. (ii) In the case of a validation survey...

  2. 42 CFR 488.330 - Certification of compliance or noncompliance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... State survey agency may be followed by a Federal validation survey. (A) The State certifies the..., it is final, except in the case of a complaint or validation survey conducted by CMS, or CMS review... finding of noncompliance takes precedence over that of compliance. (ii) In the case of a validation survey...

  3. 42 CFR 488.330 - Certification of compliance or noncompliance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... State survey agency may be followed by a Federal validation survey. (A) The State certifies the..., it is final, except in the case of a complaint or validation survey conducted by CMS, or CMS review... finding of noncompliance takes precedence over that of compliance. (ii) In the case of a validation survey...

  4. 42 CFR 488.330 - Certification of compliance or noncompliance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... State survey agency may be followed by a Federal validation survey. (A) The State certifies the..., it is final, except in the case of a complaint or validation survey conducted by CMS, or CMS review... finding of noncompliance takes precedence over that of compliance. (ii) In the case of a validation survey...

  5. 42 CFR 488.330 - Certification of compliance or noncompliance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... State survey agency may be followed by a Federal validation survey. (A) The State certifies the..., it is final, except in the case of a complaint or validation survey conducted by CMS, or CMS review... finding of noncompliance takes precedence over that of compliance. (ii) In the case of a validation survey...

  6. NLP based congestive heart failure case finding: A prospective analysis on statewide electronic medical records.

    PubMed

    Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-12-01

In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case-finding algorithm is required that can process both structured and unstructured electronic medical records (EMR), allowing complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both EMR-codified and natural language processing (NLP)-found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case-finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE-associated facilities and blind-tested with the remaining facilities. The NLP-based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codings, our case-finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) was 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case-finding algorithm was developed, tested, and prospectively validated. The successful integration of the case-finding algorithm into the live Maine HIE system is expected to improve CHF care in Maine. Copyright © 2015. Published by Elsevier Ireland Ltd.
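The PPV reported above reduces to a one-line calculation over chart-review counts. A minimal sketch; the confirmed/flagged counts below are illustrative, back-calculated from the reported 0.914, not taken from the study:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): the fraction of flagged cases that are real."""
    return true_positives / (true_positives + false_positives)

# Illustrative numbers only: if chart review confirmed 2204 of the 2411
# flagged patients, the PPV would be 2204/2411, matching the reported 0.914.
ppv = positive_predictive_value(2204, 2411 - 2204)
print(round(ppv, 3))  # 0.914
```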

  7. School-Based Asthma Case Finding: The Arkansas Experience

    ERIC Educational Resources Information Center

    Vargas, Perla A.; Magee, James S.; Bushmiaer, Margo; Simpson, Pippa M.; Jones, Craig A.; Feild, Charles R.; Jones, Stacie M.

    2006-01-01

    This population-based case-finding study sought to determine asthma prevalence and characterize disease severity and burden among school-aged children in the Little Rock School District. Asthma cases were identified by validated algorithm and parental report of asthma diagnosis. The overall response rate was low. Among schools with greater than…

  8. Conflicting Discourses in Qualitative Research: The Search for Divergent Data within Cases

    ERIC Educational Resources Information Center

    Antin, Tamar M. J.; Constantine, Norman A.; Hunt, Geoffrey

    2015-01-01

    The search for disconfirming evidence, or negative cases, is often considered a valuable strategy for assessing the credibility or validity of qualitative research claims. This article draws on a multimethod qualitative research project to illustrate how a search for disconfirming evidence evolved from a check on the validity of findings to a…

  9. Development and validation of case-finding algorithms for the identification of patients with anti-neutrophil cytoplasmic antibody-associated vasculitis in large healthcare administrative databases.

    PubMed

    Sreih, Antoine G; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A

    2016-12-01

The aim of this study was to develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (Churg-Strauss, EGPA). Two hundred fifty patients per disease were randomly selected from two large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). Sixteen case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the diagnoses (alveolar hemorrhage, interstitial lung disease, glomerulonephritis, and acute or chronic kidney disease), encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the anti-neutrophil cytoplasmic antibody type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
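A case-finding algorithm of the kind described reduces to a boolean filter over claims records. A minimal sketch, with hypothetical field names and criteria only loosely modeled on the GPA algorithm above, not the study's actual rule:

```python
# Hypothetical record schema; real claims data would carry many more fields.
def flag_gpa_candidate(rec: dict) -> bool:
    """Rule-based filter sketch: ICD9 code plus encounter type and specialty,
    excluding eosinophilia/asthma (which point toward EGPA instead)."""
    return (
        rec["icd9"] == "446.4"
        and rec["encounter"] in {"inpatient", "outpatient"}
        and rec["specialty"] in {"rheumatology", "nephrology", "pulmonology"}
        and not rec["eosinophilia"]
        and not rec["asthma"]
    )

records = [
    {"icd9": "446.4", "encounter": "inpatient", "specialty": "rheumatology",
     "eosinophilia": False, "asthma": False},
    {"icd9": "446.4", "encounter": "outpatient", "specialty": "rheumatology",
     "eosinophilia": True, "asthma": True},
]
print([flag_gpa_candidate(r) for r in records])  # [True, False]
```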

  10. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    PubMed Central

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

Purpose: To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods: 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results: An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses (alveolar hemorrhage, interstitial lung disease, glomerulonephritis, or acute or chronic kidney disease), the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Conclusion: Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171

  11. Long-Term Impact of Valid Case Criterion on Capturing Population-Level Growth under Item Response Theory Equating. Research Report. ETS RR-17-17

    ERIC Educational Resources Information Center

    Deng, Weiling; Monfils, Lora

    2017-01-01

    Using simulated data, this study examined the impact of different levels of stringency of the valid case inclusion criterion on item response theory (IRT)-based true score equating over 5 years in the context of K-12 assessment when growth in student achievement is expected. Findings indicate that the use of the most stringent inclusion criterion…

  12. Validity of juvenile idiopathic arthritis diagnoses using administrative health data.

    PubMed

    Stringer, Elizabeth; Bernatsky, Sasha

    2015-03-01

    Administrative health databases are valuable sources of data for conducting research including disease surveillance, outcomes research, and processes of health care at the population level. There has been limited use of administrative data to conduct studies of pediatric rheumatic conditions and no studies validating case definitions in Canada. We report a validation study of incident cases of juvenile idiopathic arthritis in the Canadian province of Nova Scotia. Cases identified through administrative data algorithms were compared to diagnoses in a clinical database. The sensitivity of algorithms that included pediatric rheumatology specialist claims was 81-86%. However, 35-48% of cases that were identified could not be verified in the clinical database depending on the algorithm used. Our case definitions would likely lead to overestimates of disease burden. Our findings may be related to issues pertaining to the non-fee-for-service remuneration model in Nova Scotia, in particular, systematic issues related to the process of submitting claims.

  13. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from samples that are "different but related" to the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, the two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Validation of a computer case definition for sudden cardiac death in opioid users

    PubMed Central

    2012-01-01

Background: To facilitate the use of automated databases for studies of sudden cardiac death, we previously developed a computerized case definition that had a positive predictive value between 86% and 88%. However, the definition has not been specifically validated for prescription opioid users, for whom out-of-hospital overdose deaths may be difficult to distinguish from sudden cardiac death. Findings: We assembled a cohort of persons 30-74 years of age prescribed propoxyphene or hydrocodone who had no life-threatening non-cardiovascular illness, diagnosed drug abuse, residence in a nursing home in the past year, or hospital stay within the past 30 days. Medical records were sought for a sample of 140 cohort deaths within 30 days of a prescription fill meeting the computer case definition. Of the 140 sampled deaths, 81 were adjudicated; 73 (90%) were sudden cardiac deaths. Two deaths had possible opioid overdose; after removing these two, the positive predictive value was 88%. Conclusions: These findings are consistent with our previous validation studies and suggest the computer case definition of sudden cardiac death is a useful tool for pharmacoepidemiologic studies of opioid analgesics. PMID:22938531

  15. A comparison of the validity of GHQ-12 and CHQ-12 in Chinese primary care patients in Manchester.

    PubMed

    Pan, P C; Goldberg, D P

    1990-11-01

    The present study compares the efficacy of the GHQ-12 and the Chinese Health Questionnaire (CHQ-12) in Cantonese speaking Chinese primary-care patients living in Greater Manchester, using relative operating characteristic (ROC) analysis. We did not find that the Chinese version offered any advantage over the conventional version of the GHQ in this population. Stepwise discriminant analysis however confirmed the value of individual items in the former pertaining to specific somatic symptoms and interpersonal relationships in differentiating cases from non-cases. Information biases, arising from the lack of a reliability study on the second-stage case identifying interview and the unique linguistic characteristics of the Chinese language may have affected the overall validity indices of the questionnaires. The study also examines the effects of using different criteria to define a case, and shows that with increasing levels of severity, there is an improvement in the diagnostic performance of the two questionnaires as reflected by areas under ROC curves and traditional validity indices. Possible explanations of these findings are discussed. The scoring method proposed by Goodchild & Duncan-Jones (1985) when used on these questionnaires had no demonstrable advantage over the conventional scoring method.

  16. Case finding of lifestyle and mental health disorders in primary care: validation of the ‘CHAT’ tool

    PubMed Central

    Goodyear-Smith, Felicity; Coupe, Nicole M; Arroll, Bruce; Elley, C Raina; Sullivan, Sean; McGill, Anne-Thea

    2008-01-01

Background: Primary care is accessible and ideally placed for case finding of patients with lifestyle and mental health risk factors and subsequent intervention. The short self-administered Case-finding and Help Assessment Tool (CHAT) was developed for lifestyle and mental health assessment of adult patients in primary health care. This tool checks for tobacco use, alcohol and other drug misuse, problem gambling, depression, anxiety and stress, abuse, anger problems, inactivity, and eating disorders. It is well accepted by patients, GPs and nurses. Aim: To assess criterion-based validity of CHAT against a composite gold standard. Design of study: Conducted according to the Standards for Reporting of Diagnostic Accuracy statement for diagnostic tests. Setting: Primary care practices in Auckland, New Zealand. Method: One thousand consecutive adult patients completed CHAT and a composite gold standard. Sensitivities, specificities, positive and negative predictive values, and likelihood ratios were calculated. Results: Response rates for each item ranged from 79.6% to 99.8%. CHAT was sensitive and specific for almost all issues screened, except exercise and eating disorders. Sensitivity ranged from 96% (95% confidence interval [CI] = 87 to 99%) for major depression to 26% (95% CI = 22 to 30%) for exercise. Specificity ranged from 97% (95% CI = 96 to 98%) for problem gambling and problem drug use to 40% (95% CI = 36 to 45%) for exercise. All had high likelihood ratios (3-30), except exercise and eating disorders. Conclusion: CHAT is a valid and acceptable case-finding tool for most common lifestyle and mental health conditions. PMID:18186993
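The screening statistics reported for CHAT all derive from a 2x2 table of test result against gold standard. A minimal sketch with illustrative counts, not the study's actual data:

```python
def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening statistics from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_positive": sens / (1 - spec),   # how much a positive result raises the odds
        "lr_negative": (1 - sens) / spec,
    }

# Illustrative counts for a 1000-patient sample (not CHAT's data):
stats = diagnostic_stats(tp=48, fp=29, fn=2, tn=921)
print(round(stats["sensitivity"], 2), round(stats["lr_positive"], 1))  # 0.96 31.4
```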

  17. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Design: Statistical analysis of the case under study and of alternative hypothetical cases. Setting: A skilled nursing home affiliated with a local medical school. Participants: The nursing home administrators and the medical director. Intervention: Observational study. Measurements: The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. Results: In the common situation, such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med-pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. Conclusion: In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
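The article's sample-size argument can be checked with an exact binomial tail probability. A sketch, assuming (as an illustration, not the authors' exact model) that a citation is triggered when observed errors exceed 5% of the observations made:

```python
from math import comb

def prob_observed_rate_exceeds(threshold: float, true_rate: float, n: int) -> float:
    """P(observed error count > threshold * n) when errors ~ Binomial(n, true_rate)."""
    cutoff = int(threshold * n)  # citation triggered when errors exceed this count
    return sum(
        comb(n, k) * true_rate**k * (1 - true_rate)**(n - k)
        for k in range(cutoff + 1, n + 1)
    )

# With a true med-pass error rate of 10%, how often does a survey of n
# observations cross the 5% citation threshold? Even 50 observations leave
# a sizeable chance of missing it, illustrating the article's point that
# small samples give unreliable error-rate estimates.
for n in (50, 200):
    print(n, round(prob_observed_rate_exceeds(0.05, 0.10, n), 2))
```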

  18. Testing expert systems

    NASA Technical Reports Server (NTRS)

    Chang, C. L.; Stachowitz, R. A.

    1988-01-01

Software quality is of primary concern in all large-scale expert system development efforts. Building appropriate validation and test tools for ensuring the software reliability of expert systems is therefore required. The Expert Systems Validation Associate (EVA) is a validation system under development at the Lockheed Artificial Intelligence Center. EVA provides a wide range of validation and test tools to check the correctness, consistency, and completeness of an expert system. Testing is a major function of EVA: it means executing an expert system on test cases with the intent of finding errors. In this paper, we describe many different types of testing, such as function-based testing, structure-based testing, and data-based testing. We describe how appropriate test cases may be selected in order to perform good and thorough testing of an expert system.

  19. A hypothesis-driven physical examination learning and assessment procedure for medical students: initial validity evidence.

    PubMed

    Yudkowsky, Rachel; Otaki, Junji; Lowenstein, Tali; Riddle, Janet; Nishigori, Hiroshi; Bordage, Georges

    2009-08-01

    Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis-free screening PE learned by many medical students. A hypothesis-driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges. This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure. Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination. On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only. The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.
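The projection that "twelve cases would provide a reliability of 0.80" is the kind of estimate the Spearman-Brown prophecy formula yields. A sketch, assuming a hypothetical single-case reliability of 0.25 chosen only to be consistent with that figure (the study does not report it):

```python
def spearman_brown(r_single: float, k: float) -> float:
    """Projected reliability of a k-case assessment from single-case reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

def cases_needed(r_single: float, target: float) -> float:
    """Invert Spearman-Brown: assessment length needed to reach a target reliability."""
    return target * (1 - r_single) / (r_single * (1 - target))

# With an assumed single-case reliability of 0.25, twelve cases project to 0.80:
print(round(spearman_brown(0.25, 12), 2))  # 0.8
print(round(cases_needed(0.25, 0.80)))     # 12
```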

  20. Validation of the Hwalek-Sengstock Elder Abuse Screening Test.

    ERIC Educational Resources Information Center

    Neale, Anne Victoria; And Others

    Elder abuse is recognized as an under-detected and under-reported social problem. Difficulties in detecting elder abuse are compounded by the lack of a standardized, psychometrically valid instrument for case finding. The development of the Hwalek-Sengstock Elder Abuse Screening Test (H-S/EAST) followed a larger effort to identify indicators and…

  1. Risk prediction in the community: A systematic review of case-finding instruments that predict adverse healthcare outcomes in community-dwelling older adults.

    PubMed

    O'Caoimh, Rónán; Cornally, Nicola; Weathers, Elizabeth; O'Sullivan, Ronan; Fitzgerald, Carol; Orfila, Francesc; Clarnette, Roger; Paúl, Constança; Molloy, D William

    2015-09-01

Few case-finding instruments are available to community healthcare professionals. This review aims to identify short, valid instruments that detect older community-dwellers' risk of four adverse outcomes: hospitalisation, functional decline, institutionalisation, and death. Data sources included PubMed and the Cochrane Library. Data on outcome measures, patient and instrument characteristics, and trial quality (using the Quality In Prognosis Studies [QUIPS] tool) were double-extracted for derivation-validation studies in community-dwelling older adults (>50 years). Forty-six publications, representing 23 unique instruments, were included. Only five were externally validated. Mean patient age range was 64.2-84.6 years. Most instruments (n=18, 78%) were derived in North America from secondary analysis of survey data. The majority (n=12, 52%) measured more than one outcome, with hospitalisation and the Probability of Repeated Admission score the most studied outcome and instrument, respectively. All instruments incorporated multiple predictors; activities of daily living (n=16, 70%) was included most often. Accuracy varied according to instruments and outcomes: area under the curve of 0.60-0.73 for hospitalisation, 0.63-0.78 for functional decline, 0.70-0.74 for institutionalisation, and 0.56-0.82 for death. The QUIPS tool showed that 5/23 instruments had low potential for bias across all domains. This review highlights the present need to develop short, reliable, valid instruments to case-find older adults at risk in the community. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
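The area-under-the-curve figures quoted above equal the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case (the scaled Mann-Whitney U statistic), which can be computed directly from scores. A minimal sketch with toy data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outranks a random negative;
    ties count half (the Mann-Whitney U statistic, scaled)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: a useful instrument assigns higher scores to those who
# later experience the outcome (here, hospitalisation).
hospitalised = [0.9, 0.7, 0.6, 0.4]
not_hospitalised = [0.8, 0.5, 0.3, 0.2, 0.1]
print(round(roc_auc(hospitalised, not_hospitalised), 2))  # 0.8
```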

  2. Diagnostic performance of major depression disorder case-finding instruments used among mothers of young children in the United States: A systematic review.

    PubMed

    Owora, Arthur H; Carabin, Hélène; Reese, Jessica; Garwe, Tabitha

    2016-09-01

Growing recognition of the interrelated negative outcomes associated with major depression disorder (MDD) among mothers and their children has led to renewed public health interest in the early identification and treatment of maternal MDD. Healthcare providers, however, remain unsure of the validity of existing case-finding instruments. We conducted a systematic review to identify the most valid maternal MDD case-finding instrument used in the United States. We identified articles reporting the sensitivity and specificity of MDD case-finding instruments based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) by systematically searching three electronic bibliographic databases, PubMed, PsycINFO, and EMBASE, from 1994 to 2014. Study eligibility and quality were evaluated using the Standards for the Reporting of Diagnostic Accuracy studies and Quality Assessment of Diagnostic Accuracy Studies guidelines, respectively. Overall, we retrieved 996 unduplicated articles and selected 74 for full-text review. Of these, 14 articles examining 21 different instruments were included in the systematic review. The 10-item Edinburgh Postnatal Depression Scale and the Postpartum Depression Screening Scale had the most stable (lowest variation) and highest diagnostic performance during the antepartum and postpartum periods (sensitivity range: 0.63-0.94 and 0.67-0.95; specificity range: 0.83-0.98 and 0.68-0.97, respectively). Greater variation in diagnostic performance was observed among studies with higher MDD prevalence. Factors that explain greater variation in instrument diagnostic performance in study populations with higher MDD prevalence were not examined. Findings suggest that the diagnostic performance of maternal MDD case-finding instruments is peripartum period-specific. Published by Elsevier B.V.

  3. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost-effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  4. Development and Application of the CAT-RPM Report for Strengths-Based Case Management of At-Risk Youth in Schools

    ERIC Educational Resources Information Center

    Bower, J. M.; Carroll, A.; Ashman, A.

    2015-01-01

    The Contextualised Assessment Tool for Risk and Protection Management (CAT-RPM) has been established as a valid and reliable tool for differentiating groups across age, sex and behaviour and assisting young people to find their strengths [Bower, J., A. Carroll, and A. Ashman. 2014. "The Development and Validation of the Contextualised…

  5. The validity of tooth grinding measures: etiology of pain dysfunction syndrome revisited.

    PubMed

    Marbach, J J; Raphael, K G; Dohrenwend, B P; Lennon, M C

    1990-03-01

The current study explores the proposition that a treating clinician's etiologic model influences patients' reports of tooth grinding, the validity of these measures, and subsequent research findings relying on them. The investigation compares self-reports of tooth grinding and related clinical variables for 151 cases of temporomandibular pain and dysfunction syndrome (TMPDS), treated by a clinician who does not explicitly support the grinding theory of the etiology of TMPDS, and 139 healthy controls. Cases were no more likely than well controls to report ever grinding, but were actually significantly less likely than well controls to report current grinding. They were also significantly more likely to report that a dentist had told them they ground. Findings suggest that studies using self-report, clinician report of tooth grinding, or both are methodologically inadequate for addressing the relationship between tooth grinding and TMPDS.

  6. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  7. Real-time Raman spectroscopy for automatic in vivo skin cancer detection: an independent validation.

    PubMed

    Zhao, Jianhua; Lui, Harvey; Kalia, Sunil; Zeng, Haishan

    2015-11-01

    In a recent study, we demonstrated that real-time Raman spectroscopy could be used for skin cancer diagnosis. The objective of this translational study was to validate those findings through a completely independent clinical test. In total, 645 confirmed cases were included in the analysis: a cohort of 518 cases from the previous study and an independent cohort of 127 new cases. Multivariate statistical analyses, including principal component analysis with general discriminant analysis (PC-GDA) and partial least squares (PLS), were used separately for lesion classification and generated similar results. When the previous cohort (n = 518) was used for training and the new cohort (n = 127) for testing, the area under the receiver operating characteristic curve (ROC AUC) was 0.889 (95% CI 0.834-0.944; PLS); when the two cohorts were combined, the ROC AUC was 0.894 (95% CI 0.870-0.918; PLS), with the narrowest confidence intervals. Both analyses were comparable to the previous findings, where the ROC AUC was 0.896 (95% CI 0.846-0.946; PLS). This independent study validates that real-time Raman spectroscopy can be used for automatic in vivo skin cancer diagnosis with good accuracy.
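The validation design described above (train on the earlier cohort, score the independent cohort, report ROC AUC on the held-out data only) can be sketched generically. Below is a minimal AUC computation via the Mann-Whitney statistic, not the authors' PC-GDA/PLS pipeline; the toy labels and scores are illustrative:

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC computed as the Mann-Whitney statistic: the probability that
    a randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy held-out cohort: fit any classifier on the training cohort,
# score the independent cohort, then evaluate discrimination there alone.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # -> 1.0
```

Evaluating on the independent cohort only, as in the study, guards against the optimistic bias of reporting AUC on the data used for model fitting.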

  8. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
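The paper's core claim, that regression-based incremental-validity tests reject far too often when the controlled construct is measured with error, is easy to reproduce in a toy Monte Carlo. The sketch below is not the authors' simulation; all distributional choices (unit-variance normals, classical measurement error, n = 1000) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sim_rejection_rate(n=1000, reliability=0.6, trials=200, crit=1.96):
    """Fraction of simulated studies in which X looks 'incrementally valid'
    (|t| > crit for its regression coefficient) even though the outcome Y
    depends only on the confounding construct C."""
    rejections = 0
    for _ in range(trials):
        c = rng.normal(size=n)                         # true confound
        y = c + rng.normal(size=n)                     # outcome driven by C alone
        x = c + rng.normal(size=n)                     # X is just another indicator of C
        err_sd = np.sqrt(1.0 / reliability - 1.0)      # reliability = var(C)/var(C_meas)
        c_meas = c + rng.normal(scale=err_sd, size=n)  # unreliable measure of C
        X = np.column_stack([np.ones(n), x, c_meas])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
        rejections += abs(beta[1] / se) > crit
    return rejections / trials

# Near the nominal 5% error rate with a perfectly reliable covariate;
# massively inflated once the covariate is measured with reliability 0.6.
print(sim_rejection_rate(reliability=1.0), sim_rejection_rate(reliability=0.6))
```

Because the unreliable measure of C only partially controls for the confound, X soaks up the residual confound signal, and with a large n that spurious coefficient is almost always "significant", matching the counterintuitive sample-size result reported in the abstract.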

  9. Making transit-oriented development work in low-income Latino neighborhoods : a comparative case study of Boyle Heights, Los Angeles and Logan Heights, San Diego.

    DOT National Transportation Integrated Search

    2016-12-01

    This research project is a continuation of a previous NITC-funded study. The first study compared the MacArthur Park TOD in Los Angeles to the Fruitvale Village TOD in Oakland. The findings from this new study further validate the key findings from...

  10. Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.

    PubMed

    Jensen, P S; Watanabe, H

    1999-02-01

    To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on either one of these two assessment approaches alone, and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.

  11. What Different Kinds of Stratification Can Reveal about the Generalizability of Data-Mined Skill Assessment Models

    ERIC Educational Resources Information Center

    Sao Pedro, Michael A.; Baker, Ryan S. J. d.; Gobert, Janice D.

    2013-01-01

    When validating assessment models built with data mining, generalization is typically tested at the student-level, where models are tested on new students. This approach, though, may fail to find cases where model performance suffers if other aspects of those cases relevant to prediction are not well represented. We explore this here by testing if…

  12. A Pedagogical Trebuchet: A Case Study in Experimental History and History Pedagogy

    ERIC Educational Resources Information Center

    Brice, Lee L.; Catania, Steven

    2012-01-01

    A common problem history teachers face regardless of their field of specialization is how to help students find answers to the most difficult historical questions, those for which the sources are unavailable or inaccessible, and teach them to do so in a methodologically valid manner. This article presents a case study which shows how a project in…

  13. Validation of a computer case definition for sudden cardiac death in opioid users.

    PubMed

    Kawai, Vivian K; Murray, Katherine T; Stein, C Michael; Cooper, William O; Graham, David J; Hall, Kathi; Ray, Wayne A

    2012-08-31

    To facilitate the use of automated databases for studies of sudden cardiac death, we previously developed a computerized case definition that had a positive predictive value between 86% and 88%. However, the definition has not been specifically validated for prescription opioid users, for whom out-of-hospital overdose deaths may be difficult to distinguish from sudden cardiac death. We assembled a cohort of persons 30-74 years of age prescribed propoxyphene or hydrocodone who had no life-threatening non-cardiovascular illness, diagnosed drug abuse, residence in a nursing home in the past year, or hospital stay within the past 30 days. Medical records were sought for a sample of 140 cohort deaths within 30 days of a prescription fill meeting the computer case definition. Of the 140 sampled deaths, 81 were adjudicated; 73 (90%) were sudden cardiac deaths. Two deaths had possible opioid overdose; after removing these two the positive predictive value was 88%. These findings are consistent with our previous validation studies and suggest the computer case definition of sudden cardiac death is a useful tool for pharmacoepidemiologic studies of opioid analgesics.

  14. BrainCheck - a very brief tool to detect incipient cognitive decline: optimized case-finding combining patient- and informant-based data.

    PubMed

    Ehrensperger, Michael M; Taylor, Kirsten I; Berres, Manfred; Foldi, Nancy S; Dellenbach, Myriam; Bopp, Irene; Gold, Gabriel; von Gunten, Armin; Inglin, Daniel; Müri, René; Rüegger, Brigitte; Kressig, Reto W; Monsch, Andreas U

    2014-01-01

    Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. We developed a case-finding tool comprising patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: An independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9, education = 12.0 ± 2.6; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23) and 126 demographically matched, cognitively healthy volunteer participants (age = 75.2 ± 8.8, education = 12.5 ± 2.7; 40% female) participated. All patient and healthy control participants were administered the patient-directed tool, and informants of 113 patient and 70 healthy control participants completed the very short IQCODE. Feasibility study: General practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: A Classification and Regression Tree analysis generated an algorithm to categorize patient-directed data which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached nearly 90% (89.4%; sensitivity = 97.4%, specificity = 81.6%). A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (that is, patients' subjective impairments, cognitive testing, informant information) and resulted in a nearly 90% CCR. Thus, it provides a very efficient and valid tool to aid general practitioners in deciding whether patients with suspected cognitive impairments should be further evaluated or not ('watchful waiting').

  15. The Chinese version of the cognitive, affective, and somatic empathy scale for children: Validation, gender invariance and associated factors.

    PubMed

    Liu, Jianghong; Qiao, Xin; Dong, Fanghong; Raine, Adrian

    2018-01-01

    Empathy is hypothesized to have several components, including affective, cognitive, and somatic contributors. The only validated, self-report measure to date that assesses all three forms of empathy is the Cognitive, Affective, and Somatic Empathy Scale (CASES), but to date no study has reported the psychometric properties of this scale outside of the initial U.S. sample. This study reports the first psychometric analysis of a non-English translation of the CASES. Confirmatory factor analysis was used to assess the factor structure of CASES as well as its associations with callous-unemotional traits in 860 male and female children (mean age 11.54 ± 0.64 years) from the China Jintan Child Cohort Study. Analyses supported a three-factor model of cognitive, affective, and somatic empathy, with satisfactory fit indices consistent with the psychometric properties of the English version of CASES. Construct validity was established by three findings. First, females scored significantly higher in empathy than males. Second, lower scores of empathy were associated with lower IQ. Third, children with lower empathy also showed more callous-unemotional attributes. We established cross-cultural validity for the CASES for the first time. Our Chinese data support the use of this new instrument in non-Western samples, and affirm the utility of this instrument for a comprehensive assessment of empathy in children.

  16. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best-Worst Scaling for Eliciting Preferences in Healthcare.

    PubMed

    Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia

    2018-06-01

    The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.

  17. Scrutinizing a Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    NASA Astrophysics Data System (ADS)

    Talbot, Robert M.

    2017-12-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular instrument under scrutiny was developed in order to determine the effect of a teacher education program on novice science and mathematics teachers' strategic knowledge (SK). The relationship between novice science and mathematics teachers' SK as measured by a survey and their SK as inferred from observations of practice using a widely used observation protocol is the subject of this paper. Moderate correlations between parts of the observation-based construct and the SK construct were observed. However, the main finding of this work is that the context in which the measurement is made (in situ observations vs. ex situ survey) is an essential factor in establishing the validity of the measurement itself.

  18. Meta-analysis of screening and case finding tools for depression in cancer: evidence based recommendations for clinical practice on behalf of the Depression in Cancer Care consensus group.

    PubMed

    Mitchell, Alex J; Meader, Nick; Davies, Evan; Clover, Kerrie; Carter, Gregory L; Loscalzo, Matthew J; Linden, Wolfgang; Grassi, Luigi; Johansen, Christoffer; Carlson, Linda E; Zabora, James

    2012-10-01

    To examine the validity of screening and case-finding tools used in the identification of depression as defined by an ICD-10/DSM-IV criterion standard. We identified 63 studies involving 19 tools (in 33 publications) designed to help clinicians identify depression in cancer settings. We used a standardized rating system. We excluded 11 tools without at least two independent studies, leaving 8 tools for comparison. Across all cancer stages there were 56 diagnostic validity studies (n=10,009). For case-finding, one stem question, two stem questions and the BDI-II all had level 2 evidence (2a, 2b and 2c respectively) and, given their better acceptability, we gave the stem questions a grade B recommendation. For screening, two stem questions had level 1b evidence (with high acceptability) and the BDI-II had level 2c evidence. For every 100 people screened in advanced cancer, the two questions would accurately detect 18 cases, miss only 1, and correctly reassure 74, with 7 falsely identified. For every 100 people screened in non-palliative settings the BDI-II would accurately detect 17 cases, missing 2 and correctly reassure 70, with 11 falsely identified as cases. The main cautions are the reliance on DSM-IV definitions of major depression, the large number of small studies and the paucity of data for many tools in specific settings. Although no single tool could be offered unqualified support, several tools are likely to improve upon unassisted clinical recognition. In clinical practice, all tools should form part of an integrated approach involving further follow-up, clinical assessment and evidence-based therapy. Copyright © 2012 Elsevier B.V. All rights reserved.
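The "per 100 people screened" figures follow from prevalence, sensitivity, and specificity by simple arithmetic. A minimal sketch; the prevalence and accuracy values below are back-calculated from the reported advanced-cancer counts rather than quoted from the paper:

```python
def per_100_screened(prevalence, sensitivity, specificity, n=100):
    """Translate test accuracy into expected counts per n people screened:
    (true positives, missed cases, correctly reassured, falsely identified)."""
    cases = prevalence * n
    non_cases = n - cases
    tp = sensitivity * cases          # accurately detected
    fn = cases - tp                   # missed
    tn = specificity * non_cases      # correctly reassured
    fp = non_cases - tn               # falsely identified
    return round(tp), round(fn), round(tn), round(fp)

# Values reconstructed from the abstract's advanced-cancer example:
# 19 true cases per 100 screened, sensitivity 18/19, specificity 74/81.
print(per_100_screened(prevalence=0.19, sensitivity=18/19, specificity=74/81))
# -> (18, 1, 74, 7)
```

Presenting accuracy as expected counts per 100 screened, as the meta-analysis does, makes the trade-off between missed cases and false alarms concrete for clinicians.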

  19. Empirical evaluation demonstrated importance of validating biomarkers for early detection of cancer in screening settings to limit the number of false-positive findings.

    PubMed

    Chen, Hongda; Knebel, Phillip; Brenner, Hermann

    2016-07-01

    Search for biomarkers for early detection of cancer is a very active area of research, but most studies are done in clinical rather than screening settings. We aimed to empirically evaluate the role of study setting for early detection marker identification and validation. A panel of 92 candidate cancer protein markers was measured in 35 clinically identified colorectal cancer patients and 35 colorectal cancer patients identified at screening colonoscopy. For each case group, we selected 38 controls without colorectal neoplasms at screening colonoscopy. Single-, two- and three-marker combinations discriminating cases and controls were identified in each setting and subsequently validated in the alternative setting. In all scenarios, a higher number of predictive biomarkers were initially detected in the clinical setting, but a substantially lower proportion of identified biomarkers could subsequently be confirmed in the screening setting. Confirmation rates were 50.0%, 84.5%, and 74.2% for one-, two-, and three-marker algorithms identified in the screening setting and were 42.9%, 18.6%, and 25.7% for algorithms identified in the clinical setting. Validation of early detection markers of cancer in a true screening setting is important to limit the number of false-positive findings. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Advanced Risk Reduction Tool (ARRT) Special Case Study Report: Science and Engineering Technical Assessments (SETA) Program

    NASA Technical Reports Server (NTRS)

    Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian

    2000-01-01

    This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings for exploring the correlation between the underlying models of Advanced Risk Reduction Tool (ARRT) relative to how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.

  1. Positional and positioning down-beating nystagmus without central nervous system findings.

    PubMed

    Ogawa, Yasuo; Suzuki, Mamoru; Otsuka, Koji; Shimizu, Shigetaka; Inagaki, Taro; Hayashi, Mami; Hagiwara, Akira; Kitajima, Naoharu

    2009-12-01

    We report the clinical features of 4 cases with positional or positioning down-beating nystagmus (DBN) in a head-hanging or supine position without any obvious central nervous system (CNS) disorder. The 4 cases had several findings in common. There were no abnormal findings on neurological tests or brain MRI. They did not have gaze nystagmus. Their nystagmus was observed only in a supine or head-hanging position; it was never observed upon returning to a sitting position and never reversed. The nystagmus had little or no torsional component, had latency, and tended to decrease with time. Positional DBN (p-DBN) is generally considered indicative of a central nervous system disorder. Recently, however, some reports have suggested that canalithiasis of the anterior semicircular canal (ASC) causes p-DBN, and that patients who have p-DBN without obvious CNS dysfunction are diagnosed with ASC benign paroxysmal positional vertigo (BPPV). There are some doubts as to the validity of diagnosing ASC-BPPV in a case of p-DBN without CNS findings. It is hard to determine the cause of p-DBN in these cases.

  2. Analysis of 2000 cases treated with gamma knife surgery: validating eligibility criteria for a prospective multi-institutional study of stereotactic radiosurgery alone for treatment of patients with 1-10 brain metastases (JLGK0901) in Japan

    PubMed Central

    Higuchi, Yoshinori; Nagano, Osamu; Sato, Yasunori; Yamamoto, Masaaki; Ono, Junichi; Saeki, Naokatsu; Miyakawa, Akifumi; Hirai, Tatsuo

    2012-01-01

    Objective: The Japan Leksell Gamma Knife (JLGK) Society has conducted a prospective multi-institutional study (JLGK0901, UMIN000001812) for selected patients in order to prove the effectiveness of stereotactic radiosurgery (SRS) alone using the gamma knife (GK) for 1-10 brain lesions. Herein, we verify the validity of 5 major patient selection criteria for the JLGK0901 trial. Materials and Methods: Between 1998 and 2010, 2246 consecutive cases with 10352 brain metastases treated with GK were analyzed to determine the validity of the following 5 major JLGK0901 criteria: 1) 1-10 brain lesions, 2) less than 10 cm3 volume of the largest tumor, 3) no more than 15 cm3 total tumor volume, 4) no cerebrospinal fluid (CSF) dissemination, 5) Karnofsky performance status (KPS) score ≥70. Results: For cases with >10 brain metastases, salvage treatments for new lesions were needed more frequently. The tumor control rate for lesions larger than 10 cm3 was significantly lower than that of tumors <10 cm3. Overall, neurological, and qualitative survival (OS, NS, QS) of cases with >15 cm3 total tumor volume or positive magnetic resonance imaging findings of CSF dissemination were significantly poorer. Outcomes in cases with KPS <70 were significantly poorer in terms of OS. Conclusion: Our retrospective results of 2246 GK-treated cases verified the validity of the 5 major JLGK0901 criteria. The inclusion criteria for the JLGK0901 study apparently represent good indications for SRS. PMID:29296339

  3. High laboratory cost predicted per tuberculosis case diagnosed with increased case finding without a triage strategy.

    PubMed

    Dunbar, R; Naidoo, P; Beyers, N; Langley, I

    2017-09-01

    Cape Town, South Africa. To model the effects of increased case finding and triage strategies on laboratory costs per tuberculosis (TB) case diagnosed. We used a validated operational model and published laboratory cost data. We modelled the effect of varying the proportion with TB among presumptive cases and Xpert cartridge price reductions on cost per TB case and per additional TB case diagnosed in the Xpert-based vs. smear/culture-based algorithms. In our current scenario (18.3% with TB among presumptive cases), the proportion of cases diagnosed increased by 8.7% (16.7% vs. 15.0%), and the cost per case diagnosed increased by 142% (US$121 vs. US$50). The cost per additional case diagnosed was US$986. This would increase to US$1619 if the proportion with TB among presumptive cases was 10.6%. At 25.9-30.8% of TB prevalence among presumptive cases and a 50% reduction in Xpert cartridge price, the cost per TB case diagnosed would range from US$50 to US$59 (comparable to the US$48.77 found in routine practice with smear/culture). The operational model illustrates the effect of increased case finding on laboratory costs per TB case diagnosed. Unless triage strategies are identified, the approach will not be sustainable, even if Xpert cartridge prices are reduced.
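The headline quantity here, cost per additional TB case diagnosed, is an incremental cost-effectiveness ratio. A back-of-envelope sketch; the per-1000-presumptive-case inputs below are hypothetical reconstructions from the abstract's headline figures, and the published operational model includes cost components this toy omits, so it will not reproduce the US$986 exactly:

```python
def incremental_cost_per_case(cost_a, cases_a, cost_b, cases_b):
    """Extra laboratory spend per additional case diagnosed when switching
    from algorithm A (e.g. smear/culture) to algorithm B (e.g. Xpert)."""
    return (cost_b - cost_a) / (cases_b - cases_a)

# Hypothetical totals per 1000 presumptive cases, built from the abstract's
# figures: 150 vs. 167 cases diagnosed, US$50 vs. US$121 per case diagnosed.
smear_cost, smear_cases = 150 * 50, 150
xpert_cost, xpert_cases = 167 * 121, 167
print(round(incremental_cost_per_case(smear_cost, smear_cases,
                                      xpert_cost, xpert_cases)))  # -> 747
```

The ratio blows up as the denominator (additional cases found) shrinks, which is why the abstract reports a higher cost per additional case when the proportion with TB among presumptive cases falls from 18.3% to 10.6%.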

  4. Laboratory compliance with the American Society of Clinical Oncology/College of American Pathologists human epidermal growth factor receptor 2 testing guidelines: a 3-year comparison of validation procedures.

    PubMed

    Dyhdalo, Kathryn S; Fitzgibbons, Patrick L; Goldsmith, Jeffery D; Souers, Rhona J; Nakhleh, Raouf E

    2014-07-01

    The American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP) published guidelines in 2007 regarding testing accuracy, interpretation, and reporting of results for HER2 studies. A 2008 survey identified areas needing improved compliance. To reassess laboratory response to those guidelines following a full accreditation cycle for an updated snapshot of laboratory practices regarding ASCO/CAP guidelines. In 2011, a survey was distributed with the HER2 immunohistochemistry (IHC) proficiency testing program identical to the 2008 survey. Of the 1150 surveys sent, 977 (85.0%) were returned, comparable to the original survey response in 2008 (757 of 907; 83.5%). New participants submitted 124 of 977 (12.7%) surveys. The median laboratory accession rate was 14,788 cases with 211 HER2 tests performed annually. Testing was validated with fluorescence in situ hybridization in 49.1% (443 of 902) of the laboratories; 26.3% (224 of 853) of the laboratories used another IHC assay. The median number of cases to validate fluorescence in situ hybridization (n = 40) and IHC (n = 27) was similar to those in 2008. Ninety-five percent concordance with fluorescence in situ hybridization was achieved by 76.5% (254 of 332) of laboratories for IHC(-) findings and 70.4% (233 of 331) for IHC(+) cases. Ninety-five percent concordance with another IHC assay was achieved by 71.1% (118 of 168) of the laboratories for negative findings and 69.6% (112 of 161) of the laboratories for positive cases. The proportion of laboratories interpreting HER2 IHC using ASCO/CAP guidelines (86.6% [798 of 921] in 2011; 83.8% [605 of 722] in 2008) remains similar. Although fixation time improvements have been made, assay validation deficiencies still exist. The results of this survey were shared within the CAP, including the Laboratory Accreditation Program and the ASCO/CAP panel revising the HER2 guidelines published in October 2013. 
The Laboratory Accreditation Program checklist was changed to strengthen HER2 validation practices.

  5. Chronic obstructive pulmonary disease case finding by community pharmacists: a potential cost-effective public health intervention.

    PubMed

    Wright, David; Twigg, Michael; Thornley, Tracey

    2015-02-01

    This study aims to pilot a community pharmacy chronic obstructive pulmonary disease (COPD) case finding service in England, estimating costs and effects. Patients potentially at risk of COPD were screened with validated tools. Smoking cessation was offered to all smokers identified as potentially having undiagnosed COPD. Cost and effects of the service were estimated. Twenty-one community pharmacies screened 238 patients over 9 months. One hundred thirty-five patients were identified with potentially undiagnosed COPD; 88 were smokers. Smoking cessation initiation provided a project gain of 38.62 life years, 19.92 quality-adjusted life years and a cost saving of £392.67 per patient screened. COPD case finding by community pharmacists potentially provides cost-savings and improves quality of life. © 2014 The Authors. International Journal of Pharmacy Practice published by John Wiley & Sons Ltd on behalf of Royal Pharmaceutical Society.

  6. Real-world use of the risk-need-responsivity model and the level of service/case management inventory with community-supervised offenders.

    PubMed

    Dyck, Heather L; Campbell, Mary Ann; Wershler, Julie L

    2018-06-01

    The risk-need-responsivity model (RNR; Bonta & Andrews, 2017) has become a leading approach for effective offender case management, but field tests of this model are still required. The present study first assessed the predictive validity of the RNR-informed Level of Service/Case Management Inventory (LS/CMI; Andrews, Bonta, & Wormith, 2004) with a sample of Atlantic Canadian male and female community-supervised provincial offenders (N = 136). Next, the case management plans prepared from these LS/CMI results were analyzed for adherence to the principles of risk, need, and responsivity. As expected, the LS/CMI was a strong predictor of general recidivism for both males (area under the curve = .75, 95% confidence interval [.66, .85]), and especially females (area under the curve = .94, 95% confidence interval [.84, 1.00]), over an average 3.42-year follow-up period. The LS/CMI was predictive of time to recidivism, with lower risk cases taking longer to reoffend than higher risk cases. Despite the robust predictive validity of the LS/CMI, case management plans developed by probation officers generally reflected poor adherence to the RNR principles. These findings highlight the need for better training on how to transfer risk appraisal information from valid risk tools to case plans to better meet the best-practice principles of risk, need, and responsivity for criminal behavior risk reduction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Analytic Validation of Immunohistochemical Assays: A Comparison of Laboratory Practices Before and After Introduction of an Evidence-Based Guideline.

    PubMed

    Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. To assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline, a survey on current immunohistochemistry assay validation practices and on the awareness and adoption of the recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent, and 79.9% (550 of 688) of those had already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline had a positive impact on laboratory validation practices; some or all of the recommendations have been adopted by nearly 80% of respondents.

  8. Triple Photoionization of Neon and Argon Near Threshold

    NASA Astrophysics Data System (ADS)

    Bluett, Jaques B.; Lukić, Dragan; Sellin, Ivan A.; Whitfield, Scott B.; Wehlitz, Ralf

    2003-05-01

The threshold behavior of the triple ionization cross-section of neon and argon was investigated using monochromatized synchrotron radiation and ion time-of-flight spectrometry. The Ne^3+ and Ar^3+ cross-sections are found to follow the Wannier power law (G.H. Wannier, Phys. Rev. 90, 817 (1953)), consistent with a Wannier exponent of 2.162 predicted by theory. This is also consistent with the findings of Samson and Angel (J.A.R. Samson and G.C. Angel, Phys. Lett. 61, 1584 (1988)) for the case of Ne. In the case of argon we find a much shorter range of validity than for neon.
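For context, the Wannier threshold law cited above states that the n-fold photoionization cross-section rises from threshold as a power law; with the exponent quoted in the abstract for triple ionization it reads:

```latex
% Wannier threshold law for n-fold photoionization near threshold:
% the cross-section rises from the threshold energy E_th as a power law.
\sigma^{n+}(E) \;\propto\; \left(E - E_{\mathrm{th}}\right)^{\alpha},
\qquad \alpha \approx 2.162 \quad \text{for triple ionization } (n = 3)
```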

  9. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  10. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    PubMed

    Bigler, Erin D

    2014-01-01

To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, at or near chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator of malingering. However, as pointed out in this review, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why the term performance validity testing (PVT) may be preferable to SVT are reviewed. Advances in neuroimaging techniques may be key to better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigid interpretation of established cut-scores. A better understanding of how certain neurological, neuropsychiatric and/or test conditions may affect SVT performance is needed.

  11. Computational modeling in the optimization of corrosion control to reduce lead in drinking water

    EPA Science Inventory

    An international “proof-of-concept” research project (UK, US, CA) will present its findings during this presentation. An established computational modeling system developed in the UK is being calibrated and validated in U.S. and Canadian case studies. It predicts LCR survey resul...

  12. [Psychosomatics in rheumatology].

    PubMed

    Eich, W; Blumenstiel, K; Lensche, H; Fiehn, C; Bieber, C

    2004-04-01

    Psychosocial factors influence the course and the outcome of chronic somatic diseases. This is also valid for rheumatic diseases like rheumatoid arthritis, spondyloarthropathies, systemic collagen vascular diseases, and fibromyalgia syndrome. The article summarises the evidence-based findings and it illustrates possibilities of psychosomatic treatment in rheumatic diseases by means of three case reports.

  13. An evaluation of case completeness for New Zealand Coronial case files held on the Australasian National Coronial Information System (NCIS).

    PubMed

    Lilley, Rebbecca; Davie, Gabrielle; Wilson, Suzanne

    2016-10-01

Large administrative databases provide powerful opportunities for examining the epidemiology of injury. The National Coronial Information System (NCIS) contains Coronial data from Australia and New Zealand (NZ); however, only closed cases are stored for NZ. This paper examines the completeness of NZ data within the NCIS and its impact upon the validity and utility of this database. A retrospective review of the capture of NZ cases of quad-related fatalities held in the NCIS was undertaken by identifying outstanding Coronial cases held on the NZ Coronial Management System (primary source of NZ Coronial data). NZ data held on the NCIS database were incomplete due to the non-capture of closed cases and the unavailability of open cases. Improvements to the information provided on the NCIS about the completeness of NZ data are needed to improve the validity of NCIS-derived findings and the overall utility of the NCIS for research.

  14. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit large differences. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
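As a minimal illustration of the second Wald test mentioned above (equality of success probabilities in case/control groups), here is a standard-library sketch; the counts are hypothetical, not from the cervical cancer data set:

```python
import math

def wald_test_two_proportions(x1, n1, x2, n2):
    """Wald z-statistic for H0: p1 = p2, using unpooled standard errors."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Hypothetical counts: 50/100 "successes" among cases, 30/100 among controls.
z = wald_test_two_proportions(50, 100, 30, 100)
print(round(z, 3))  # → 2.949
```

Under H0 the statistic is compared against the standard normal distribution; here z ≈ 2.95 would reject equality at conventional levels.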

  15. Description and validation of a new automated surveillance system for Clostridium difficile in Denmark.

    PubMed

    Chaine, M; Gubbels, S; Voldstedlund, M; Kristensen, B; Nielsen, J; Andersen, L P; Ellermann-Eriksen, S; Engberg, J; Holm, A; Olesen, B; Schønheyder, H C; Østergaard, C; Ethelberg, S; Mølbak, K

    2017-09-01

    The surveillance of Clostridium difficile (CD) in Denmark consists of laboratory based data from Departments of Clinical Microbiology (DCMs) sent to the National Registry of Enteric Pathogens (NREP). We validated a new surveillance system for CD based on the Danish Microbiology Database (MiBa). MiBa automatically collects microbiological test results from all Danish DCMs. We built an algorithm to identify positive test results for CD recorded in MiBa. A CD case was defined as a person with a positive culture for CD or PCR detection of toxin A and/or B and/or binary toxin. We compared CD cases identified through the MiBa-based surveillance with those reported to NREP and locally in five DCMs representing different Danish regions. During 2010-2014, NREP reported 13 896 CD cases, and the MiBa-based surveillance 21 252 CD cases. There was a 99·9% concordance between the local datasets and the MiBa-based surveillance. Surveillance based on MiBa was superior to the current surveillance system, and the findings show that the number of CD cases in Denmark hitherto has been under-reported. There were only minor differences between local data and the MiBa-based surveillance, showing the completeness and validity of CD data in MiBa. This nationwide electronic system can greatly strengthen surveillance and research in various applications.
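The case definition above (a positive culture for CD, or PCR detection of toxin A, toxin B, or binary toxin) can be sketched as a simple record filter; the field names below are hypothetical, not the actual MiBa schema:

```python
# Sketch of a C. difficile case-finding filter over laboratory records.
# The record fields ("culture", "pcr_toxins") are assumptions for illustration.
def is_cd_case(record):
    """A case: positive culture for CD, or PCR-detected toxin A, B, or binary."""
    positive_culture = record.get("culture") == "CD_positive"
    positive_pcr = any(t in record.get("pcr_toxins", []) for t in ("A", "B", "binary"))
    return positive_culture or positive_pcr

records = [
    {"person": 1, "culture": "CD_positive", "pcr_toxins": []},
    {"person": 2, "culture": "negative", "pcr_toxins": ["B"]},
    {"person": 3, "culture": "negative", "pcr_toxins": []},
]
cases = [r["person"] for r in records if is_cd_case(r)]
print(cases)  # → [1, 2]
```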

  16. Financial decision-making abilities and financial exploitation in older African Americans: Preliminary validity evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS).

    PubMed

    Lichtenberg, Peter A; Ficker, Lisa J; Rahman-Filipiak, Annalise

    2016-01-01

    This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool.

  17. The making of nursing practice law in Lebanon: a policy analysis case study.

    PubMed

    El-Jardali, Fadi; Hammoud, Rawan; Younan, Lina; Nuwayhid, Helen Samaha; Abdallah, Nadine; Alameddine, Mohammad; Bou-Karroum, Lama; Salman, Lana

    2014-09-05

    Evidence-informed decisions can strengthen health systems, improve health, and reduce health inequities. Despite the Beijing, Montreux, and Bamako calls for action, literature shows that research evidence is underemployed in policymaking, especially in the East Mediterranean region (EMR). Selecting the draft nursing practice law as a case study, this policy analysis exercise aims at generating in-depth insights on the public policymaking process, identifying the factors that influence policymaking and assessing to what extent evidence is used in this process. This study utilized a qualitative research design using a case study approach and was conducted in two phases: data collection and analysis, and validation. In the first phase, data was collected through key informant interviews that covered 17 stakeholders. In the second phase, a panel discussion was organized to validate the findings, identify any gaps, and gain insights and feedback of the panelists. Thematic analysis was conducted and guided by the Walt & Gilson's "Policy Triangle Framework" as themes were categorized into content, actors, process, and context. Findings shed light on the complex nature of health policymaking and the unstructured approach of decision making. This study uncovered the barriers that hindered the progress of the draft nursing law and the main barriers against the use of evidence in policymaking. Findings also uncovered the risk involved in the use of international recommendations without the involvement of stakeholders and without accounting for contextual factors and implementation barriers. Findings were interpreted within the context of the Lebanese political environment and the power play between stakeholders, taking into account equity considerations. This policy analysis exercise presents findings that are helpful for policymakers and all other stakeholders and can feed into revising the draft nursing law to reach an effective alternative that is feasible in Lebanon. 
Our findings are relevant in the local and regional contexts, as policymakers and other stakeholders can benefit from this experience when drafting laws, and in the global context, as international organizations can consider this case study when developing global guidance and recommendations.

  18. Serum proteomic profiling of major depressive disorder

    PubMed Central

    Bot, M; Chan, M K; Jansen, R; Lamers, F; Vogelzangs, N; Steiner, J; Leweke, F M; Rothermundt, M; Cooper, J; Bahn, S; Penninx, B W J H

    2015-01-01

Much has still to be learned about the molecular mechanisms of depression. This study aims to gain insight into contributing mechanisms by identifying serum proteins related to major depressive disorder (MDD) in a large psychiatric cohort study. Our sample consisted of 1589 participants of the Netherlands Study of Depression and Anxiety, comprising 687 individuals with current MDD (cMDD), 482 individuals with remitted MDD (rMDD) and 420 controls. We studied the relationship between MDD status and the levels of 171 serum proteins detected on a multi-analyte profiling platform using adjusted linear regression models. A pooled analysis of two independent validation cohorts (totaling 78 MDD cases and 156 controls) was carried out to validate our top markers. Twenty-eight analytes differed significantly between cMDD cases and controls (P<0.05), whereas 10 partly overlapping markers differed significantly between rMDD cases and controls. Antidepressant medication use and comorbid anxiety status did not substantially affect these findings. Sixteen of the cMDD-related markers had been assayed in the pooled validation cohorts, of which seven were associated with MDD. The analytes prominently associated with cMDD related to diverse cell communication and signal transduction processes (pancreatic polypeptide, macrophage migration inhibitory factor, ENRAGE, interleukin-1 receptor antagonist and tenascin-C), immune response (growth-regulated alpha protein) and protein metabolism (von Willebrand factor). Several proteins were implicated in depression. Changes were more prominent in cMDD, suggesting that molecular alterations in serum are associated with acute depression symptomatology. These findings may help to establish serum-based biomarkers of depression and could improve our understanding of its pathophysiology. PMID:26171980

  19. 3-Tesla MRI-assisted detection of compression points in ulnar neuropathy at the elbow in correlation with intraoperative findings.

    PubMed

    Hold, Alina; Mayr-Riedler, Michael S; Rath, Thomas; Pona, Igor; Nierlich, Patrick; Breitenseher, Julia; Kasprian, Gregor

    2018-03-06

Releasing the ulnar nerve from all entrapments is the primary objective of every surgical method in ulnar neuropathy at the elbow (UNE). The aim of this retrospective diagnostic study was to validate preoperative 3-Tesla MRI results by comparing the MRI findings with the intraoperative aspects during endoscopic-assisted or open surgery. Preoperative MRI studies were assessed by a radiologist not informed about the intraoperative findings, who was asked to identify the exact site of nerve compression. The localizations of compression were then correlated with the intraoperative findings obtained from the operative records. Percent agreement and Cohen's kappa (κ) values were calculated. From a total of 41 elbows, there was a complete agreement in 27 (65.8%) cases and a partial agreement in another 12 (29.3%) cases. Cohen's kappa showed fair-to-moderate agreement. High-resolution MRI cannot replace thorough intraoperative visualization of the ulnar nerve and its surrounding structures but may provide valuable information in ambiguous cases or relapses.

  20. Sensitivity of self-reported opioid use in case-control studies: Healthy individuals versus hospitalized patients.

    PubMed

    Rashidian, Hamideh; Hadji, Maryam; Marzban, Maryam; Gholipour, Mahin; Rahimi-Movaghar, Afarin; Kamangar, Farin; Malekzadeh, Reza; Weiderpass, Elisabete; Rezaianzadeh, Abbas; Moradi, Abdolvahab; Babhadi-Ashar, Nima; Ghiasvand, Reza; Khavari-Daneshvar, Hossein; Haghdoost, Ali Akbar; Zendehdel, Kazem

    2017-01-01

Several case-control studies have shown associations between the risk of different cancers and self-reported opium use. Inquiring into relatively sensitive issues, such as the history of drug use, is usually prone to information bias. However, in order to justify the findings of these types of studies, we have to quantify the level of such a negative bias. In the current study, we aimed to evaluate the sensitivity of self-reported opioid use and suggest suitable types of control groups for case-control studies on opioid use and the risk of cancer. In order to compare the validity of the self-reported opioid use, we cross-validated the responses of two groups of subjects, 1) 178 hospitalized patients and 2) 186 healthy individuals, with the results of their tests using urine rapid drug screen (URDS) and thin layer chromatography (TLC). The questions were asked by trained interviewers to maximize the validity of responses; healthy individuals were selected from among the companions of patients in hospitals. Self-reported regular opioid use was 36.5% in hospitalized patients and 19.3% in healthy individuals (p-value < 0.001). The reported frequencies of opioid use in the past 72 hours were 21.4% and 11.8% in hospitalized patients and healthy individuals, respectively. Comparing their responses with the results of urine tests showed a sensitivity of 77% and 69% for self-reports among hospitalized patients and healthy individuals, respectively (p-value = 0.4). After correcting for these sensitivities, the frequency of regular opioid use was 47% in hospitalized patients and 28% in healthy individuals. Regular opioid use among hospitalized patients was significantly higher than in healthy individuals (p-value < 0.001). Our findings showed that the level of under-reporting of opioid use in hospitalized patients and healthy individuals was considerable but comparable.
In addition, the frequency of regular opioid use among hospitalized patients was significantly higher than that in the general population. Altogether, it seems that, without corrections for these differences and biases, the results of many studies, including case-control studies on opioid use, might be substantially distorted.
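The sensitivity correction described in this abstract amounts to dividing each self-reported frequency by the corresponding estimated sensitivity; a minimal sketch reproducing the reported corrected figures:

```python
def correct_for_sensitivity(reported_pct, sensitivity):
    """Adjust a self-reported frequency upward for imperfect sensitivity."""
    return reported_pct / sensitivity

# Figures from the abstract: 36.5% reported at 77% sensitivity (hospitalized),
# 19.3% reported at 69% sensitivity (healthy companions).
hospitalized = correct_for_sensitivity(36.5, 0.77)  # ≈ 47.4, reported as 47%
healthy = correct_for_sensitivity(19.3, 0.69)       # ≈ 28.0, reported as 28%
print(round(hospitalized), round(healthy))  # → 47 28
```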

  1. Sensitivity of self-reported opioid use in case-control studies: Healthy individuals versus hospitalized patients

    PubMed Central

    Rashidian, Hamideh; Hadji, Maryam; Marzban, Maryam; Gholipour, Mahin; Rahimi-Movaghar, Afarin; Kamangar, Farin; Malekzadeh, Reza; Weiderpass, Elisabete; Rezaianzadeh, Abbas; Moradi, Abdolvahab; Babhadi-Ashar, Nima; Ghiasvand, Reza; Khavari-Daneshvar, Hossein; Haghdoost, Ali Akbar; Zendehdel, Kazem

    2017-01-01

Background Several case-control studies have shown associations between the risk of different cancers and self-reported opium use. Inquiring into relatively sensitive issues, such as the history of drug use, is usually prone to information bias. However, in order to justify the findings of these types of studies, we have to quantify the level of such a negative bias. In the current study, we aimed to evaluate the sensitivity of self-reported opioid use and suggest suitable types of control groups for case-control studies on opioid use and the risk of cancer. Methods In order to compare the validity of the self-reported opioid use, we cross-validated the responses of two groups of subjects, 1) 178 hospitalized patients and 2) 186 healthy individuals, with the results of their tests using urine rapid drug screen (URDS) and thin layer chromatography (TLC). The questions were asked by trained interviewers to maximize the validity of responses; healthy individuals were selected from among the companions of patients in hospitals. Results Self-reported regular opioid use was 36.5% in hospitalized patients and 19.3% in healthy individuals (p-value < 0.001). The reported frequencies of opioid use in the past 72 hours were 21.4% and 11.8% in hospitalized patients and healthy individuals, respectively. Comparing their responses with the results of urine tests showed a sensitivity of 77% and 69% for self-reports among hospitalized patients and healthy individuals, respectively (p-value = 0.4). After correcting for these sensitivities, the frequency of regular opioid use was 47% in hospitalized patients and 28% in healthy individuals. Regular opioid use among hospitalized patients was significantly higher than in healthy individuals (p-value < 0.001). Conclusion Our findings showed that the level of under-reporting of opioid use in hospitalized patients and healthy individuals was considerable but comparable.
In addition, the frequency of regular opioid use among hospitalized patients was significantly higher than that in the general population. Altogether, it seems that, without corrections for these differences and biases, the results of many studies, including case-control studies on opioid use, might be substantially distorted. PMID:28854228

  2. Building the evidence on simulation validity: comparison of anesthesiologists' communication patterns in real and simulated cases.

    PubMed

    Weller, Jennifer; Henderson, Robert; Webster, Craig S; Shulruf, Boaz; Torrie, Jane; Davies, Elaine; Henderson, Kaylene; Frampton, Chris; Merry, Alan F

    2014-01-01

    Effective teamwork is important for patient safety, and verbal communication underpins many dimensions of teamwork. The validity of the simulated environment would be supported if it elicited similar verbal communications to the real setting. The authors hypothesized that anesthesiologists would exhibit similar verbal communication patterns in routine operating room (OR) cases and routine simulated cases. The authors further hypothesized that anesthesiologists would exhibit different communication patterns in routine cases (real or simulated) and simulated cases involving a crisis. Key communications relevant to teamwork were coded from video recordings of anesthesiologists in the OR, routine simulation and crisis simulation and percentages were compared. The authors recorded comparable videos of 20 anesthesiologists in the two simulations, and 17 of these anesthesiologists in the OR, generating 400 coded events in the OR, 683 in the routine simulation, and 1,419 in the crisis simulation. The authors found no significant differences in communication patterns in the OR and the routine simulations. The authors did find significant differences in communication patterns between the crisis simulation and both the OR and the routine simulations. Participants rated team communication as realistic and considered their communications occurred with a similar frequency in the simulations as in comparable cases in the OR. The similarity of teamwork-related communications elicited from anesthesiologists in simulated cases and the real setting lends support for the ecological validity of the simulation environment and its value in teamwork training. Different communication patterns and frequencies under the challenge of a crisis support the use of simulation to assess crisis management skills.

  3. The Reliability and Validity of the Thoracolumbar Injury Classification System in Pediatric Spine Trauma.

    PubMed

    Savage, Jason W; Moore, Timothy A; Arnold, Paul M; Thakur, Nikhil; Hsu, Wellington K; Patel, Alpesh A; McCarthy, Kathryn; Schroeder, Gregory D; Vaccaro, Alexander R; Dimar, John R; Anderson, Paul A

    2015-09-15

    The thoracolumbar injury classification system (TLICS) was evaluated in 20 consecutive pediatric spine trauma cases. The purpose of this study was to determine the reliability and validity of the TLICS in pediatric spine trauma. The TLICS was developed to improve the categorization and management of thoracolumbar trauma. TLICS has been shown to have good reliability and validity in the adult population. The clinical and radiographical findings of 20 pediatric thoracolumbar fractures were prospectively presented to 20 surgeons with disparate levels of training and experience with spinal trauma. These injuries were consecutively scored using the TLICS. Cohen unweighted κ coefficients and Spearman rank order correlation values were calculated for the key parameters (injury morphology, status of posterior ligamentous complex, neurological status, TLICS total score, and proposed management) to assess the inter-rater reliabilities. Five surgeons scored the same cases 3 months later to assess the intra-rater reliability. The actual management of each case was then compared with the treatment recommended by the TLICS algorithm to assess validity. The inter-rater κ statistics of all subgroups (injury morphology, status of the posterior ligamentous complex, neurological status, TLICS total score, and proposed treatment) were within the range of moderate to substantial reproducibility (0.524-0.958). All subgroups had excellent intra-rater reliability (0.748-1.000). The various indices for validity were calculated (80.3% correct, 0.836 sensitivity, 0.785 specificity, 0.676 positive predictive value, 0.899 negative predictive value). Overall, TLICS demonstrated good validity. The TLICS has good reliability and validity when used in the pediatric population. 
The inter-rater reliability of predicting management and indices for validity are lower than those in adults with thoracolumbar fractures, which is likely due to differences in the way children are treated for certain types of injuries. TLICS can be used to reliably categorize thoracolumbar injuries in the pediatric population; however, modifications may be needed to better guide treatment in this specific patient population.
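The validity indices quoted above (percent correct, sensitivity, specificity, positive and negative predictive value) are standard 2x2-table quantities; a minimal sketch with hypothetical counts, since the abstract does not report the underlying table:

```python
def validity_indices(tp, fp, fn, tn):
    """Standard 2x2 validity indices from true/false positives and negatives."""
    total = tp + fp + fn + tn
    return {
        "percent_correct": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts for illustration, not the study's actual data:
r = validity_indices(tp=80, fp=20, fn=10, tn=90)
print({k: round(v, 3) for k, v in r.items()})
```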

  4. Finding the Balance: Jan Kagarice, a Case Study of a Master Trombone Teacher

    ERIC Educational Resources Information Center

    Marston, Karen Lynn

    2011-01-01

    The purpose of this study was to investigate and document the pedagogical techniques practiced by Jan Kagarice, Adjunct Professor of Trombone at the University of North Texas. Given that the study of master teachers has been identified as a valid method for defining effective teaching (Duke & Simmons, 2006), the intended outcome was to…

  5. A Professional Dilemma: Following the Principle or the Principal?

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    2012-01-01

    This article reports on a case that resulted in a published court decision which illustrates a dilemma at the intersection of the No Child Left Behind Act (NCLB) and the Individuals with Disabilities Education Act (IDEA). On first impression, the finding that teachers were operating based on professional principle seems to validate their actions.…

  6. Evaluating physician performance at individualizing care: a pilot study tracking contextual errors in medical decision making.

    PubMed

    Weiner, Saul J; Schwartz, Alan; Yudkowsky, Rachel; Schiff, Gordon D; Weaver, Frances M; Goldberg, Julie; Weiss, Kevin B

    2007-01-01

Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence and the ability to individualize or, more specifically, to contextualize care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and compare it to their performance at planning biomedically appropriate care. First, the authors drafted 3 cases, each with 4 variations, 3 of which are embedded with biomedical and/or contextual information that is essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were then presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. The case validation process was successful in that, in the final iteration, the physicians who received the contextual variant of cases proposed an alternate plan of care to those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.

  7. Generally objective measurement of human temperature and reading ability: some corollaries.

    PubMed

    Stenner, A Jackson; Stone, Mark

    2010-01-01

    We argue that a goal of measurement is general objectivity: point estimates of a person's measure (height, temperature, and reader ability) should be independent of the instrument and independent of the sample in which the person happens to find herself. In contrast, Rasch's concept of specific objectivity requires only differences (i.e., comparisons) between person measures to be independent of the instrument. We present a canonical case in which there is no overlap between instruments and persons: each person is measured by a unique instrument. We then show what is required to estimate measures in this degenerate case. The canonical case encourages a simplification and reconceptualization of validity and reliability. Not surprisingly, this reconceptualization looks a lot like the way physicists and chemometricians think about validity and measurement error. We animate this presentation with a technology that blurs the distinction between instruction, assessment, and generally objective measurement of reader ability. We encourage adaptation of this model to health outcomes measurement.

  8. Validity and reliability of an instrument for assessing case analyses in bioengineering ethics education.

    PubMed

    Goldin, Ilya M; Pinkus, Rosa Lynn; Ashley, Kevin

    2015-06-01

    Assessment in ethics education faces a challenge. From the perspectives of teachers, students, and third-party evaluators like the Accreditation Board for Engineering and Technology and the National Institutes of Health, assessment of student performance is essential. Because of the complexity of ethical case analysis, however, it is difficult to formulate assessment criteria, and to recognize when students fulfill them. Improvement in students' moral reasoning skills can serve as the focus of assessment. In previous work, Rosa Lynn Pinkus and Claire Gloeckner developed a novel instrument for assessing moral reasoning skills in bioengineering ethics. In this paper, we compare that approach to existing assessment techniques, and evaluate its validity and reliability. We find that it is sensitive to knowledge gain and that independent coders agree on how to apply it.

  9. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  10. Integrating Validity Theory with Use of Measurement Instruments in Clinical Settings

    PubMed Central

    Kelly, P Adam; O'Malley, Kimberly J; Kallen, Michael A; Ford, Marvella E

    2005-01-01

    Objective To present validity concepts in a conceptual framework useful for research in clinical settings. Principal Findings We present a three-level decision rubric for validating measurement instruments, to guide health services researchers step-by-step in gathering and evaluating validity evidence within their specific situation. We address construct precision, the capacity of an instrument to measure constructs it purports to measure and differentiate from other, unrelated constructs; quantification precision, the reliability of the instrument; and translation precision, the ability to generalize scores from an instrument across subjects from the same or similar populations. We illustrate with specific examples, such as an approach to validating a measurement instrument for veterans when prior evidence of instrument validity for this population does not exist. Conclusions Validity should be viewed as a property of the interpretations and uses of scores from an instrument, not of the instrument itself: how scores are used and the consequences of this use are integral to validity. Our advice is to liken validation to building a court case, including discovering evidence, weighing the evidence, and recognizing when the evidence is weak and more evidence is needed. PMID:16178998

  11. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
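The Kelly criterion invoked above can be illustrated with a short sketch: for a geometric Brownian motion with drift μ and volatility σ, investing a constant fraction f of wealth yields a long-run log-growth rate g(f) = fμ − f²σ²/2, maximized at f* = μ/σ². The parameter values below are hypothetical; the paper's finite-carrying-capacity generalization is not reproduced here.

```python
# Classical Kelly result for geometric Brownian motion (hypothetical
# mu, sigma; a sketch of the baseline the paper generalizes).

def log_growth_rate(f, mu, sigma):
    """Long-run exponential growth rate when a constant fraction f of
    wealth is invested in an asset with drift mu and volatility sigma."""
    return f * mu - 0.5 * (f * sigma) ** 2

mu, sigma = 0.08, 0.20
# Kelly optimum: g'(f) = mu - f * sigma**2 = 0  ->  f* = mu / sigma**2
f_star = mu / sigma ** 2
# Brute-force check on a grid of candidate fractions
f_grid = [i / 1000 for i in range(4000)]
f_best = max(f_grid, key=lambda f: log_growth_rate(f, mu, sigma))
print(round(f_star, 6))  # ≈ 2.0
print(round(f_best, 3))  # grid maximum agrees with f_star
```

The grid search is redundant here (the optimum is available in closed form) but mirrors how the optimal control is found numerically once a nonlinear drift makes the closed form unavailable.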

  12. Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.

    PubMed

    Oettinger, D; Mendoza, M; Herrmann, H J

    2013-07-01

    We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. We then use these polynomials to expand the respective distribution in a moving frame, for both the undoped and doped cases. In order to discretize the Boltzmann equation and make the numerical implementation feasible, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative (2+1)-dimensional wave vectors which satisfies the quadrature reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that by increasing the chemical potential the electronic fluid behaves as if its effective viscosity increased.

  13. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  14. [Evaluation of cartilage defects in the knee: validity of clinical, magnetic-resonance-imaging and radiological findings compared with arthroscopy].

    PubMed

    Spahn, G; Wittig, R; Kahl, E; Klinger, H M; Mückley, T; Hofmann, G O

    2007-05-01

    The study aimed to evaluate the validity of clinical, radiological and MRI examination for cartilage defects of the knee compared with arthroscopic findings. Seven hundred seventy-two patients suffering from knee pain for more than 3 months were evaluated clinically (grinding sign), with radiography and magnetic resonance imaging (MRI), and by subsequent arthroscopy. The grinding sign had a sensitivity of 0.39. The association of a positive grinding test with high-grade cartilage defects was significant (p < 0.001). In 97.4% of cases an intact chondral surface correlated with a normal radiological finding. Subchondral sclerosis, osteophytes and joint-space narrowing were significantly associated with high-grade cartilage defects (p < 0.001). The accuracy of MRI was 59.5%. MRI resulted in an overestimation in 36.6% of cases and an underestimation in 3.9%. False-positive results were assessed significantly more often in low-grade cartilage defects (p < 0.001). Clinical signs, x-ray imaging and MRI correlate with arthroscopic findings in cases of deep cartilage lesions; in intact or low-grade degenerated cartilage, these examinations often overestimate defect severity.

  15. Perturbed Newtonian description of the Lemaître model with non-negligible pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Kazuhiro; Marra, Valerio; Mukhanov, Viatcheslav

    2016-03-01

    We study the validity of the Newtonian description of cosmological perturbations using the Lemaître model, an exact spherically symmetric solution of Einstein's equation. This problem has been investigated in the past for the case of a dust fluid. Here, we extend the previous analysis to the more general case of a fluid with non-negligible pressure, and, for the numerical examples, we consider the case of radiation (P = ρ/3). We find that, even when the density contrast has a nonlinear amplitude, the Newtonian description of the cosmological perturbations using the gravitational potential ψ and the curvature potential φ is valid as long as we consider sub-horizon inhomogeneities. However, the relation ψ + φ = O(φ²), which holds for the case of a dust fluid, is not valid for a relativistic fluid, and an effective anisotropic stress is generated. This demonstrates the usefulness of the Lemaître model, which allows us to study in an exact nonlinear fashion the onset of anisotropic stress in fluids with non-negligible pressure. We show that this happens when the characteristic scale of the inhomogeneity is smaller than the sound horizon and that the deviation is caused by the nonlinear effect of the fluid's fast motion. We also find that ψ + φ = [O(φ²), O(c_s²φδ)] for an inhomogeneity with density contrast δ whose characteristic scale is smaller than the sound horizon, unless w is close to −1, where w and c_s are the equation of state parameter and the sound speed of the fluid, respectively. On the other hand, we expect ψ + φ = O(φ²) to hold for an inhomogeneity whose characteristic scale is larger than the sound horizon, unless the amplitude of the inhomogeneity is large and w is close to −1.

  16. [Virtual bronchoscopy: the correlation between endoscopic simulation and bronchoscopic findings].

    PubMed

    Salvolini, L; Gasparini, S; Baldelli, S; Bichi Secchi, E; Amici, F

    1997-11-01

    We carried out a preliminary clinical validation of 3D spiral CT virtual endoscopic reconstructions of the tracheobronchial tree by comparing virtual bronchoscopic images with actual endoscopic findings. Twenty-two patients with tracheobronchial disease suspected on preliminary clinical, cytopathological and plain chest film findings underwent spiral CT of the chest and bronchoscopy. CT was repeated after endobronchial therapy in 2 cases. Virtual endoscopic shaded-surface-display views of the tracheobronchial tree were reconstructed from reformatted CT data with Advantage Navigator software. Virtual bronchoscopic images were first evaluated with a semi-quantitative quality score (excellent/good/fair/poor). The depiction of consecutive airway branches was then considered. Virtual bronchoscopies were finally submitted to double-blind comparison with actual endoscopies. Virtual image quality was considered excellent in 8 cases, good in 14 and fair in 2. Virtual exploration stopped at the lobar bronchi in one case only; the origin of segmental bronchi was depicted in 23 cases and that of some subsegmental branches in 2 cases. Agreement between actual and virtual bronchoscopic findings was good in all cases but 3, where it was nevertheless considered satisfactory. The yield of clinically useful information differed in 8/24 cases: virtual reconstructions provided more information than bronchoscopy in 5 cases and vice versa in 3. Virtual reconstruction is limited in that the procedure is long and difficult and requires a strictly standardized threshold value so as not to alter virtual findings. Moreover, the reconstructed surface lacks transparency, partial volume effects occur, and branches of 4 pixels or less in diameter and/or meandering branches are difficult to explore. Our preliminary data are encouraging. Segmental bronchi were depicted in nearly all cases, except for branches involved by disease. Obstructing lesions could be bypassed in some cases, providing an indication for endoscopic laser therapy. Future didactic perspectives and applications to minimally invasive or virtual-reality-assisted therapy seem promising, even though actual clinical application requires further studies.

  17. Validation of X1 motorcycle model in industrial plant layout by using WITNESS™ simulation software

    NASA Astrophysics Data System (ADS)

    Hamzas, M. F. M. A.; Bareduan, S. A.; Zakaria, M. Z.; Tan, W. J.; Zairi, S.

    2017-09-01

    This paper demonstrates a case study on simulation, modelling and analysis for the X1 motorcycle model. In this research, a motorcycle assembly plant was selected as the main site of the study. Simulation techniques using the Witness software were applied to evaluate the performance of the existing manufacturing system. The main objective is to validate the data and determine the significant impacts on the overall performance of the system for future improvement. The process of validation starts when the layout of the assembly line is identified. All components are evaluated to check whether the data are significant for future improvement. Machine and labor statistics are among the parameters evaluated for process improvement. The average total cycle time for the given workstations is used as the criterion for comparison of possible variants. From the simulation process, the data used are appropriate and meet the criteria for two-sided assembly line problems.

  18. A systematic review of validated methods for identifying erythema multiforme major/minor/not otherwise specified, Stevens-Johnson Syndrome, or toxic epidermal necrolysis using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of erythema multiforme and related conditions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the erythema multiforme HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify erythema multiforme, Stevens-Johnson syndrome, or toxic epidermal necrolysis and that included validation estimates of the coding algorithms. Our search revealed limited literature focusing on erythema multiforme and related conditions that provided administrative and claims data-based algorithms and validation estimates. Only four studies provided validated algorithms and all studies used the same International Classification of Diseases code, 695.1. Approximately half of cases subjected to expert review were consistent with erythema multiforme and related conditions. Updated research needs to be conducted on designing validation studies that test algorithms for erythema multiforme and related conditions and that take into account recent changes in the diagnostic coding of these diseases. Copyright © 2012 John Wiley & Sons, Ltd.

  19. Validating Quantitative Measurement Using Qualitative Data: Combining Rasch Scaling and Latent Semantic Analysis in Psychiatry

    NASA Astrophysics Data System (ADS)

    Lange, Rense

    2015-02-01

    An extension of concurrent validity is proposed that uses qualitative data for the purpose of validating quantitative measures. The approach relies on Latent Semantic Analysis (LSA), which places verbal (written) statements in a high-dimensional semantic space. Using data from a medical/psychiatric domain as a case study, Near Death Experiences (NDE), we established concurrent validity by connecting NDErs' qualitative (written) experiential accounts with their locations on a Rasch-scalable measure of NDE intensity. Concurrent validity received strong empirical support, since the variance in the Rasch measures could be predicted reliably from the coordinates of the accounts in the LSA-derived semantic space (R² = 0.33). These coordinates also predicted NDErs' age with considerable precision (R² = 0.25). Both estimates are probably artificially low due to the small available data samples (n = 588). It appears that Rasch scalability of NDE intensity is a prerequisite for these findings, as each intensity level is associated (at least probabilistically) with a well-defined pattern of item endorsements.

  20. The Construction of Facts: Preconditions for Meaning in Teaching Energy in Swedish Classrooms

    ERIC Educational Resources Information Center

    Gyberg, Per; Lee, Francis

    2010-01-01

    This article investigates the mechanisms that govern the processes of inclusion and exclusion of knowledge. It draws on three cases from Swedish classrooms about how energy is created as an area of knowledge. We are interested in how knowledge is made valid and legitimate in a school context, and in defining and finding tools to identify…

  1. Tourism and Specific Risk Areas for Cryptococcus gattii, Vancouver Island, Canada

    PubMed Central

    Chambers, Catharine; MacDougall, Laura; Li, Min

    2008-01-01

    We compared travel histories of case-patients with Cryptococcus gattii infection during 1999–2006 to travel destinations of the general public on Vancouver Island, British Columbia, Canada. Findings validated and refined estimates of risk on the basis of place of residence and showed no spatial progression of risk areas on this island over time. PMID:18976570

  2. A Case Study of IV&V Cost Effectiveness

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.; McCaugherty, Dan; Joshi, Tulasi; Callahan, John

    1997-01-01

    This paper looks at the Independent Verification and Validation (IV&V) of NASA's Space Shuttle Day of Launch I-Load Update (DoLILU) project. IV&V is defined. The system's development life cycle is explained. Data collection and analysis are described. DoLILU Issue Tracking Reports (DITRs) authored by IV&V personnel are analyzed to determine the effectiveness of IV&V in finding errors before the code, testing, and integration phase of the software development life cycle. The study's findings are reported along with the limitations of the study and planned future research.

  3. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  4. Screening for depression with a brief questionnaire in a primary care setting: validation of the two questions with help question (Malay version).

    PubMed

    Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D

    2011-01-01

    To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions on depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross sectional validation study was conducted in a government funded primary care clinic in Malaysia. The participants included 146 consecutive women patients receiving no psychotropic drugs and who were Malay speakers. The main outcome measures were sensitivity, specificity, and likelihood ratios of the two questions and help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%), respectively. The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.
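The screening statistics reported above follow from a standard 2×2 table: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), LR+ = sensitivity/(1−specificity), LR− = (1−sensitivity)/specificity. A short sketch with hypothetical counts chosen only to mirror the reported sensitivity 0.99 and specificity 0.70 (not the study's raw data):

```python
# Diagnostic-test statistics from a 2x2 screening table
# (tp/fn/fp/tn counts below are illustrative, not the study's data).

def screening_stats(tp, fn, fp, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)       # likelihood ratio of a positive test
    lr_neg = (1 - sens) / spec       # likelihood ratio of a negative test
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = screening_stats(tp=99, fn=1, fp=30, tn=70)
print(round(sens, 2), round(spec, 2))      # 0.99 0.7
print(round(lr_pos, 1), round(lr_neg, 2))  # 3.3 0.01
```

Note how the likelihood ratios reproduce the reported 3.3 and 0.01, which is simply the arithmetic consequence of the quoted sensitivity and specificity.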

  5. Prediction, Detection, and Validation of Isotope Clusters in Mass Spectrometry Data

    PubMed Central

    Treutler, Hendrik; Neumann, Steffen

    2016-01-01

    Mass spectrometry is a key analytical platform for metabolomics. The precise quantification and identification of small molecules is a prerequisite for elucidating metabolism, and the detection, validation, and evaluation of isotope clusters in LC-MS data is important for this task. Here, we present an approach for the improved detection of isotope clusters using chemical prior knowledge, and for the validation of detected isotope clusters depending on the substance mass using database statistics. We find remarkable improvements in the number of detected isotope clusters and are able to predict the correct molecular formula among the top three ranks in 92% of cases. We make our methodology freely available as part of the Bioconductor packages xcms version 1.50.0 and CAMERA version 1.30.0. PMID:27775610

  6. Only complementary voices tell the truth: a reevaluation of validity in multi-informant approaches of child and adolescent clinical assessments.

    PubMed

    Kaurin, Aleksandra; Egloff, Boris; Stringaris, Argyris; Wessa, Michèle

    2016-08-01

    Multi-informant approaches are thought to be key to clinical assessment. Classical theories of psychological measurement assume that only convergence among different informants' reports allows for an estimate of the true nature and causes of clinical presentations. However, the integration of multiple accounts is fraught with problems because findings in child and adolescent psychiatry do not conform to the fundamental expectation of convergence. Indeed, reports provided by different sources (self, parents, teachers, peers) share little variance. Moreover, in some cases informant divergence may be meaningful rather than error variance. In this review, we give an overview of the conceptual and theoretical foundations of valid multi-informant assessment and discuss why our common concepts of validity need reevaluation.

  7. Validation of Case Finding Algorithms for Hepatocellular Cancer from Administrative Data and Electronic Health Records using Natural Language Processing

    PubMed Central

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2013-01-01

    Background Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403

  8. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    PubMed

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
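As a rough consistency check on the figures quoted above, and assuming the PPV was computed directly from the two reported counts, the positive predictive value is simply confirmed cases divided by code-identified cases:

```python
# PPV of the ICD-9 algorithm from the counts reported above.
flagged = 1138    # patients identified by ICD-9 codes for HCC
confirmed = 773   # confirmed HCC on manual review
ppv = confirmed / flagged
print(round(ppv, 2))  # 0.68, in line with the reported 0.67 up to rounding
```

The small discrepancy (0.68 vs. the reported 0.67) is consistent with truncation rather than rounding, or with a slightly different denominator in the paper.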

  9. Cross-cultural adaptation and validation of the sino-nasal outcome test (SNOT-22) for Spanish-speaking patients.

    PubMed

    de los Santos, Gonzalo; Reyes, Pablo; del Castillo, Raúl; Fragola, Claudio; Royuela, Ana

    2015-11-01

    Our objective was to perform translation, cross-cultural adaptation and validation of the sino-nasal outcome test 22 (SNOT-22) to the Spanish language. SNOT-22 was translated, back translated, and a pretest trial was performed. The study included 119 individuals divided into 60 cases, who met diagnostic criteria for chronic rhinosinusitis according to the European Position Paper on Rhinosinusitis 2012, and 59 controls, who reported no sino-nasal disease. Internal consistency was evaluated with Cronbach's alpha test, reproducibility with the Kappa coefficient, reliability with the intraclass correlation coefficient (ICC), validity with the Mann-Whitney U test and responsiveness with the Wilcoxon test. In cases, Cronbach's alpha was 0.91 both before and after treatment; in controls, it was 0.90 at the first assessment and 0.88 at 3 weeks. The Kappa coefficient was calculated for each item, with an average score of 0.69. The ICC was also calculated for each item, with a score of 0.87 for the overall score and an average among all items of 0.71. The median score was 47 for cases and 2 for controls, a highly significant difference (Mann-Whitney U test, p < 0.001). Clinical changes were observed among treated patients, with median scores of 47 and 13.5 before and after treatment, respectively (Wilcoxon test, p < 0.001). The effect size was 0.14 in treated patients whose status at 3 weeks was unchanged, 1.03 in those who were better and 1.89 in the much-better group. All controls were unchanged, with an effect size of 0.05. The Spanish version of the SNOT-22 has the internal consistency, reliability, reproducibility, validity and responsiveness necessary to be a valid instrument for use in clinical practice.
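Cronbach's alpha, the internal-consistency statistic reported above, is computed from an item-response matrix as α = k/(k−1) · (1 − Σσᵢ²/σ_T²), where the σᵢ² are per-item variances and σ_T² is the variance of respondents' total scores. A minimal sketch on a made-up 5-respondent, 4-item matrix (not the study's data):

```python
# Cronbach's alpha from an item-response matrix (rows = respondents,
# columns = items). Toy data; any consistent variance definition works,
# since the scaling cancels in the ratio.
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                         # number of items
    cols = list(zip(*rows))                  # per-item score lists
    item_var = sum(pvariance(c) for c in cols)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

data = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 3, 4],
]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # 0.97 for this deliberately consistent toy matrix
```

Values near 0.9, as reported for the SNOT-22, indicate that the items covary strongly and the scale is internally consistent.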

  10. Data mining: comparing the empiric CFS to the Canadian ME/CFS case definition.

    PubMed

    Jason, Leonard A; Skendrovic, Beth; Furst, Jacob; Brown, Abigail; Weng, Angela; Bronikowski, Christine

    2012-01-01

    This article contrasts two case definitions for myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). We compared the empiric CFS case definition (Reeves et al., 2005) and the Canadian ME/CFS clinical case definition (Carruthers et al., 2003) with a sample of individuals with CFS versus those without. Data mining with decision trees was used to identify the best items to identify patients with CFS. Data mining is a statistical technique that was used to help determine which of the survey questions were most effective for accurately classifying cases. The empiric criteria identified about 79% of patients with CFS and the Canadian criteria identified 87% of patients. Items identified by the Canadian criteria had more construct validity. The implications of these findings are discussed. © 2011 Wiley Periodicals, Inc.
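A minimal sketch of the item-selection idea behind the decision-tree approach described above: score each survey item by how well a one-item "decision stump" separates cases from controls. The binary data below are toy values for illustration, not the study's survey or its actual mining algorithm:

```python
# Rank binary survey items by one-item "decision stump" accuracy
# (toy data; real decision-tree induction would split recursively).

def stump_accuracy(item_values, labels):
    """Accuracy of predicting the label directly from a binary item,
    using whichever polarity (item==1 -> case, or the reverse) fits better."""
    hits = sum(v == y for v, y in zip(item_values, labels))
    return max(hits, len(labels) - hits) / len(labels)

# rows: binary answers to 3 hypothetical items; y: 1 = case, 0 = control
X = [(1, 0, 1), (1, 1, 0), (0, 0, 1), (1, 0, 1), (0, 1, 0), (0, 0, 0)]
y = [1, 1, 0, 1, 0, 0]
for j in range(3):
    col = [row[j] for row in X]
    print(f"item {j}: accuracy {stump_accuracy(col, y):.2f}")
# item 0: accuracy 1.00
# item 1: accuracy 0.50
# item 2: accuracy 0.67
```

Ranking items this way identifies the questions that classify cases most accurately, which is the role decision-tree mining played in comparing the empiric and Canadian criteria.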

  11. The Einstein viscosity with fluid elasticity

    NASA Astrophysics Data System (ADS)

    Einarsson, Jonas; Yang, Mengfei; Shaqfeh, Eric S. G.

    2017-11-01

    We give the first correction to the suspension viscosity due to fluid elasticity for a dilute suspension of spheres in a viscoelastic medium. Our perturbation theory is valid to O(Wi²) in the Weissenberg number Wi = γ̇λ, where γ̇ is the typical magnitude of the suspension velocity gradient and λ is the relaxation time of the viscoelastic fluid. For shear flow we find that the suspension shear-thickens due to elastic stretching in strain "hot spots" near the particle, despite the fact that the stress inside the particles decreases relative to the Newtonian case. We thus argue that it is crucial to correctly model the extensional rheology of the suspending medium to predict the shear rheology of the suspension. For uniaxial extensional flow we correct existing results at O(Wi), and find dramatic strain-rate thickening at O(Wi²). We validate our theory with fully resolved numerical simulations.

  12. Einstein viscosity with fluid elasticity

    NASA Astrophysics Data System (ADS)

    Einarsson, Jonas; Yang, Mengfei; Shaqfeh, Eric S. G.

    2018-01-01

    We give the first correction to the suspension viscosity due to fluid elasticity for a dilute suspension of spheres in a viscoelastic medium. Our perturbation theory is valid to O(ϕWi²) in the particle volume fraction ϕ and the Weissenberg number Wi = γ̇λ, where γ̇ is the typical magnitude of the suspension velocity gradient and λ is the relaxation time of the viscoelastic fluid. For shear flow we find that the suspension shear-thickens due to elastic stretching in strain "hot spots" near the particle, despite the fact that the stress inside the particles decreases relative to the Newtonian case. We thus argue that it is crucial to correctly model the extensional rheology of the suspending medium to predict the shear rheology of the suspension. For uniaxial extensional flow we correct existing results at O(ϕWi), and find dramatic strain-rate thickening at O(ϕWi²). We validate our theory with fully resolved numerical simulations.
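For orientation, the Newtonian baseline that these two records correct is Einstein's classical dilute-suspension result, η = η₀(1 + 5ϕ/2), valid to first order in the volume fraction ϕ. A one-line sketch of that baseline (the elastic O(ϕWi) and O(ϕWi²) corrections derived in the papers are not reproduced here):

```python
# Einstein's dilute-suspension viscosity: eta = eta0 * (1 + 2.5 * phi),
# valid to O(phi) for rigid spheres in a Newtonian solvent.
def einstein_viscosity(eta0, phi):
    return eta0 * (1 + 2.5 * phi)

# A 2% volume fraction raises the effective viscosity by 5%:
print(einstein_viscosity(1.0, 0.02))  # 1.05
```

The papers' contribution is the Weissenberg-number correction on top of this: at finite Wi the coefficient of ϕ itself becomes rate-dependent, producing the shear- and strain-rate thickening described above.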

  13. Pancultural self-enhancement reloaded: a meta-analytic reply to Heine (2005).

    PubMed

    Sedikides, Constantine; Gaertner, Lowell; Vevea, Jack L

    2005-10-01

    C. Sedikides, L. Gaertner, and Y. Toguchi (2003) reported findings favoring the universality of self-enhancement. S. J. Heine (2005) challenged the authors' research on evidential and logical grounds. In response, the authors carried out 2 meta-analytic investigations. The results backed the C. Sedikides et al. (2003) theory and findings. Both Westerners and Easterners self-enhanced tactically. Westerners self-enhanced on attributes relevant to the cultural ideal of individualism, whereas Easterners self-enhanced on attributes relevant to the cultural ideal of collectivism (in both cases, because of the personal importance of the ideal). Self-enhancement motivation is universal, although its manifestations are strategically sensitive to cultural context. The authors respond to other aspects of Heine's critique by discussing why researchers should empirically validate the comparison dimension (individualistic vs. collectivistic) and defending why the better-than-average effect is a valid measure of self-enhancement.

  14. Assessing Competence in Collaborative Case Conceptualization: Development and Preliminary Psychometric Properties of the Collaborative Case Conceptualization Rating Scale (CCC-RS).

    PubMed

    Kuyken, Willem; Beshai, Shadi; Dudley, Robert; Abel, Anna; Görg, Nora; Gower, Philip; McManus, Freda; Padesky, Christine A

    2016-03-01

    Case conceptualization is assumed to be an important element in cognitive-behavioural therapy (CBT) because it describes and explains clients' presentations in ways that inform intervention. However, we do not have a good measure of competence in CBT case conceptualization that can be used to guide training and elucidate mechanisms. The current study addresses this gap by describing the development and preliminary psychometric properties of the Collaborative Case Conceptualization - Rating Scale (CCC-RS; Padesky et al., 2011). The CCC-RS was developed in accordance with the model posited by Kuyken et al. (2009). Data for this study (N = 40) were derived from a larger trial (Wiles et al., 2013) with adults suffering from resistant depression. Internal consistency and inter-rater reliability were calculated. Further, and as a partial test of the scale's validity, Pearson's correlation coefficients were obtained for scores on the CCC-RS and key scales from the Cognitive Therapy Scale - Revised (CTS-R; Blackburn et al., 2001). The CCC-RS showed excellent internal consistency (α = .94), split-half reliability (.82), and inter-rater reliability (ICC = .84). Total scores on the CCC-RS were significantly correlated with scores on the CTS-R (r = .54, p < .01). Moreover, the Collaboration subscale of the CCC-RS was significantly correlated (r = .44) with its counterpart of the CTS-R in a theoretically predictable manner. These preliminary results indicate that the CCC-RS is a reliable measure with adequate face, content and convergent validity. Further research is needed to replicate and extend the current findings to other facets of validity.

  15. SU-D-BRB-01: A Predictive Planning Tool for Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palefsky, S; Roper, J; Elder, E

    Purpose: To demonstrate the feasibility of a predictive planning tool which provides SRS planning guidance based on simple patient anatomical properties: PTV size, PTV shape and distance from critical structures. Methods: Ten framed SRS cases treated at Winship Cancer Institute of Emory University were analyzed to extract data on PTV size, sphericity (shape), and distance from critical structures such as the brainstem and optic chiasm. The cases consisted of five pairs. Each pair consisted of two cases with a similar diagnosis (such as pituitary adenoma or arteriovenous malformation) that were treated with different techniques: DCA or IMRS. A Naive Bayes classifier was trained on this data to establish the conditions under which each treatment modality was used. This model was validated by classifying ten other randomly-selected cases into DCA or IMRS classes, calculating the probability of each technique, and comparing results to the treated technique. Results: Of the ten cases used to validate the model, nine had their technique predicted correctly. The three cases treated with IMRS were all identified as such. Their probabilities of being treated with IMRS ranged between 59% and 100%. Six of the seven cases treated with DCA were correctly classified. These probabilities ranged between 51% and 95%. One case treated with DCA was incorrectly predicted to be an IMRS plan. The model’s confidence in this case was 91%. Conclusion: These findings indicate that a predictive planning tool based on simple patient anatomical properties can predict the SRS technique used for treatment. The algorithm operated with 90% accuracy. With further validation on larger patient populations, this tool may be used clinically to guide planners in choosing an appropriate treatment technique. The prediction algorithm could also be adapted to guide selection of treatment parameters such as treatment modality and number of fields for radiotherapy across anatomical sites.
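    The classifier described above can be sketched in a few lines. This is a minimal Gaussian naive Bayes over the three anatomical features named in the abstract; the training numbers are invented for illustration, not the Emory dataset:

```python
import math
from collections import defaultdict

# Hypothetical training data: (PTV volume cc, sphericity, distance-to-OAR mm) -> technique.
# These rows are illustrative only, not the ten framed SRS cases from the study.
TRAIN = [
    ((1.2, 0.95, 12.0), "DCA"),
    ((0.8, 0.90, 15.0), "DCA"),
    ((2.0, 0.88, 10.0), "DCA"),
    ((1.5, 0.92,  9.0), "DCA"),
    ((6.5, 0.55,  3.0), "IMRS"),
    ((8.0, 0.60,  2.0), "IMRS"),
    ((5.0, 0.50,  4.0), "IMRS"),
]

def fit(train):
    """Per-class feature means/variances plus class priors (Gaussian naive Bayes)."""
    by_class = defaultdict(list)
    for x, y in train:
        by_class[y].append(x)
    model = {}
    for label, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[label] = (n / len(train), means, varis)
    return model

def log_gauss(x, mean, var):
    # Log-density of a 1-D Gaussian, used as the per-feature likelihood.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def predict(model, x):
    """Return (best_label, posterior probability of best_label)."""
    scores = {}
    for label, (prior, means, varis) in model.items():
        scores[label] = math.log(prior) + sum(
            log_gauss(xi, m, v) for xi, m, v in zip(x, means, varis))
    z = max(scores.values())
    total = sum(math.exp(s - z) for s in scores.values())
    best = max(scores, key=scores.get)
    return best, math.exp(scores[best] - z) / total

model = fit(TRAIN)
label, prob = predict(model, (7.0, 0.58, 2.5))  # large, irregular target near an OAR
print(label, round(prob, 3))
```

    As in the abstract, the posterior probability of the winning class doubles as a confidence score for the recommendation.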

  16. To Report or Not to Report: Exploring Healthy Volunteers' Rationales for Disclosing Adverse Events in Phase I Drug Trials.

    PubMed

    McManus, Lisa; Fisher, Jill A

    2018-04-25

    Phase I trials test the safety and tolerability of investigational drugs and often use healthy volunteers as research participants. Adverse events (AEs) are collected in part through participants' self-reports of any symptoms they experience during the trial. In some cases, experiencing AEs can result in trial participation being terminated. Because of the economic incentives underlying their motivation to participate, there is concern that healthy volunteers routinely fail to report AEs and, thereby, jeopardize the validity of the trial results. We interviewed 131 U.S. healthy volunteers about their experiences with AEs, including their rationales for reporting or failing to report symptoms. We found that participants have three primary rationales for their AE reporting behavior: economic, health-oriented, and data integrity. Participants often make decisions about whether to report AEs on a case-by-case basis evaluating what effects reporting or not reporting might have on the compensation they receive from the trial, the risk to their health, and the results of the particular clinical trial. Participants' interpretations of clinic policies, staff behaviors, and personal or vicarious experiences with reporting AEs also shape reporting decisions. Our findings demonstrate that participants' reporting behavior is more complex than previous portraits of healthy volunteers have suggested. Rather than finding participants who were so focused on the financial compensation that they were willing to subvert trial results, our study indicates that participants are willing in most cases to forgo their full compensation if they believe not reporting their symptoms jeopardizes their own safety or the validity of the research.

  17. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort

    PubMed Central

    Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang

    2017-01-01

    Purpose: We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG), which predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods: Using a logistic regression model, KPCRC-HG was developed based on data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). Predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results: PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen level, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions: KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and performed similarly to ERSPCRC-HG. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017
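    The AUC used to compare the calculators has a simple rank interpretation: it is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case (ties count one half). A minimal sketch with made-up scores, not KPCRC-HG output:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs in which the positive case outscores the negative one."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative predicted risks for biopsy-positive (Gleason >= 7) and
# biopsy-negative men; these numbers are hypothetical.
pos = [0.82, 0.65, 0.71, 0.40]
neg = [0.30, 0.25, 0.55, 0.10, 0.35]
print(round(auc(pos, neg), 3))
```

    An AUC of 0.5 means the calculator ranks cases no better than chance; 1.0 means every high-grade case outranks every negative case.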

  18. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort.

    PubMed

    Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo

    2017-01-01

    We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG), which predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). Predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen level, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and performed similarly to ERSPCRC-HG.
This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.

  19. Under-reporting of tuberculosis in Praia, Cape Verde, from 2006 to 2012.

    PubMed

    Furtado da Luz, E; Braga, J U

    2018-03-01

    According to World Health Organization (WHO) estimates, the under-reporting rate for tuberculosis (TB) in Cape Verde between 2006 and 2012 was 49%. However, the WHO recognises the challenges associated with this estimation process and recommends implementing other methods, such as record linkage, to combat TB under-reporting. To estimate and analyse under-reporting of cases by TB surveillance health units and to evaluate TB cases retrieved from other TB diagnostic sources in Praia, Cape Verde, from 2006 to 2012. This cross-sectional study evaluated under-reporting using the following data: 1) the under-reporting index from TB reporting health units (RHUs), where the number of validated TB cases from RHUs was compared with data from the National Programme for the Fight against Tuberculosis and Leprosy (NPFTL); and 2) the under-reporting index among overall data sources, or a comparison of the number of all validated TB cases from all sources with NPFTL data. The TB under-reporting rate was 40% in Praia during the study period, and results were influenced by laboratory findings. The TB under-reporting rate was very similar to the rate estimated by the WHO. TB surveillance must be improved to reduce under-reporting.

  20. 78 FR 23533 - Endangered and Threatened Wildlife and Plants; 90-Day Finding on a Petition To Delist the Wood Bison

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ... classification and our rationale for accepting wood bison as a valid subspecies below. Taxonomy is the theory and... groups of animals should be lumped together or split apart. Such is the case for wood bison. We recognize... separated by a great distance, led Boyd et al. (2010, p. 16 (American Bison Specialist Group)) to conclude...

  1. Path integral analysis of Jarzynski's equality: Analytical results

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Adib, Artur B.

    2009-02-01

    We apply path integrals to study nonequilibrium work theorems in the context of Brownian dynamics, deriving in particular the equations of motion governing the most typical and most dominant trajectories. For the analytically soluble cases of a moving harmonic potential and a harmonic oscillator with a time-dependent natural frequency, we find such trajectories, evaluate the work-weighted propagators, and validate Jarzynski’s equality.
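    For reference, the relation being validated here, Jarzynski's equality, connects the nonequilibrium work W done along driven trajectories to the equilibrium free-energy difference ΔF:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T},
```

    where the angle brackets denote an average over realizations of the driven Brownian dynamics.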

  2. Population Health Metrics Research Consortium gold standard verbal autopsy validation study: design, implementation, and development of analysis datasets

    PubMed Central

    2011-01-01

    Background Verbal autopsy methods are critically important for evaluating the leading causes of death in populations without adequate vital registration systems. With a myriad of analytical and data collection approaches, it is essential to create a high quality validation dataset from different populations to evaluate comparative method performance and make recommendations for future verbal autopsy implementation. This study was undertaken to compile a set of strictly defined gold standard deaths for which verbal autopsies were collected to validate the accuracy of different methods of verbal autopsy cause of death assignment. Methods Data collection was implemented in six sites in four countries: Andhra Pradesh, India; Bohol, Philippines; Dar es Salaam, Tanzania; Mexico City, Mexico; Pemba Island, Tanzania; and Uttar Pradesh, India. The Population Health Metrics Research Consortium (PHMRC) developed stringent diagnostic criteria including laboratory, pathology, and medical imaging findings to identify gold standard deaths in health facilities as well as an enhanced verbal autopsy instrument based on World Health Organization (WHO) standards. A cause list was constructed based on the WHO Global Burden of Disease estimates of the leading causes of death, potential to identify unique signs and symptoms, and the likely existence of sufficient medical technology to ascertain gold standard cases. Blinded verbal autopsies were collected on all gold standard deaths. Results Over 12,000 verbal autopsies on deaths with gold standard diagnoses were collected (7,836 adults, 2,075 children, 1,629 neonates, and 1,002 stillbirths). Difficulties in finding sufficient cases to meet gold standard criteria as well as problems with misclassification for certain causes meant that the target list of causes for analysis was reduced to 34 for adults, 21 for children, and 10 for neonates, excluding stillbirths. 
To ensure strict independence for the validation of methods and assessment of comparative performance, 500 test-train datasets were created from the universe of cases, covering a range of cause-specific compositions. Conclusions This unique, robust validation dataset will allow scholars to evaluate the performance of different verbal autopsy analytic methods as well as instrument design. This dataset can be used to inform the implementation of verbal autopsies to more reliably ascertain cause of death in national health information systems. PMID:21816095

  3. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database.

    PubMed

    Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan

    2015-02-05

    Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. 
This study provides the basis for the validated data to be utilized for research and informs users of its limitations. Analysis of underlying discrepancies provides the ability to improve algorithm performance and facilitate improved data quality.

  4. Laboratory compliance with the American Society of Clinical Oncology/college of American Pathologists guidelines for human epidermal growth factor receptor 2 testing: a College of American Pathologists survey of 757 laboratories.

    PubMed

    Nakhleh, Raouf E; Grimm, Erin E; Idowu, Michael O; Souers, Rhona J; Fitzgibbons, Patrick L

    2010-05-01

    To ensure quality human epidermal growth receptor 2 (HER2) testing in breast cancer, the American Society of Clinical Oncology/College of American Pathologists guidelines were introduced with expected compliance by 2008. To assess the effect these guidelines have had on pathology laboratories and their ability to address key components. In late 2008, a survey was distributed with the HER2 immunohistochemistry (IHC) proficiency testing program. It included questions regarding pathology practice characteristics and assay validation using fluorescence in situ hybridization or another IHC laboratory assay and assessed pathologist HER2 scoring competency. Of the 907 surveys sent, 757 (83.5%) were returned. The median laboratory accessioned 15 000 cases and performed 190 HER2 tests annually. Quantitative computer image analysis was used by 33% of laboratories. In-house fluorescence in situ hybridization was performed in 23% of laboratories, and 60% of laboratories addressed the 6- to 48-hour tissue fixation requirement by embedding tissue on the weekend. HER2 testing was performed on the initial biopsy in 40%, on the resection specimen in 6%, and on either in 56% of laboratories. Testing was validated with only fluorescence in situ hybridization in 47% of laboratories, whereas 10% of laboratories used another IHC assay only; 13% used both assays, and 12% and 15% of laboratories had not validated their assays or chose "not applicable" on the survey question, respectively. The 90% concordance rate with fluorescence in situ hybridization results was achieved by 88% of laboratories for IHC-negative findings and by 81% of laboratories for IHC-positive cases. The 90% concordance rate for laboratories using another IHC assay was achieved by 80% for negative findings and 75% for positive cases. About 91% of laboratories had a pathologist competency assessment program. This survey demonstrates the extent and characteristics of HER2 testing. 
Although some American Society of Clinical Oncology/College of American Pathologists guidelines have been implemented, gaps remain in validation of HER2 IHC testing.

  5. An analysis of thermal response factors and how to reduce their computational time requirement

    NASA Technical Reports Server (NTRS)

    Wiese, M. R.

    1982-01-01

    The RESFAC2 version of the Thermal Response Factor Program (RESFAC) is the result of numerous modifications and additions to the original RESFAC. These modifications and additions have significantly reduced the program's computational time requirement. As a result of this work, the program is more efficient and its code is both readable and understandable. This report describes what a thermal response factor is; analyzes the original matrix algebra calculations and root-finding techniques; presents a new root-finding technique and streamlined matrix algebra; and supplies ten validation cases and their results.

  6. Pediatric bipolar disorder: validity, phenomenology, and recommendations for diagnosis

    PubMed Central

    Youngstrom, Eric A; Birmaher, Boris; Findling, Robert L

    2013-01-01

    Objective To find, review, and critically evaluate evidence pertaining to the phenomenology of pediatric bipolar disorder and its validity as a diagnosis. Methods The present qualitative review summarizes and synthesizes available evidence about the phenomenology of bipolar disorder (BD) in youths, including description of the diagnostic sensitivity and specificity of symptoms, clarification about rates of cycling and mixed states, and discussion about chronic versus episodic presentations of mood dysregulation. The validity of the diagnosis of BD in youths is also evaluated based on traditional criteria including associated demographic characteristics, family environmental features, genetic bases, longitudinal studies of youths at risk of developing BD as well as youths already manifesting symptoms on the bipolar spectrum, treatment studies and pharmacologic dissection, neurobiological findings (including morphological and functional data), and other related laboratory findings. Additional sections review impairment and quality of life, personality and temperamental correlates, the clinical utility of a bipolar diagnosis in youths, and the dimensional versus categorical distinction as it applies to mood disorder in youths. Results A schema for diagnosis of BD in youths is developed, including a review of different operational definitions of 'bipolar not otherwise specified'. Principal areas of disagreement appear to include the relative role of elated versus irritable mood in assessment, and also the limits of the extent of the bipolar spectrum – when do definitions become so broad that they are no longer describing 'bipolar' cases? Conclusions In spite of these areas of disagreement, considerable evidence has amassed supporting the validity of the bipolar diagnosis in children and adolescents. PMID:18199237

  7. Identification of women at risk of depression in pregnancy: using women's accounts to understand the poor specificity of the Whooley and Arroll case finding questions in clinical practice.

    PubMed

    Darwin, Zoe; McGowan, Linda; Edozien, Leroy C

    2016-02-01

    Antenatal mental health assessment is increasingly common in high-income countries. Despite lacking evidence on validation or acceptability, the Whooley questions (modified PHQ-2) and Arroll 'help' question are used in the UK at booking (the first formal antenatal appointment) to identify possible cases of depression. This study investigated validation of the questions and women's views on assessment. Women (n = 191) booking at an inner-city hospital completed the Whooley and Arroll questions as part of their routine clinical care then completed a research questionnaire containing the Edinburgh postnatal depression scale (EPDS). A purposive subsample (n = 22) were subsequently interviewed. The Whooley questions 'missed' half the possible cases identified using the EPDS (EPDS threshold ≥ 10: sensitivity 45.7 %, specificity 92.1 %; ≥ 13: sensitivity 47.8 %, specificity 86.1 %), worsening to nine in ten when adopting the Arroll item (EPDS ≥ 10: sensitivity 9.1 %, specificity 98.2 %; ≥ 13: sensitivity 9.5 %, specificity 97.1 %). Women's accounts indicated that under-disclosure relates to the context of assessment and perceived relevance of depression to maternity services. Depression symptoms are under-identified in current local practice. While validated tools are needed that can be readily applied in routine maternity care, psychometric properties will be influenced by the context of disclosure when implemented in practice.
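    Sensitivity and specificity figures like those above come directly from a 2x2 confusion table against the reference standard (here, the EPDS). A small sketch; the counts below are hypothetical, since the abstract does not report the raw table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening accuracy from a 2x2 confusion table against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # reference-positive cases correctly flagged
        "specificity": tn / (tn + fp),  # reference-negative cases correctly passed
        "ppv": tp / (tp + fp),          # flagged cases that are truly positive
        "npv": tn / (tn + fn),          # passed cases that are truly negative
    }

# Hypothetical counts for a case-finding question vs. an EPDS threshold
# (illustrative only, not the study's data):
m = diagnostic_metrics(tp=21, fp=12, fn=25, tn=133)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))
```

    With counts like these, sensitivity sits well below specificity, the same pattern the study reports: screening questions that rarely mislabel non-cases can still miss around half of the true cases.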

  8. Endovascular Exclusion of Visceral Artery Aneurysms with Stent-Grafts: Technique and Long-Term Follow-up

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Michele; Rebonato, Alberto, E-mail: albertorebonato@libero.it; Greco, Laura

    This paper describes four cases of visceral artery aneurysms (VAAs) successfully treated with endovascular stent-grafts and discusses the endovascular approach to VAAs and the long-term results. Four balloon expandable stent-grafts were used to treat three splenic artery aneurysms and one bleeding common hepatic artery pseudoaneurysm. The percutaneous access site and the materials were chosen on the basis of CT angiography findings. In all cases the aneurysms were successfully excluded. In one case a splenic infarction occurred, with nonrelevant clinical findings. At 16- to 24-month follow-up three patients had patent stents and complete exclusion and shrinkage of the aneurysms. One patient died due to pancreatitis and sepsis, 16 days after successful stenting and exclusion of a bleeding pseudoaneurysm. We conclude that endovascular treatment using covered stent-grafts is a valid therapeutic option for VAAs. Multislice CT preoperative study helps in planning stent-graft positioning.

  9. Evaluations of children who have disclosed sexual abuse via facilitated communication.

    PubMed

    Botash, A S; Babuts, D; Mitchell, N; O'Hara, M; Lynch, L; Manuel, J

    1994-12-01

    To review the findings of interdisciplinary team evaluations of children who disclosed sexual abuse via facilitated communication. Case series. Tertiary care hospital outpatient child sexual abuse program in central New York. Between January 1990 and March 1993, 13 children who disclosed sexual abuse via facilitated communication were referred to a university hospital child abuse referral and evaluation center. The range of previously determined developmental diagnoses included mental retardation, speech delay, and autism. None. Medical records were reviewed for (1) disclosure, (2) physical evidence, (3) child's behavioral and medical history, (4) disclosures by siblings, (5) perpetrator's confession, (6) child protective services determinations, and (7) court findings. Four children had evidence of sexual abuse: two had physical findings consistent with sexual abuse, one also disclosed the allegation verbally, and one perpetrator confessed. These results neither support nor refute validation of facilitated communication. However, many children had other evidence of sexual abuse, suggesting that each child's case should be evaluated without bias.

  10. Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find changes that are likely to occur. We utilize the Goal-Question-Metric (GQM) method for identifying variables that characterize NFR, and we can systematically represent changes in NFR using those variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), so we can easily construct measurable NFR. To validate and evaluate our method, we applied it to the application domain of Mail User Agent (MUA) systems.

  11. Predicting child maltreatment: A meta-analysis of the predictive validity of risk assessment instruments.

    PubMed

    van der Put, Claudia E; Assink, Mark; Boekhout van Solinge, Noëlle F

    2017-11-01

    Risk assessment is crucial in preventing child maltreatment since it can identify high-risk cases in need of child protection intervention. Despite widespread use of risk assessment instruments in child welfare, it is unknown how well these instruments predict maltreatment and what instrument characteristics are associated with higher levels of predictive validity. Therefore, a multilevel meta-analysis was conducted to examine the predictive accuracy of (characteristics of) risk assessment instruments. A literature search yielded 30 independent studies (N=87,329) examining the predictive validity of 27 different risk assessment instruments. From these studies, 67 effect sizes could be extracted. Overall, a medium significant effect was found (AUC=0.681), indicating a moderate predictive accuracy. Moderator analyses revealed that onset of maltreatment can be better predicted than recurrence of maltreatment, which is a promising finding for early detection and prevention of child maltreatment. In addition, actuarial instruments were found to outperform clinical instruments. To bring risk and needs assessment in child welfare to a higher level, actuarial instruments should be further developed and strengthened by distinguishing risk assessment from needs assessment and by integrating risk assessment with case management.

  12. Renal Stone Risk During Spaceflight: Assessment and Countermeasure Validation

    NASA Technical Reports Server (NTRS)

    Pietrzyk, Robert A.; Whitson, Peggy A.; Sams, Clarence F.; Jones, Jeffery A.; Smith, Scott M.

    2009-01-01

    This viewgraph presentation describes the risks of renal stone formation in manned space flight. The contents include: 1) Risk; 2) Evidence; 3) Nephrolithiasis -A Multifactorial Disease; 4) Symptoms/signs; 5) Urolithiasis and Stone Passage; 6) Study Objectives; 7) Subjects; 8) Methods; 9) Investigation Results; 10) Potassium Citrate; 11) Calcium Balance; 12) Case Study; 13) Significant Findings; 14) Risk Mitigation Strategies and Recommended Actions; and 15) Future Potential.

  13. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.

    PubMed

    Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee

    2016-05-16

    One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.

  14. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality

    PubMed Central

    2016-01-01

    Background One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Objective Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Methods Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Results Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Conclusions Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data. PMID:27185366

  15. Reconciling incongruous qualitative and quantitative findings in mixed methods research: exemplars from research with drug using populations.

    PubMed

    Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A

    2012-01-01

    Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. 
Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Reconciling incongruous qualitative and quantitative findings in mixed methods research: exemplars from research with drug using populations

    PubMed Central

    Wagner, Karla D.; Davidson, Peter J.; Pollini, Robin A.; Strathdee, Steffanie A.; Washburn, Rachel; Palinkas, Lawrence A.

    2011-01-01

    Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, while conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors’ research on HIV risk among injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a needle/syringe exchange program in Los Angeles, California, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. 
Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts. PMID:21680168

  17. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings and these methods in unison therefore present a promising morphological-based methodology for advancing the understanding and clinical applications of EGG.

  18. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. 
Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting that careful choice of case definition is important for obtaining asthma definitions with optimal validity. PMID:29238227

  19. The risk of acute liver injury associated with the use of antibiotics--evaluating robustness of results in the pharmacoepidemiological research on outcomes of therapeutics by a European consortium (PROTECT) project.

    PubMed

    Udo, Renate; Tcherny-Lessenot, Stéphanie; Brauer, Ruth; Dolin, Paul; Irvine, David; Wang, Yunxun; Auclert, Laurent; Juhaeri, Juhaeri; Kurz, Xavier; Abenhaim, Lucien; Grimaldi, Lamiae; De Bruin, Marie L

    2016-03-01

    To examine the robustness of findings of case-control studies on the association between acute liver injury (ALI) and antibiotic use in the following different situations: (i) Replication of a protocol in different databases, with different data types, as well as replication in the same database, but performed by a different research team. (ii) Varying algorithms to identify cases, with and without manual case validation. (iii) Different exposure windows for time at risk. Five case-control studies in four different databases were performed with a common study protocol as starting point to harmonize study outcome definitions, exposure definitions and statistical analyses. All five studies showed an increased risk of ALI associated with antibiotic use ranging from OR 2.6 (95% CI 1.3-5.4) to 7.7 (95% CI 2.0-29.3). Comparable trends could be observed in the five studies: (i) without manual validation the use of the narrowest definition for ALI showed higher risk estimates, (ii) narrow and broad algorithm definitions followed by manual validation of cases resulted in similar risk estimates, and (iii) the use of a larger window (30 days vs 14 days) to define time at risk led to a decrease in risk estimates. Reproduction of a study using a predefined protocol in different database settings is feasible, although assumptions had to be made and amendments in the protocol were inevitable. Despite differences, the strength of association was comparable between the studies. In addition, the impact of varying outcome definitions and time windows showed similar trends within the data sources. Copyright © 2015 John Wiley & Sons, Ltd.
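
    The odds ratios and confidence intervals compared across these case-control studies come from standard 2x2 table arithmetic. A minimal sketch in Python (the counts below are hypothetical, chosen only to reproduce the lower bound of the reported OR range; `odds_ratio_ci` is an illustrative helper, not part of the PROTECT protocol):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio with a Wald 95% CI from a 2x2 case-control table:
        a = exposed cases, b = unexposed cases,
        c = exposed controls, d = unexposed controls."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical counts: 20 of 70 ALI cases and 10 of 75 controls exposed to antibiotics.
    or_, lo, hi = odds_ratio_ci(a=20, b=50, c=10, d=65)
    print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
    ```

    The Wald interval widens quickly as cell counts shrink, which is one reason harmonized studies on different databases can share a point estimate yet report quite different confidence bounds.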

  20. Developing a Time Series Predictive Model for Dengue in Zhongshan, China Based on Weather and Guangzhou Dengue Surveillance Data.

    PubMed

    Zhang, Yingtao; Wang, Tao; Liu, Kangkang; Xia, Yao; Lu, Yi; Jing, Qinlong; Yang, Zhicong; Hu, Wenbiao; Lu, Jiahai

    2016-02-01

    Dengue is a re-emerging infectious disease of humans, rapidly spreading from endemic areas to dengue-free regions under favorable conditions. In recent decades, Guangzhou has again suffered several large dengue outbreaks, as have its neighboring cities. This study aims to examine the impact of dengue epidemics in Guangzhou, China, and to develop a predictive model for Zhongshan based on local weather conditions and Guangzhou dengue surveillance information. We obtained weekly dengue case data from 1 January 2005 to 31 December 2014 for Guangzhou and Zhongshan city from the Chinese National Disease Surveillance Reporting System. Meteorological data were collected from the Zhongshan Weather Bureau and demographic data from the Zhongshan Statistical Bureau. A negative binomial regression model with a log link function was used to analyze the relationship between weekly dengue cases in Guangzhou and Zhongshan, controlling for meteorological factors. Cross-correlation functions were applied to identify the time lags of the effect of each weather factor on weekly dengue cases. Models were validated using receiver operating characteristic (ROC) curves and k-fold cross-validation. Our results showed that weekly dengue cases in Zhongshan were significantly associated with dengue cases in Guangzhou after applying a moving average over the prior 5 weeks (Relative Risk (RR) = 2.016, 95% Confidence Interval (CI): 1.845-2.203), controlling for weather factors including minimum temperature, relative humidity, and rainfall. ROC curve analysis indicated our forecasting model performed well at different prediction thresholds, with an area under the ROC curve (AUC) of 0.969 for a threshold of 3 cases per week, 0.957 for a threshold of 2 cases per week, and 0.938 for a threshold of 1 case per week. Models established during k-fold cross-validation also had considerable AUCs (average 0.938-0.967). 
The sensitivity and specificity obtained from k-fold cross-validation were 78.83% and 92.48%, respectively, with a forecasting threshold of 3 cases per week; 91.17% and 91.39% with a threshold of 2 cases; and 85.16% and 87.25% with a threshold of 1 case. The out-of-sample prediction for the 2014 epidemics also showed satisfactory performance. Our study findings suggest that the occurrence of dengue outbreaks in Guangzhou could influence dengue outbreaks in Zhongshan under suitable weather conditions. Future studies should focus on developing integrated early warning systems for dengue transmission that incorporate local weather and human movement.
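
    The AUC values reported above summarize how well the predicted weekly counts rank outbreak weeks above non-outbreak weeks. A self-contained sketch of the empirical AUC (its Mann-Whitney pairwise form) in Python, using made-up weekly data rather than the Zhongshan series:

    ```python
    def auc_score(labels, scores):
        """Empirical AUC: the probability that a randomly chosen positive week
        receives a higher score than a randomly chosen negative week (ties
        count 0.5). O(n_pos * n_neg) pairwise form, kept simple for clarity."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical data: label = 1 when observed cases exceeded a 3-cases/week
    # threshold; score = model-predicted weekly case count.
    labels = [0, 0, 1, 0, 1, 1, 0, 1]
    scores = [0.8, 1.2, 3.5, 3.0, 4.1, 2.9, 1.0, 5.0]
    print(auc_score(labels, scores))  # 15 of 16 positive-negative pairs ranked correctly
    ```

    Recomputing the AUC at each outbreak threshold, as the study does, simply relabels which weeks count as positives before this ranking comparison.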

  1. Quantitative fluorescent polymerase chain reaction for rapid prenatal diagnosis of fetal aneuploidies in chorionic villus sampling in a single institution

    PubMed Central

    Shin, You Jung; Kim, Do Jin; Ryu, Hyun Mee; Kim, Moon Young; Han, Jung Yeol; Choi, June Seek

    2016-01-01

    Objective To validate quantitative fluorescent polymerase chain reaction (QF-PCR) via chorionic villus sampling (CVS) for the diagnosis of fetal aneuploidies. Methods We retrospectively reviewed the medical records of consecutive pregnant women who had undergone CVS at Cheil General Hospital between December 2009 and June 2014. Only cases with reported QF-PCR before long-term culture (LTC) for conventional cytogenetic analysis were included, and the results of these two methods were compared. Results A total of 383 pregnant women underwent QF-PCR and LTC via CVS during the study period and 403 CVS specimens were collected. The indications for CVS were as follows: abnormal first-trimester ultrasonographic findings, including increased fetal nuchal translucency (85.1%), advanced maternal age (6.8%), previous history of fetal anomalies (4.2%), and positive dual test results for trisomy 21 (3.9%). The results of QF-PCR via CVS were as follows: 76 cases (18.9%) were identified as trisomy 21 (36 cases), trisomy 18 (33 cases), or trisomy 13 (seven cases), and 4 cases (1.0%) were suspected to be mosaicism. All common autosomal trisomies identified by QF-PCR were consistent with the LTC results and there were no false-positive findings. The four cases suspected of mosaicism by QF-PCR were confirmed by LTC as non-mosaic trisomy 21 (one case) or trisomy 18 (three cases). Conclusion QF-PCR via CVS has the advantage of rapid prenatal screening at an earlier stage of pregnancy for common chromosomal trisomies and thus can reduce the anxiety of parents. In particular, it can be helpful for pregnant women with increased fetal nuchal translucency or abnormal first-trimester ultrasonographic findings. PMID:27896246

  2. Investigation of rheumatoid arthritis susceptibility loci in juvenile idiopathic arthritis confirms high degree of overlap.

    PubMed

    Hinks, Anne; Cobb, Joanna; Sudman, Marc; Eyre, Stephen; Martin, Paul; Flynn, Edward; Packham, Jonathon; Barton, Anne; Worthington, Jane; Langefeld, Carl D; Glass, David N; Thompson, Susan D; Thomson, Wendy

    2012-07-01

    Rheumatoid arthritis (RA) shares some similar clinical and pathological features with juvenile idiopathic arthritis (JIA); indeed, the strategy of investigating whether RA susceptibility loci also confer susceptibility to JIA has already proved highly successful in identifying novel JIA loci. A plethora of newly validated RA loci has been reported in the past year. Therefore, the aim of this study was to investigate these single nucleotide polymorphisms (SNP) to determine if they were also associated with JIA. Thirty-four SNP that showed validated association with RA and had not been investigated previously in the UK JIA cohort were genotyped in JIA cases (n=1242), healthy controls (n=4281), and data were extracted for approximately 5380 UK Caucasian controls from the Wellcome Trust Case-Control Consortium 2. Genotype and allele frequencies were compared between cases with JIA and controls using PLINK. A replication cohort of 813 JIA cases and 3058 controls from the USA was available for validation of any significant findings. Thirteen SNP showed significant association (p<0.05) with JIA and for all but one the direction of association was the same as in RA. Of the eight loci that were tested, three showed significant association in the US cohort. A novel JIA susceptibility locus was identified, CD247, which represents another JIA susceptibility gene whose protein product is important in T-cell activation and signalling. The authors have also confirmed association of the PTPN2 and IL2RA genes with JIA, both reaching genome-wide significance in the combined analysis.

  3. Using natural language processing for identification of herpes zoster ophthalmicus cases to support population-based study.

    PubMed

    Zheng, Chengyi; Luo, Yi; Mercado, Cheryl; Sy, Lina; Jacobsen, Steven J; Ackerson, Brad; Lewin, Bruno; Tseng, Hung Fu

    2018-06-19

    Background Diagnosis codes are inadequate for accurately identifying herpes zoster ophthalmicus (HZO), and there is a significant lack of population-based studies on HZO due to the high expense of manually reviewing medical records. Objectives To assess whether HZO can be identified from clinical notes using natural language processing (NLP), and to investigate the epidemiology of HZO in the HZ population based on the developed approach. Design A retrospective cohort analysis of 49,914 southern California residents aged over 18 years who had a new diagnosis of HZ. Methods An NLP-based algorithm was developed and validated against a manually curated validation dataset (n=461), then applied to over 1 million clinical notes associated with the study population. HZO and non-HZO cases were compared by age, sex, race, and comorbidities, and the accuracy of the NLP algorithm was measured. Results The NLP algorithm achieved 95.6% sensitivity and 99.3% specificity. Compared to diagnosis codes, NLP identified significantly more HZO cases in the HZ population (13.9% versus 1.7%). Compared to the non-HZO group, the HZO group was older, included more males and more Whites, and had more outpatient visits. Conclusions We developed and validated an automatic method to identify HZO cases with high accuracy. As one of the largest studies on HZO, our finding emphasizes the importance of preventing HZ in the elderly population. This method can be a valuable tool to support population-based studies and clinical care of HZO in the era of big data. This article is protected by copyright. All rights reserved.

  4. Completeness and consistency in recording information in the tuberculosis case register, Cambodia, China and Viet Nam.

    PubMed

    Hoa, N B; Wei, C; Sokun, C; Lauritsen, J M; Rieder, H L

    2010-10-01

    Setting Tuberculosis (TB) case registers in Cambodia, two provinces in China, and Viet Nam. Objective To determine the completeness and consistency of information for quarterly reports on case finding and treatment outcome. Design A representative sample of TB case registers was selected in Cambodia, in two provinces in China, and in Viet Nam; quarterly reports were reproduced from double-entered, validated data to determine completeness and consistency. Results The dataset comprised 37,635 patient records over 2 calendar years. Only 0.2%, 3.6% and 1.1% of cases, respectively, in Cambodia, the two Chinese provinces, and Viet Nam could not be classified for the quarterly report on case finding. When the treatment outcome was reported as cured, it was correct in 99.9%, 85.7%, and 98.5% of cases in the respective three jurisdictions; errors were mostly due to misclassification of treatment completion as cure. Under-reporting of failures was more frequent than over-reporting in Cambodia and Viet Nam, while in the two Chinese provinces 84% of reported failures did not actually meet the bacteriological criterion. Conclusion This evaluation demonstrates that the recording of essential information is exemplary in all three countries; it will nonetheless be essential to carefully supervise staff's ability to correctly determine TB treatment outcomes.

  5. Pleomorphic Liposarcoma Arising in a Lipoleiomyosarcoma of the Uterus: Report of a Case With Genetic Profiling by a Next Generation Sequencing Panel.

    PubMed

    Schoolmeester, J Kenneth; Stamatakos, Michael D; Moyer, Ann M; Park, Kay J; Fairbairn, Melissa; Fader, Amanda N

    2016-07-01

    Uterine tumors with adipocytic differentiation are very uncommon. Mature adipocytes are sometimes seen as an element of smooth muscle neoplasms, more often as lipoleiomyoma, but also in the rare lipoleiomyosarcoma. Exceptional cases have been reported of various subtypes of liposarcoma associated with uterine smooth muscle tumors with or without adipocytic differentiation. We present a case of pleomorphic liposarcoma arising in a lipoleiomyosarcoma of the uterus. Genomic profiling was performed using a validated next generation sequencing panel covering 410 common cancer genes. Alterations were identified in TP53, PTEN, RB1, FAT1 and TERT. The patient's presentation and clinical course as well as the tumor's morphologic, immunohistochemical and molecular genetic findings are reviewed.

  6. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393

  7. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
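
    For contrast with the authors' proposal, the conventional scheme they improve on - k-fold cross-validation for parameter selection, followed by a single held-out test set - can be sketched in a few lines of Python (a generic illustration, not the paper's "cross-validation and cross-testing" procedure itself):

    ```python
    def k_fold_splits(n, k):
        """Yield (train_idx, val_idx) pairs partitioning indices 0..n-1 into k
        contiguous, near-equal validation folds (no shuffling, for brevity)."""
        sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
        folds, start = [], 0
        for size in sizes:
            folds.append(list(range(start, start + size)))
            start += size
        for i, val_idx in enumerate(folds):
            train_idx = [j for f_i, fold in enumerate(folds) if f_i != i for j in fold]
            yield train_idx, val_idx

    # Conventional usage: reserve a test set first, tune parameters only on the rest.
    n_samples, n_test = 100, 20
    test_idx = list(range(n_samples - n_test, n_samples))  # held out, never used for tuning
    for train_idx, val_idx in k_fold_splits(n_samples - n_test, k=5):
        pass  # fit each candidate parameter setting on train_idx, score it on val_idx
    # After selecting parameters, refit on all 80 tuning samples and evaluate once on test_idx.
    ```

    The trade-off the abstract describes is visible here: every sample routed to `test_idx` buys a less noisy estimate of generalization performance at the cost of fewer samples for choosing and fitting parameters.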

  8. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: Spent nuclear fuel (SNF) and reactivity calculations; Options for altering reactor life; Adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; Fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); Transportation cask shipping to and storage at an Interim Storage Facility (ISF); Reactor pool allocation options; and Disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. 
All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  9. Current Perception Threshold for Assessment of the Neurological Components of Hand-Arm Vibration Syndrome: A Review

    PubMed Central

    Kurozawa, Youichi; Hosoda, Takenobu; Nasu, Yoshiro

    2010-01-01

    Current perception threshold (CPT) has been proposed as a quantitative method for assessment of peripheral sensory nerve function. The aim of this review of selected reports is to provide an overview of CPT measurement for the assessment of the neurological component of hand-arm vibration syndrome (HAVS). The CPT values at 2000 Hz significantly increased for patients with HAVS. This result supports the previous histological findings that demyelination is found predominantly in the peripheral nerves in the hands of men exposed to hand-arm vibration. Diagnostic sensitivity and specificity were high for severe cases of Stockholm sensorineural (SSN) stage 3 compared with non-exposed controls, but not high for mild cases of SSN stage 1 or 2 and for carpal tunnel syndrome associated with HAVS. However, there are only a few studies on the diagnostic validity of the CPT test for the neurological components of HAVS. Further research is needed and should include diagnostic validity and standardizing of measurement conditions such as skin temperature. PMID:24031119

  10. A novel approach for medical research on lymphomas

    PubMed Central

    Conte, Cécile; Palmaro, Aurore; Grosclaude, Pascale; Daubisse-Marliac, Laetitia; Despas, Fabien; Lapeyre-Mestre, Maryse

    2018-01-01

    Abstract The use of claims databases to study lymphomas in real-life conditions is a crucial issue for the future, and it is essential to develop validated algorithms for identifying lymphomas in these databases. The aim of this study was to assess the validity of diagnosis codes in the French health insurance database for identifying incident cases of lymphoma, using the results of a regional cancer registry as the gold standard. Between 2010 and 2013, incident lymphomas were identified in hospital data through two selection algorithms. The results of the identification process and the characteristics of incident lymphoma cases were compared with data from the Tarn Cancer Registry. Each algorithm's performance was assessed by estimating sensitivity, positive predictive value, specificity (SPE), and negative predictive value. During the period, the registry recorded 476 incident cases of lymphoma, of which 52 were Hodgkin lymphomas and 424 non-Hodgkin lymphomas. For the corresponding area and period, algorithm 1 yielded a number of incident cases close to that of the registry, whereas algorithm 2 overestimated the number of incident cases by approximately 30%. Both algorithms were highly specific (SPE = 99.9%) but only moderately sensitive. The comparative analysis shows that similar distributions and characteristics are observed in both sources. Given these findings, claims databases can be considered a pertinent and powerful tool for conducting medico-economic or pharmacoepidemiological studies of lymphomas. PMID:29480830

  11. A New Fracture Risk Assessment Tool (FREM) Based on Public Health Registries.

    PubMed

    Rubin, Katrine Hass; Möller, Sören; Holmberg, Teresa; Bliddal, Mette; Søndergaard, Jens; Abrahamsen, Bo

    2018-06-20

    Some conditions are already known to be associated with an increased risk of osteoporotic fractures; others may also be significant indicators of increased risk. The aim of the study was to identify conditions for inclusion in a fracture prediction model (FREM, Fracture Risk Evaluation Model) for automated case finding of individuals at high risk of hip fracture or major osteoporotic fracture (MOF). We included the total population of Denmark aged 45+ years (N = 2,495,339). All hospital diagnoses from 1998 to 2012 were used as candidate conditions, and the primary outcome was MOF during 2013. Our cohort was split randomly 50-50 into a development and a validation dataset for deriving and validating the predictive model. We applied backward selection on ICD-10 codes by logistic regression to develop an age-adjusted and sex-stratified model. FREM for MOF included 38 and 43 risk factors for women and men, respectively. Testing FREM for MOF in the validation cohort showed good accuracy: it produced ROC curves with an AUC of 0.750 (95% CI 0.741, 0.795) and 0.752 (95% CI 0.743, 0.761) for women and men, respectively. FREM for hip fractures included 32 risk factors for both genders and showed even higher accuracy in the validation cohort, with AUCs of 0.874 (95% CI 0.869, 0.879) and 0.851 (95% CI 0.841, 0.861) for women and men. We have developed and tested a prediction model (FREM) for identifying men and women at high risk of MOF or hip fracture using solely existing administrative data. FREM could be employed either at the point of care, integrated into Electronic Patient Record systems to alert physicians, or deployed centrally in a national case-finding strategy in which patients at high fracture risk are invited to a focused DXA program. This article is protected by copyright. All rights reserved.
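    The AUC (c-statistic) reported for FREM has a concrete interpretation: it is the probability that a randomly chosen fracture case receives a higher predicted risk than a randomly chosen non-case. A self-contained sketch of that rank-based (Mann-Whitney) computation, using toy scores that are not from the study:

```python
def c_statistic(case_scores, control_scores):
    """Concordance (AUC): probability that a random case outranks a
    random control; ties count as one half."""
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

# Toy predicted risks, for illustration only
cases = [0.9, 0.8, 0.55]
controls = [0.4, 0.5, 0.6, 0.3]
c_stat = c_statistic(cases, controls)
```

    An AUC of 0.874, as for hip fractures above, means the model ranks a true case above a non-case about 87% of the time.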

  12. Using Constructive Alignment to Improve Student Research and Writing Skills: A Case Study of a Master's Program in Real Estate Management

    ERIC Educational Resources Information Center

    Azasu, Samuel; Berggren, Björn

    2015-01-01

    The purpose of the paper is to describe and analyse efforts to integrate research into teaching in a postgraduate degree program in real estate management. The long term goals of the changes were to increase graduation rates as well as the quality of dissertations. In order to validate our findings, the data for this paper emanate from a three…

  13. Computing preimages of Boolean networks.

    PubMed

    Klotz, Johannes; Bossert, Martin; Schober, Steffen

    2013-01-01

    In this paper we present an algorithm, based on the sum-product algorithm, that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluated the algorithm on randomly constructed Boolean networks and on a regulatory network of Escherichia coli, and found that it gives a valid solution in most cases.
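    To make the preimage problem concrete: given an output vector of a Boolean network, one asks which input vectors map to it. The sketch below solves a toy instance by exhaustive enumeration; note that this brute force is exponential in the number of inputs, whereas the paper's sum-product approach is linear in the number of nodes. The network itself is invented for illustration:

```python
from itertools import product

def network(x):
    """Toy feed-forward Boolean network: 3 inputs, 2 outputs."""
    a, b, c = x
    return (a and not b, b or c)

def preimage(net, target, n_inputs):
    """All input vectors that the network maps onto `target`
    (exhaustive search, exponential in n_inputs)."""
    return [x for x in product((False, True), repeat=n_inputs)
            if net(x) == target]

sols = preimage(network, (False, True), 3)
```

    Here 5 of the 8 possible input vectors produce the target output, so the preimage is non-unique, which is why a probabilistic search that returns *some* valid element is a reasonable goal.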

  14. Can species distribution models really predict the expansion of invasive species?

    PubMed

    Barbet-Massin, Morgane; Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help design and prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet SDMs have been regularly criticized for several strong limitations, such as violation of the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Biological invasions nevertheless allow the usefulness of SDMs to be tested, by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the latest stage of invasion (independent validation points) with the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of the validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies of other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology.

  15. Can species distribution models really predict the expansion of invasive species?

    PubMed Central

    Rome, Quentin; Villemant, Claire; Courchamp, Franck

    2018-01-01

    Predictive studies are of paramount importance for biological invasions, one of the biggest threats to biodiversity. To help design and prioritize management strategies, species distribution models (SDMs) are often used to predict the potential invasive range of introduced species. Yet SDMs have been regularly criticized for several strong limitations, such as violation of the equilibrium assumption during the invasion process. Unfortunately, validation studies with independent data are too scarce to assess the predictive accuracy of SDMs in invasion biology. Biological invasions nevertheless allow the usefulness of SDMs to be tested, by retrospectively assessing whether they would have accurately predicted the latest ranges of invasion. Here, we assess the accuracy of SDMs in predicting the expansion of invasive species. We used temporal occurrence data for the Asian hornet Vespa velutina nigrithorax, a species native to China that is invading Europe at a very fast rate. Specifically, we compared occurrence data from the latest stage of invasion (independent validation points) with the climate suitability distribution predicted from models calibrated with data from the early stage of invasion. Despite the invasive species not yet being at equilibrium, the predicted climate suitability of the validation points was high. SDMs can thus adequately predict the spread of V. v. nigrithorax, which appears to be, at least partially, climatically driven. In the case of V. v. nigrithorax, the predictive accuracy of SDMs was slightly but significantly better when models were calibrated with invasive data only, excluding native data. Although more validation studies of other invasion cases are needed to generalize our results, our findings are an important step towards validating the use of SDMs in invasion biology. PMID:29509789

  16. EG-09 EPIGENETIC PROFILING REVEALS A CpG HYPERMETHYLATION PHENOTYPE (CIMP) ASSOCIATED WITH WORSE PROGRESSION-FREE SURVIVAL IN MENINGIOMA

    PubMed Central

    Olar, Adriana; Wani, Khalida; Mansouri, Alireza; Zadeh, Gelareh; Wilson, Charmaine; DeMonte, Franco; Fuller, Gregory; Jones, David; Pfister, Stefan; von Deimling, Andreas; Sulman, Erik; Aldape, Kenneth

    2014-01-01

    BACKGROUND: Methylation profiling of solid tumors has revealed biologic subtypes, often with clinical implications. The methylation profiles of meningioma and their clinical implications are not well understood. METHODS: Ninety-two meningioma samples (n = 44 test set and n = 48 validation set) were profiled using the Illumina HumanMethylation450 BeadChip. Unsupervised clustering and analyses of recurrence-free survival (RFS) were performed. RESULTS: Unsupervised clustering of the test set using approximately 900 highly variable markers identified two clearly defined methylation subgroups. One of the groups (n = 19) showed global hypermethylation of a set of markers, analogous to the CpG island methylator phenotype (CIMP). These findings were reproducible in the validation set, with 18/48 samples showing the CIMP-positive phenotype. Importantly, of 347 highly variable markers common to both the test and validation set analyses, 107 defined CIMP in the test set and 94 defined CIMP in the validation set, with an overlap of 83 markers between the two datasets. This number is much greater than expected by chance, indicating reproducibility of the hypermethylated markers that define CIMP in meningioma. With respect to clinical correlation, the 37 CIMP-positive cases displayed significantly shorter RFS than the 55 non-CIMP cases (hazard ratio 2.9, p = 0.013). In an effort to develop a preliminary outcome predictor, a 155-marker subset correlated with RFS was identified in the test dataset. When interrogated in the validation dataset, this 155-marker subset showed a statistical trend (p < 0.1) towards distinguishing survival groups. CONCLUSIONS: This study establishes the existence of a CIMP phenotype in meningioma, involving a substantial proportion of samples (37/92, 40%), with clinical implications. Ongoing work will expand this cohort and examine additional biologic differences (mutational and DNA copy number analysis) to further characterize the aberrant methylation subtype in meningioma. CIMP positivity with aberrant methylation in recurrent/malignant meningioma suggests a potential therapeutic target for clinically aggressive cases.

  17. A statistical approach to selecting and confirming validation targets in -omics experiments

    PubMed Central

    2012-01-01

    Background Genomic technologies are, by their very nature, designed for hypothesis generation. In some cases, the hypotheses that are generated require that genome scientists confirm findings about specific genes or proteins. But one major advantage of high-throughput technology is that global genetic, genomic, transcriptomic, and proteomic behaviors can be observed. Manual confirmation of every statistically significant genomic result is prohibitively expensive. This has led researchers in genomics to adopt the strategy of confirming only a handful of the most statistically significant results, a small subset chosen for biological interest, or a small random subset. But there is no standard approach for selecting and quantitatively evaluating validation targets. Results Here we present a new statistical method and approach for statistically validating lists of significant results based on confirming only a small random sample. We apply our statistical method to show that the usual practice of confirming only the most statistically significant results does not statistically validate result lists. We analyze an extensively validated RNA-sequencing experiment to show that confirming a random subset can statistically validate entire lists of significant results. Finally, we analyze multiple publicly available microarray experiments to show that statistically validating random samples can both (i) provide evidence to confirm long gene lists and (ii) save thousands of dollars and hundreds of hours of labor over manual validation of each significant result. Conclusions For high-throughput -omics studies, statistical validation is a cost-effective and statistically valid approach to confirming lists of significant results. PMID:22738145
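    The core statistical idea above, confirming a random subset and carrying the conclusion to the whole list, can be sketched as a binomial confidence bound on the list's true-positive proportion. The Wilson interval below and the sample numbers are illustrative assumptions, not the authors' specific procedure:

```python
import math

def wilson_lower_bound(confirmed, sampled, z=1.96):
    """Lower ~95% Wilson confidence bound on the proportion of an entire
    significant-result list that would confirm, estimated from a random
    validated subset of it."""
    p = confirmed / sampled
    denom = 1 + z**2 / sampled
    centre = p + z**2 / (2 * sampled)
    margin = z * math.sqrt(p * (1 - p) / sampled + z**2 / (4 * sampled**2))
    return (centre - margin) / denom

# Hypothetical: 28 of 30 randomly sampled hits confirmed in the lab
lb = wilson_lower_bound(28, 30)
```

    With these toy numbers the bound is roughly 0.79, i.e. after validating only 30 randomly chosen targets one could claim that at least about 79% of the full list is expected to confirm. Sampling the *most significant* hits instead would break this logic, because they are not representative of the list.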

  18. Rediscovery rate estimation for assessing the validation of significant findings in high-throughput studies.

    PubMed

    Ganna, Andrea; Lee, Donghwan; Ingelsson, Erik; Pawitan, Yudi

    2015-07-01

    It is common and advised practice in biomedical research to validate experimental or observational findings in a population different from the one in which the findings were initially assessed. This practice increases the generalizability of the results and decreases the likelihood of reporting false-positive findings. Validation becomes critical when dealing with high-throughput experiments, where the large number of tests increases the chance of observing false-positive results. In this article, we review common approaches to determining statistical thresholds for validation and describe the factors influencing the proportion of significant findings from a 'training' sample that are replicated in a 'validation' sample. We refer to this proportion as the rediscovery rate (RDR). In high-throughput studies, the RDR is a function of false-positive rate and power in both the training and validation samples. We illustrate the application of the RDR using simulated data and real data examples from metabolomics experiments. We further describe an online tool to calculate the RDR using t-statistics. We foresee two main applications. First, if the validation sample has not yet been collected, the RDR can be used to decide the optimal combination of the proportion of findings taken to validation and the size of the validation study. Second, if a validation study has already been done, the RDR estimated using the training data can be compared with the observed RDR from the validation data; hence, the success of the validation study can be assessed. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
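    The RDR concept is easy to reproduce in a toy simulation: draw z-statistics for a mix of true and null effects, keep the training-significant ones, and count how many stay significant in an independent validation draw. Everything below (effect size, fraction of true signals, thresholds) is an illustrative assumption, not the authors' tool:

```python
import random

def simulate_rdr(n_tests=2000, frac_true=0.1, effect=3.0, z_crit=1.96,
                 seed=7):
    """Toy rediscovery-rate simulation: RDR = share of findings that are
    significant in training AND again in an independent validation
    sample. True signals have mean z = effect; nulls have mean 0."""
    rng = random.Random(seed)
    discovered = rediscovered = 0
    n_true = int(frac_true * n_tests)
    for i in range(n_tests):
        mu = effect if i < n_true else 0.0
        if abs(rng.gauss(mu, 1.0)) > z_crit:       # training sample
            discovered += 1
            if abs(rng.gauss(mu, 1.0)) > z_crit:   # validation sample
                rediscovered += 1
    return rediscovered / discovered

rdr = simulate_rdr()
```

    With these settings most training hits are true signals tested at high power, so the RDR lands well above the ~5% expected for pure noise; lowering `effect` or `frac_true` drags it down, which is exactly the dependence on power and false-positive rate described in the abstract.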

  19. Validation of central line-associated bloodstream infection data in a voluntary reporting state: New Mexico.

    PubMed

    Thompson, Deborah L; Makvandi, Monear; Baumbach, Joan

    2013-02-01

    In New Mexico, voluntary submission of central line-associated bloodstream infection (CLABSI) surveillance data via the National Healthcare Safety Network (NHSN) began in July 2008. Validation of CLABSI data is necessary to ensure the quality, accuracy, and reliability of surveillance efforts. We conducted a retrospective medical record review of 123 individuals with positive blood cultures who were admitted to adult intensive care units (ICU) at 6 New Mexico hospitals between November 2009 and March 2010. Blinded reviews were conducted independently by pairs of reviewers using standardized data collection instruments. Findings were compared between reviewers and with NHSN data. Discordant cases were reviewed and reconciled with hospital infection preventionists. Initially, 118 individuals were identified for medical record review. Seven ICU CLABSI events were identified by the reviewers. Data submitted to the NHSN revealed 8 ICU CLABSI events, 5 of which had not been identified for medical record review and 3 of which had been determined by reviewers not to be ICU CLABSI cases. Comparison of final case determinations for all 123 individuals with NHSN data resulted in a sensitivity of 66.7%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 96.5% for ICU CLABSI surveillance. There is a need for ongoing quality improvement and validation processes to ensure accurate NHSN data. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  20. A realist evaluation of the management of a well- performing regional hospital in Ghana

    PubMed Central

    2010-01-01

    Background Realist evaluation offers an interesting approach to the evaluation of interventions in complex settings, but it has been little applied in health care. We report on a realist case study of a well-performing hospital in Ghana and show how such a realist evaluation design can help overcome the limited external validity of a traditional case study. Methods We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high-commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. Results We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as the underlying mechanisms linking the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and under which conditions. Analysing the links between intervention, mechanism and outcome increases the explanatory power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development.
Conclusion This case suggests that a well-balanced HRM bundle can stimulate the organisational commitment of health workers. Such practices can be implemented even within narrow decision spaces. Realist evaluation provides an appropriate approach to increasing the usefulness of case studies to managers and policymakers. PMID:20100330

  1. A realist evaluation of the management of a well-performing regional hospital in Ghana.

    PubMed

    Marchal, Bruno; Dedzo, McDamien; Kegels, Guy

    2010-01-25

    Realist evaluation offers an interesting approach to the evaluation of interventions in complex settings, but it has been little applied in health care. We report on a realist case study of a well-performing hospital in Ghana and show how such a realist evaluation design can help overcome the limited external validity of a traditional case study. We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high-commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as the underlying mechanisms linking the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and under which conditions. Analysing the links between intervention, mechanism and outcome increases the explanatory power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. This case suggests that a well-balanced HRM bundle can stimulate the organisational commitment of health workers. Such practices can be implemented even within narrow decision spaces. Realist evaluation provides an appropriate approach to increasing the usefulness of case studies to managers and policymakers.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential for studying the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it with previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. Methods We compared the use of the permutation test with the use of benchmark values of the c-statistic following from a previously proposed framework for judging the transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set than in the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations.
To correctly interpret the c-statistic found at external validation, it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  3. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential for studying the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it with previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. We compared the use of the permutation test with the use of benchmark values of the c-statistic following from a previously proposed framework for judging the transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set than in the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation, it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  4. Efficient hit-finding approaches for histone methyltransferases: the key parameters.

    PubMed

    Ahrens, Thomas; Bergner, Andreas; Sheppard, David; Hafenbradl, Doris

    2012-01-01

    For many novel epigenetics targets, chemical ligand space and structural information were until recently limited, and for some targets they are still largely unknown. Hit-finding campaigns are therefore dependent on large and chemically diverse libraries. In the specific case of the histone methyltransferase G9a, the authors were able to apply an efficient process of intelligent compound selection for primary screening, rather than screening the full diverse deck of 900,000 compounds to identify hits. A number of different virtual screening methods were applied for the compound selection, and the results were analyzed in the context of their individual success rates. For the primary screening of 2112 compounds, a FlashPlate assay format and a full-length histone H3.1 substrate were employed. Validation of hit compounds was performed using the orthogonal fluorescence lifetime technology. Rated by purity and IC50 value, 18 compounds (0.9% of the compound screening deck) were finally considered validated primary G9a hits. The hit-finding approach led to the identification of novel chemotypes, which can facilitate hit-to-lead projects. This study demonstrates the power of virtual screening technologies for novel, therapeutically relevant epigenetics protein targets.

  5. Real-time web-based assessment of total population risk of future emergency department utilization: statewide prospective active case finding study.

    PubMed

    Hu, Zhongkai; Jin, Bo; Shin, Andrew Y; Zhu, Chunqing; Zhao, Yifan; Hao, Shiying; Zheng, Le; Fu, Changlin; Wen, Qiaojun; Ji, Jun; Li, Zhen; Wang, Yong; Zheng, Xiaolin; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Rogow, Todd; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-01-13

    An easily accessible, real-time, Web-based utility to assess patients' risk of future emergency department (ED) visits can help health care providers guide the allocation of resources to better manage higher-risk patient populations and thereby reduce unnecessary use of EDs. Our main objective was to develop a Health Information Exchange-based surveillance system for next-6-month ED risk in the state of Maine. Data on electronic medical record (EMR) encounters integrated by HealthInfoNet (HIN), Maine's Health Information Exchange, were used to develop the Web-based surveillance system for predicting a population's future 6-month ED risk. For model development, a retrospective cohort of 829,641 patients with comprehensive clinical histories from January 1 to December 31, 2012 was used for training; the model was then tested with a prospective cohort of 875,979 patients from July 1, 2012, to June 30, 2013. The multivariate statistical analysis identified 101 variables predictive of the defined future 6-month risk of an ED visit: 4 age groups, history of 8 different encounter types, history of 17 primary and 8 secondary diagnoses, 8 specific chronic diseases, 28 laboratory test results, history of 3 radiographic tests, and history of 25 outpatient prescription medications. The c-statistics for the retrospective and prospective cohorts were 0.739 and 0.732, respectively. Integration of our method into the HIN secure statewide data system prospectively validated its performance in real time. Cluster analysis in both the retrospective and prospective analyses revealed discrete subpopulations of high-risk patients, grouped around multiple "anchoring" demographics and chronic conditions. With the Web-based population risk-monitoring enterprise dashboards, the effectiveness of the active case-finding algorithm has been validated by clinicians and caregivers in Maine.
The active case-finding model and its associated real-time Web-based app were designed to track the evolving nature of total population risk, in a longitudinal manner, for ED visits across all payers, all diseases, and all age groups. Providers can therefore implement targeted care management strategies for patient subgroups with similar patterns of clinical history, driving the delivery of more efficient and effective health care interventions. To the best of our knowledge, this prospectively validated, EMR-based, Web-based tool is the first to allow real-time total population risk assessment for statewide ED visits.

  6. Case definitions for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME): a systematic review

    PubMed Central

    Brurberg, Kjetil Gundro; Fønhus, Marita Sporstøl; Larun, Lillebeth; Flottorp, Signe; Malterud, Kirsti

    2014-01-01

    Objective To identify case definitions for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) and to explore how the validity of case definitions can be evaluated in the absence of a reference standard. Design Systematic review. Setting International. Participants A literature search, updated as of November 2013, led to the identification of 20 case definitions and the inclusion of 38 validation studies. Primary and secondary outcome measures Validation studies were assessed for risk of bias and categorised according to three validation models: (1) independent application of several case definitions to the same population, (2) sequential application of different case definitions to patients diagnosed with CFS/ME by one set of diagnostic criteria or (3) comparison of prevalence estimates from different case definitions applied to different populations. Results A total of 38 studies contributed data of sufficient quality and consistency for the evaluation of validity, with CDC-1994/Fukuda the most frequently applied case definition. No study rigorously assessed the reproducibility or feasibility of case definitions. Validation studies were small, with methodological weaknesses and inconsistent results. No empirical data indicated that any case definition specifically identified patients with a neuroimmunological condition. Conclusions Classification of patients according to severity and symptom patterns, with the aim of predicting prognosis or effectiveness of therapy, seems useful. Development of further case definitions of CFS/ME should be given a low priority. Consistency in research can be achieved by applying diagnostic criteria that have been subjected to systematic evaluation. PMID:24508851

  7. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily on empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operations is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller changes during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.

  8. Evaluation of the national Notifiable Diseases Surveillance System for dengue fever in Taiwan, 2010-2012.

    PubMed

    McKerr, Caoimhe; Lo, Yi-Chun; Edeghere, Obaghe; Bracebridge, Sam

    2015-03-01

In Taiwan, around 1,500 cases of dengue fever are reported annually and incidence has been increasing over time. A national web-based Notifiable Diseases Surveillance System (NDSS) has been in operation since 1997 to monitor incidence and trends and support case and outbreak management. We present the findings of an evaluation of the NDSS to ascertain the extent to which dengue fever surveillance objectives are being achieved. We extracted the NDSS data on all laboratory-confirmed dengue fever cases reported from 1 January 2010 to 31 December 2012 to assess and describe key system attributes based on the Centers for Disease Control and Prevention surveillance evaluation guidelines. The system's structure and processes were delineated and operational staff interviewed using a semi-structured questionnaire. Crude and age-adjusted incidence rates were calculated and key demographic variables were summarised to describe reporting activity. Data completeness and validity were described across several variables. Of 5,072 laboratory-confirmed dengue fever cases reported during 2010-2012, 4,740 (93%) were reported during July to December. The system was judged to be simple due to its minimal reporting steps. Data collected on key variables were correctly formatted and usable in > 90% of cases, demonstrating good data completeness and validity. The information collected was considered relevant by users with high acceptability. Adherence to guidelines for 24-hour reporting was 99%. Of 720 cases (14%) recorded as travel-related, 111 (15%) had an onset >14 days after return, highlighting the potential for misclassification. Information on hospitalization was missing for 22% of cases. The calculated predictive value positive (PVP) was 43%. The NDSS for dengue fever surveillance is a robust, well maintained and acceptable system that supports the collection of complete and valid data needed to achieve the surveillance objectives. The simplicity of the system engenders compliance, leading to timely and accurate reporting. Completeness of hospitalization information could be further improved to allow assessment of severity of illness. To minimize misclassification, an algorithm to accurately classify travel cases should be established.

  9. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time and resource intensive efforts to establish a trusted environment to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Association of TNF, MBL, and VDR Polymorphisms with Leprosy Phenotypes

    PubMed Central

    Sapkota, Bishwa R.; Macdonald, Murdo; Berrington, William R.; Misch, E. Ann; Ranjit, Chaman; Siddiqui, M. Ruby; Kaplan, Gilla; Hawn, Thomas R.

    2010-01-01

Background Although genetic variants in tumor necrosis factor (TNF), mannose binding lectin (MBL), and the vitamin D receptor (VDR) have been associated with leprosy clinical outcomes, these findings have not been extensively validated. Methods We used a case-control study design with 933 patients in Nepal, which included 240 patients with type I reversal reaction (RR), and 124 patients with erythema nodosum leprosum (ENL) reactions. We compared genotype frequencies in 933 cases and 101 controls of 7 polymorphisms, including a promoter region variant in TNF (G−308A), three polymorphisms in MBL (C154T, G161A and G170A), and three variants in VDR (FokI, BsmI, and TaqI). Results We observed an association between TNF −308A and protection from leprosy with an odds ratio (OR) of 0.52 (95% confidence interval (CI) of 0.29 to 0.95, P = 0.016). MBL polymorphism G161A was associated with protection from lepromatous leprosy (OR (95% CI) = 0.33 (0.12–0.85), P = 0.010). VDR polymorphisms were not associated with leprosy phenotypes. Conclusion These results confirm previous findings of an association of TNF −308A with protection from leprosy and MBL polymorphisms with protection from lepromatous leprosy. The statistical significance was modest and will require further study for conclusive validation. PMID:20650301

  11. Is the Acute NMDA Receptor Hypofunction a Valid Model of Schizophrenia?

    PubMed Central

    Adell, Albert; Jiménez-Sánchez, Laura; López-Gil, Xavier; Romón, Tamara

    2012-01-01

Several genetic, neurodevelopmental, and pharmacological animal models of schizophrenia have been established. This short review examines the validity of one of the most widely used pharmacological models of the illness, i.e., the acute administration of N-methyl-D-aspartate (NMDA) receptor antagonists in rodents. In some cases, data on chronic or prenatal NMDA receptor antagonist exposure have been introduced for comparison. The face validity of acute NMDA receptor blockade is granted inasmuch as hyperlocomotion and stereotypies induced by phencyclidine, ketamine, and MK-801 are regarded as a surrogate for the positive symptoms of schizophrenia. In addition, the loss of parvalbumin-containing cells (which is one of the most compelling findings in postmortem schizophrenia brain) following NMDA receptor blockade adds construct validity to this model. However, the lack of changes in glutamic acid decarboxylase (GAD67) is at variance with human studies. It is possible that changes in GAD67 are more reflective of the neurodevelopmental condition of schizophrenia. Finally, the model also has predictive validity, in that the behavioral and transmitter changes it induces in rodents are responsive to antipsychotic treatment. Overall, although not devoid of drawbacks, the acute administration of NMDA receptor antagonists can be considered a good model of schizophrenia bearing a satisfactory degree of validity. PMID:21965469

  12. Death by 'ice': fatal methamphetamine intoxication of a body packer case detected by postmortem computed tomography (PMCT) and validated by autopsy.

    PubMed

    Bin Abdul Rashid, Saiful Nizam; Rahim, Amir Saad Abdul; Thali, Michael J; Flach, Patricia M

    2013-03-01

    Fatal acute methamphetamine (MA) poisoning in cases of internal drug trafficking is rarely described in the literature. This case study reports an MA 'body packer' who died from fatal methamphetamine intoxication due to leaking drug packages in the alimentary tract. The deceased was examined by postmortem computed tomography (PMCT), and the results were correlated to subsequent autopsy and toxicological findings. The deceased was arrested by the police when he was found disoriented in the city of Kuala Lumpur. He was transferred to the emergency department on suspicion of drug abuse. The initial drug screening was reactive for amphetamines. Shortly after admission to the hospital, he died despite rigorous resuscitation attempts. The postmortem plain chest and abdominal radiographs revealed multiple suspicious opacities in the gastrointestinal tract attributable to body packages. An unenhanced whole body PMCT revealed twenty-five drug packages, twenty-four in the stomach and one in the transverse colon. At least two were disintegrating, and therefore leaking. The autopsy findings were consistent with the PMCT results. Toxicology confirmed the diagnosis of fatal methamphetamine intoxication.

  13. Simulation validation and management

    NASA Astrophysics Data System (ADS)

    Illgen, John D.

    1995-06-01

Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, they have evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique application of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper will describe the ISTI-developed methodology and how CASE tools are used in its support. Case studies will be discussed.

  14. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  15. Assessment and associated features of prolonged grief disorder among Chinese bereaved individuals.

    PubMed

    Li, Jie; Prigerson, Holly G

    2016-04-01

Most research on the assessment and characteristics of prolonged grief disorder (PGD) has been conducted in Western bereaved samples. Limited information about PGD in Chinese samples exists. This study aims to validate the Chinese version of the Inventory of Complicated Grief (ICG), examine the distinctiveness of PGD symptoms from symptoms of bereavement-related depression and anxiety, and explore the prevalence of PGD in a Chinese sample. Responses from 1358 bereaved Chinese adults were collected through an on-line survey. They completed the Chinese version of the ICG and a questionnaire measuring depression and anxiety symptoms. The findings indicate that the Chinese ICG has sound validity and high internal consistency. The ICG cut-off score for PGD "caseness" in this large Chinese sample was 48. The distinctiveness of PGD symptoms from those of depression and anxiety was supported by the results of the confirmatory factor analysis and the fact that PGD occurred in isolation in the studied sample. The prevalence of PGD was 13.9%. The ICG is a valid instrument for use in the Chinese context. Several key characteristics of PGD in Chinese samples, either different from or comparable to findings in Western samples, may stimulate further research and clinical interest in the concept by providing empirical evidence from a large and influential Eastern country. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Implications of "Too Good to Be True" for Replication, Theoretical Claims, and Experimental Design: An Example Using Prominent Studies of Racial Bias.

    PubMed

    Francis, Gregory

    2016-01-01

    In response to concerns about the validity of empirical findings in psychology, some scientists use replication studies as a way to validate good science and to identify poor science. Such efforts are resource intensive and are sometimes controversial (with accusations of researcher incompetence) when a replication fails to show a previous result. An alternative approach is to examine the statistical properties of the reported literature to identify some cases of poor science. This review discusses some details of this process for prominent findings about racial bias, where a set of studies seems "too good to be true." This kind of analysis is based on the original studies, so it avoids criticism from the original authors about the validity of replication studies. The analysis is also much easier to perform than a new empirical study. A variation of the analysis can also be used to explore whether it makes sense to run a replication study. As demonstrated here, there are situations where the existing data suggest that a direct replication of a set of studies is not worth the effort. Such a conclusion should motivate scientists to generate alternative experimental designs that better test theoretical ideas.

  17. Implications of “Too Good to Be True” for Replication, Theoretical Claims, and Experimental Design: An Example Using Prominent Studies of Racial Bias

    PubMed Central

    Francis, Gregory

    2016-01-01

    In response to concerns about the validity of empirical findings in psychology, some scientists use replication studies as a way to validate good science and to identify poor science. Such efforts are resource intensive and are sometimes controversial (with accusations of researcher incompetence) when a replication fails to show a previous result. An alternative approach is to examine the statistical properties of the reported literature to identify some cases of poor science. This review discusses some details of this process for prominent findings about racial bias, where a set of studies seems “too good to be true.” This kind of analysis is based on the original studies, so it avoids criticism from the original authors about the validity of replication studies. The analysis is also much easier to perform than a new empirical study. A variation of the analysis can also be used to explore whether it makes sense to run a replication study. As demonstrated here, there are situations where the existing data suggest that a direct replication of a set of studies is not worth the effort. Such a conclusion should motivate scientists to generate alternative experimental designs that better test theoretical ideas. PMID:27713708

  18. Description and pilot results from a novel method for evaluating return of incidental findings from next-generation sequencing technologies.

    PubMed

    Goddard, Katrina A B; Whitlock, Evelyn P; Berg, Jonathan S; Williams, Marc S; Webber, Elizabeth M; Webster, Jennifer A; Lin, Jennifer S; Schrader, Kasmintan A; Campos-Outcalt, Doug; Offit, Kenneth; Feigelson, Heather Spencer; Hollombe, Celine

    2013-09-01

    The aim of this study was to develop, operationalize, and pilot test a transparent, reproducible, and evidence-informed method to determine when to report incidental findings from next-generation sequencing technologies. Using evidence-based principles, we proposed a three-stage process. Stage I "rules out" incidental findings below a minimal threshold of evidence and is evaluated using inter-rater agreement and comparison with an expert-based approach. Stage II documents criteria for clinical actionability using a standardized approach to allow experts to consistently consider and recommend whether results should be routinely reported (stage III). We used expert opinion to determine the face validity of stages II and III using three case studies. We evaluated the time and effort for stages I and II. For stage I, we assessed 99 conditions and found high inter-rater agreement (89%), and strong agreement with a separate expert-based method. Case studies for familial adenomatous polyposis, hereditary hemochromatosis, and α1-antitrypsin deficiency were all recommended for routine reporting as incidental findings. The method requires <3 days per topic. We establish an operational definition of clinically actionable incidental findings and provide documentation and pilot testing of a feasible method that is scalable to the whole genome.

  19. Validity of injecting drug users' self report of hepatitis A, B, and C.

    PubMed

    Schlicting, Erin G; Johnson, Mark E; Brems, Christiane; Wells, Rebecca S; Fisher, Dennis G; Reynolds, Grace

    2003-01-01

To test the validity of drug users' self-reports of diseases associated with drug use, in this case hepatitis A, B, and C. Injecting drug users (n = 653) were recruited and asked whether they had been diagnosed previously with hepatitis A, B, and/or C. These self-report data were compared to total hepatitis A antibody, hepatitis B core antibody, and hepatitis C antibody seromarkers as a means of determining the validity of the self-reported information. Anchorage, Alaska. Criteria for inclusion included being at least 18 years old; testing positive on urinalysis for cocaine metabolites, amphetamine, or morphine; and having visible signs of injection (track marks). Serological testing for hepatitis A, B, and C. Findings indicate high specificity, low sensitivity, and low kappa coefficients for all three self-report measures. Subgroup analyses revealed significant differences in sensitivity associated with previous substance abuse treatment experience for hepatitis B self-report and with gender for hepatitis C self-report. Given the low sensitivity, the validity of drug users' self-reported information on hepatitis should be considered with caution.
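The validity statistics reported above (sensitivity, specificity, kappa) can be computed directly from the 2×2 cross-classification of self-report against seromarkers. The sketch below uses invented data chosen to mimic the abstract's pattern of high specificity and low sensitivity; it is not the study's analysis code:

```python
def validity_stats(self_report, serology):
    """Sensitivity, specificity, and Cohen's kappa of a binary
    self-report against a serological reference (both 0/1 sequences)."""
    tp = sum(1 for s, r in zip(self_report, serology) if s == 1 and r == 1)
    tn = sum(1 for s, r in zip(self_report, serology) if s == 0 and r == 0)
    fp = sum(1 for s, r in zip(self_report, serology) if s == 1 and r == 0)
    fn = sum(1 for s, r in zip(self_report, serology) if s == 0 and r == 1)
    n = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)          # P(self-report+ | serology+)
    specificity = tn / (tn + fp)          # P(self-report- | serology-)
    p_observed = (tp + tn) / n            # raw agreement
    # Chance agreement expected from the marginal totals alone.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

# Invented illustration: many seropositive participants do not report
# a diagnosis, so sensitivity and kappa come out low.
serology    = [1, 1, 1, 1, 0, 0, 0, 0]
self_report = [1, 0, 0, 0, 0, 0, 0, 1]
sens, spec, kappa = validity_stats(self_report, serology)
```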

  20. Ontology-Based Method for Fault Diagnosis of Loaders.

    PubMed

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to address the shortcomings of the CBR method caused by a lack of relevant cases, ontology-based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study.

  1. Ontology-Based Method for Fault Diagnosis of Loaders

    PubMed Central

    Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-01-01

This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to address the shortcomings of the CBR method caused by a lack of relevant cases, ontology-based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study. PMID:29495646
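The four CBR steps named in the abstract (feature selection, case retrieval, case matching, case updating) can be sketched with a toy similarity-based retriever. All fault features, thresholds, and diagnoses below are invented for illustration and are not the paper's ontology or rules:

```python
# Hypothetical fault cases: normalized symptom features -> diagnosed cause.
case_base = [
    ({"oil_pressure": 0.2, "temp": 0.9, "noise": 0.1}, "worn hydraulic pump"),
    ({"oil_pressure": 0.8, "temp": 0.3, "noise": 0.9}, "loose gear bearing"),
    ({"oil_pressure": 0.9, "temp": 0.8, "noise": 0.2}, "blocked radiator"),
]

def similarity(a, b, features):
    """Mean feature closeness between two cases (1.0 = identical)."""
    return 1 - sum(abs(a[f] - b[f]) for f in features) / len(features)

def retrieve(query, threshold=0.8):
    """Case retrieval + case matching: return the best-matching stored
    diagnosis, or None so a rule-based (RBR) fallback can take over."""
    features = list(query)
    best_case, best_cause = max(
        case_base, key=lambda c: similarity(query, c[0], features))
    if similarity(query, best_case, features) >= threshold:
        return best_cause
    return None

def update(query, confirmed_cause):
    """Case updating: store a newly confirmed diagnosis for reuse."""
    case_base.append((dict(query), confirmed_cause))
```

When `retrieve` returns None, the hybrid design described in the abstract would fall back to SWRL-style rules rather than guessing from a weak match.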

  2. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  3. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  4. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  5. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  6. 42 CFR 493.569 - Consequences of a finding of noncompliance as a result of a validation inspection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... result of a validation inspection. 493.569 Section 493.569 Public Health CENTERS FOR MEDICARE & MEDICAID... validation inspection. (a) Laboratory with a certificate of accreditation. If a validation inspection results... validation inspection results in a finding that a CLIA-exempt laboratory is out of compliance with one or...

  7. Thermodynamics of higher dimensional black holes with higher order thermal fluctuations

    NASA Astrophysics Data System (ADS)

    Pourhassan, B.; Kokabi, K.; Rangyan, S.

    2017-12-01

In this paper, we consider higher-order corrections to the entropy, which come from thermal fluctuations, and find their effect on the thermodynamics of higher-dimensional charged black holes. The leading-order thermal fluctuation is a logarithmic term in the entropy, while the higher-order correction is proportional to the inverse of the original entropy. We calculate some thermodynamic quantities and obtain the effect of the logarithmic and higher-order entropy corrections on them. The validity of the first law of thermodynamics is investigated, and the Van der Waals equation of state of the dual picture is studied. We find that the five-dimensional black hole behaves as a Van der Waals fluid, but higher-dimensional cases do not exhibit such behavior. We find that thermal fluctuations are important for black hole stability and hence affect the unstable/stable black hole phase transition.
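Schematically, a corrected entropy of the kind described, a logarithmic leading correction plus an inverse-entropy higher-order term, can be written as (the coefficients $\alpha$ and $\beta$ are model-dependent and left generic here; this is a sketch of the general structure, not the paper's exact expression):

```latex
S \;=\; S_0 \;+\; \alpha \,\ln S_0 \;+\; \frac{\beta}{S_0} \;+\; \cdots
```

where $S_0$ is the uncorrected equilibrium entropy, the $\ln S_0$ term is the leading thermal fluctuation, and the $1/S_0$ term is the higher-order correction studied here.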

  8. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628

  9. A validated case definition for chronic rhinosinusitis in administrative data: a Canadian perspective.

    PubMed

    Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude

    2016-11-01

    Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients were reviewed and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides a balanced validity with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study has validated several coding algorithms; based on the results a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers. © 2016 ARS-AAOA, LLC.
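As a rough illustration of how such a claims-based definition is applied, the sketch below encodes the "≥2 CRS-coded claims within 2 years" rule. The function names, the 730-day window, and the per-patient claim format are assumptions for illustration, not the study's actual implementation:

```python
from datetime import date, timedelta

def is_crs_code(icd9):
    """CRS-related ICD-9 codes per the validated definition: 471.x or 473.x."""
    return icd9.startswith("471") or icd9.startswith("473")

def meets_crs_definition(claims, window_days=730):
    """claims: list of (service_date, icd9_code) tuples for one patient.
    True if at least two CRS-coded claims fall within a ~2-year window."""
    dates = sorted(d for d, code in claims if is_crs_code(code))
    # If any two qualifying claims lie within the window, some adjacent
    # pair of the sorted dates must lie within it as well.
    return any(b - a <= timedelta(days=window_days)
               for a, b in zip(dates, dates[1:]))
```

For example, a patient with a 473.9 claim in January 2011 and a 473.0 claim in March 2012 would enter the cohort, while a single isolated claim would not.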

  10. Diagnosis of a 64-year-old patient presenting with suspected lumbar spinal stenosis: an evidence-based case report

    PubMed Central

    Emary, Peter C.

    2015-01-01

    Objective: To present an evidence-based case report on the diagnosis of a patient with suspected lumbar spinal stenosis (LSS). Case: A 64-year-old man presented with signs and symptoms suggestive of LSS, but physical examination and diagnostic imaging findings were inconclusive. Other co-morbidities included diabetes, congestive heart failure, and left hip joint osteoarthritis. Outcome: PubMed was searched for systematic reviews of diagnostic studies on LSS. Two recent articles were found and appraised with respect to their validity, importance, and applicability in diagnosing the current patient. Copies of his magnetic resonance imaging were also obtained and used in combination with the appraised literature, including diagnostic test specificities and likelihood ratios, to confirm an LSS diagnosis. Summary: This case illustrates how research evidence can be used in clinical practice, particularly in the diagnosis of an individual patient. PMID:25729085

  11. A genetic algorithm for dynamic inbound ordering and outbound dispatching problem with delivery time windows

    NASA Astrophysics Data System (ADS)

    Kim, Byung Soo; Lee, Woon-Seek; Koh, Shiegheun

    2012-07-01

    This article considers an inbound ordering and outbound dispatching problem for a single product in a third-party warehouse, where the demands are dynamic over a discrete and finite time horizon, and moreover, each demand has a time window in which it must be satisfied. Replenishing orders are shipped in containers and the freight cost is proportional to the number of containers used. The problem is classified into two cases, i.e. non-split demand case and split demand case, and a mathematical model for each case is presented. An in-depth analysis of the models shows that they are very complicated and difficult to find optimal solutions as the problem size becomes large. Therefore, genetic algorithm (GA) based heuristic approaches are designed to solve the problems in a reasonable time. To validate and evaluate the algorithms, finally, some computational experiments are conducted.
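The abstract does not specify the GA's internals, so the skeleton below shows only the standard machinery (tournament selection, one-point crossover, bit-flip mutation) on binary chromosomes, with a toy objective standing in for the container-cost function; all parameters are illustrative:

```python
import random

def genetic_algorithm(cost, n_genes, pop_size=30, generations=100,
                      crossover_rate=0.8, mutation_rate=0.05, seed=0):
    """Minimal GA: binary chromosomes, tournament selection,
    one-point crossover, bit-flip mutation. Minimizes `cost`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: best of three random individuals.
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            c1, c2 = p1[:], p2[:]
            if rng.random() < crossover_rate:       # one-point crossover
                cut = rng.randrange(1, n_genes)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                  # bit-flip mutation
                for i in range(n_genes):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
        best = min(pop + [best], key=cost)          # keep the all-time best
    return best

# Toy objective standing in for the container/dispatch cost: distance
# from a known-good shipment pattern (purely illustrative).
target = [1, 0, 1, 1, 0, 1, 0, 1]
best = genetic_algorithm(lambda ch: sum(a != b for a, b in zip(ch, target)),
                         n_genes=len(target))
```

A problem-specific GA like the article's would replace the toy objective with the container and time-window cost model and may use a different chromosome encoding.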

  12. A systematic review of validated methods for identifying hypersensitivity reactions other than anaphylaxis (fever, rash, and lymphadenopathy), using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the hypersensitivity reactions of health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify hypersensitivity reactions and including validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with decreasing performance occurring with expanded outcome definitions. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Assessing the Validity of Using Serious Game Technology to Analyze Physician Decision Making

    PubMed Central

    Mohan, Deepika; Angus, Derek C.; Ricketts, Daniel; Farris, Coreen; Fischhoff, Baruch; Rosengart, Matthew R.; Yealy, Donald M.; Barnato, Amber E.

    2014-01-01

    Background Physician non-compliance with clinical practice guidelines remains a critical barrier to high quality care. Serious games (using gaming technology for serious purposes) have emerged as a method of studying physician decision making. However, little is known about their validity. Methods We created a serious game and evaluated its construct validity. We used the decision context of trauma triage in the Emergency Department of non-trauma centers, given widely accepted guidelines that recommend the transfer of severely injured patients to trauma centers. We designed cases with the premise that the representativeness heuristic influences triage (i.e. physicians make transfer decisions based on archetypes of severely injured patients rather than guidelines). We randomized a convenience sample of emergency medicine physicians to a control or cognitive load arm, and compared performance (disposition decisions, number of orders entered, time spent per case). We hypothesized that cognitive load would increase the use of heuristics, increasing the transfer of representative cases and decreasing the transfer of non-representative cases. Findings We recruited 209 physicians, of whom 168 (79%) began and 142 (68%) completed the task. Physicians transferred 31% of severely injured patients during the game, consistent with rates of transfer for severely injured patients in practice. They entered the same average number of orders in both arms (control (C): 10.9 [SD 4.8] vs. cognitive load (CL):10.7 [SD 5.6], p = 0.74), despite spending less time per case in the control arm (C: 9.7 [SD 7.1] vs. CL: 11.7 [SD 6.7] minutes, p<0.01). Physicians were equally likely to transfer representative cases in the two arms (C: 45% vs. CL: 34%, p = 0.20), but were more likely to transfer non-representative cases in the control arm (C: 38% vs. CL: 26%, p = 0.03). 
Conclusions We found that physicians made decisions consistent with actual practice, that we could manipulate cognitive load, and that load increased the use of heuristics, as predicted by cognitive theory. PMID:25153149

  14. A New Methodology for Modeling National Command Level Decisionmaking in War Games and Simulations.

    DTIC Science & Technology

    1986-07-01

    Conclusions about Utility and Development Options, The Rand Corporation, R-2945-DNA, March 1983. Drucker, Peter F., Management: Tasks, Responsibilities...looks to the worst case can readily find himself paralyzed. Of course, it is also true ’The effort to affect an opponent’s image of oneself is part of...how to manage forces on a continuing basis. So long as the broad features of the NCL-specified plan continue to appear valid, it is the military com

  15. Incidents Prediction in Road Junctions Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed

    2018-05-01

    The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, an IDS can in no case replace the classical monitoring system controlled by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, which are typically watched by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then build a training database of valid and invalid trajectories, and finally carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate for valid and invalid trajectories.
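    The abstract gives no architecture details, so here is only a sketch of the idea: hand-crafted trajectory features fed to a small feed-forward network. The features, their values, and the scikit-learn model are illustrative assumptions, not the authors' implementation.

```python
# A sketch of ANN-based valid/invalid trajectory classification from
# hand-crafted features; features and values are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-trajectory features: mean speed, curvature, duration.
valid = rng.normal([1.0, 0.2, 5.0], 0.3, size=(n, 3))    # smooth and steady
invalid = rng.normal([2.5, 1.5, 1.0], 0.6, size=(n, 3))  # erratic and abrupt
X = np.vstack([valid, invalid])
y = np.array([1] * n + [0] * n)

# A small feed-forward network; one hidden layer suffices for this toy task.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

    The comparative study the authors describe would repeat this fit over candidate architectures (layer sizes, activations) and keep the one with the best recognition rate on held-out trajectories.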

  16. Validation of the early childhood attitude toward women in science scale (ECWiSS): A pilot administration

    NASA Astrophysics Data System (ADS)

    Mulkey, Lynn M.

    The intention of this research was to measure attitudes of young children toward women scientists. A 27-item instrument, the Early Childhood Women in Science Scale (ECWiSS), was validated in a test case of the proposition that differential socialization predicts entry into the scientific talent pool. Estimates of internal consistency indicated that the scale is highly reliable. Known groups and correlates procedures, employed to determine the validity of the instrument, revealed that the scale is able to discriminate significant differences between groups and distinguishes three dimensions of attitude (role-specific self-concept, home-related sex-role conflict, and work-related sex-role conflict). Results of the analyses also confirmed the anticipated pattern of correlations with measures of another construct. The findings suggest the utility of the ECWiSS for measurement of early childhood attitudes in models of the ascriptive and/or meritocratic processes affecting recruitment to science and more generally in program and curriculum evaluation where attitude toward women in science is the construct of interest.

  17. Financial Decision-making Abilities and Financial Exploitation in Older African Americans: Preliminary Validity Evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS)

    PubMed Central

    Ficker, Lisa J.; Rahman-Filipiak, Annalise

    2015-01-01

    This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Financially exploited and non-exploited individuals showed mean group differences on the Mini-Mental State Exam; the Financial Situational Awareness, Psychological Vulnerability, Current Decisional Ability, and Susceptibility to Undue Influence subscales; and the total Lichtenberg Financial Decision Rating Scale score. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool for measuring both decisional abilities and financial exploitation. PMID:26285038

  18. Testing the validity and acceptability of the diagnostic criteria for Hoarding Disorder: a DSM-5 survey.

    PubMed

    Mataix-Cols, D; Fernández de la Cruz, L; Nakao, T; Pertusa, A

    2011-12-01

    The DSM-5 Obsessive-Compulsive Spectrum Sub-Workgroup is recommending the creation of a new diagnostic category named Hoarding Disorder (HD). The validity and acceptability of the proposed diagnostic criteria have yet to be formally tested. Obsessive-compulsive disorder/hoarding experts and random members of the American Psychiatric Association (APA) were shown eight brief clinical vignettes (four cases meeting criteria for HD, three with hoarding behaviour secondary to other mental disorders, and one with subclinical hoarding behaviour) and asked to decide the most appropriate diagnosis in each case. Participants were also asked about the perceived acceptability of the criteria and whether they supported the inclusion of HD in the main manual. Altogether, 211 experts and 48 APA members completed the survey (30% and 10% response rates, respectively). The sensitivity and specificity of the HD diagnosis and the individual criteria were high (80-90%) across various types of professionals, irrespective of their experience with hoarding cases. About 90% of participants in both samples thought the criteria would be very/somewhat acceptable for professionals and sufferers. Most experts (70%) supported the inclusion of HD in the main manual, whereas only 50% of the APA members did. The proposed criteria for HD have high sensitivity and specificity. The criteria are also deemed acceptable for professionals and sufferers alike. Training of professionals and the development and validation of semi-structured diagnostic instruments should improve diagnostic accuracy even further. A field trial is now needed to confirm these encouraging findings with real patients in real clinical settings.

  19. Using genetic methods to define the targets of compounds with antimalarial activity

    PubMed Central

    Flannery, Erika L.; Fidock, David A.; Winzeler, Elizabeth A.

    2013-01-01

    Although phenotypic cellular screening has been used to drive antimalarial drug discovery in recent years, in some cases target-based drug discovery remains more attractive. This is especially true when appropriate high-throughput cellular assays are lacking, as is the case for drug discovery efforts that aim to provide a replacement for primaquine (4-N-(6-methoxyquinolin-8-yl)pentane-1,4-diamine), the only drug that can block Plasmodium transmission to Anopheles mosquitoes and eliminate liver-stage hypnozoites. At present, however, there are no known chemically validated parasite protein targets that are important in all Plasmodium parasite developmental stages and that can be used in traditional biochemical compound screens. We propose that a plethora of novel, chemically validated, cross-stage antimalarial targets still remain to be discovered from the ~5,500 proteins encoded by the Plasmodium genomes. Here we discuss how in vitro evolution of drug-resistant strains of Plasmodium falciparum and subsequent whole-genome analysis can be used to find the targets of some of the many compounds discovered in whole-cell phenotypic screens. PMID:23927658

  20. The determinations of remote sensing satellite data delivery service quality: A positivistic case study in Chinese context

    NASA Astrophysics Data System (ADS)

    Jin, Jiahua; Yan, Xiangbin; Tan, Qiaoqiao; Li, Yijun

    2014-03-01

    With the development of remote sensing technology, remote-sensing satellites have been widely used in many aspects of national construction. Big data conforming to different standards, and massive numbers of users with different needs, make satellite data delivery a complex giant system; how to deliver remote-sensing satellite data efficiently and effectively is a big challenge. Based on customer service theory, this paper proposes a hierarchical conceptual model for examining the determinants of remote-sensing satellite data delivery service quality in the Chinese context. The model comprises three main dimensions (service expectation, service perception, and service environment) and eight sub-dimensions. A large amount of first-hand data on the remote-sensing satellite data delivery service was obtained through field research, semi-structured questionnaires, and focused interviews. A positivist case study is conducted to validate and develop the proposed model, as well as to investigate the service status and related influence mechanisms. Findings from the analysis demonstrate the explanatory validity of the model and provide potentially helpful insights for future practice.

  1. Numerical simulations of loop quantum Bianchi-I spacetimes

    NASA Astrophysics Data System (ADS)

    Diener, Peter; Joe, Anton; Megevand, Miguel; Singh, Parampreet

    2017-05-01

    Due to the numerical complexities of studying evolution in an anisotropic quantum spacetime, in comparison to the isotropic models, the physics of loop quantized anisotropic models has remained largely unexplored. In particular, robustness of the bounce and the validity of effective dynamics have so far not been established. Our analysis fills these gaps for the case of vacuum Bianchi-I spacetime. To efficiently solve the quantum Hamiltonian constraint we implement it within the Cactus framework, which is conventionally used for applications in numerical relativity. Using high performance computing, numerical simulations for a large number of initial states with a wide variety of fluctuations are performed. The big bang singularity is found to be replaced by anisotropic bounces in all cases. We find that for initial states which are sharply peaked at late times in the classical regime and bounce at a mean volume much greater than the Planck volume, effective dynamics is an excellent approximation to the underlying quantum dynamics. Departures of the effective dynamics from the quantum evolution appear for states probing deep Planck volumes. A detailed analysis of the behavior of this departure reveals a non-monotonic and subtle dependence on fluctuations of the initial states. We find that effective dynamics in almost all of the cases underestimates the volume and hence overestimates the curvature at the bounce, a result in synergy with earlier findings in the isotropic case. The expansion and shear scalars are found to be bounded throughout the evolution.

  2. Approach to addressing missing data for electronic medical records and pharmacy claims data research.

    PubMed

    Bounthavong, Mark; Watanabe, Jonathan H; Sullivan, Kevin M

    2015-04-01

    The complete capture of all values for each variable of interest in pharmacy research studies remains aspirational. The absence of these possibly influential values is a common problem for pharmacist investigators. Failure to account for missing data may translate to biased study findings and conclusions. Our goal in this analysis was to apply validated statistical methods for missing data to a previously analyzed data set and compare results when missing data methods were implemented versus standard analytics that ignore missing data effects. Using data from a retrospective cohort study, the statistical method of multiple imputation was used to provide regression-based estimates of the missing values to improve available data usable for study outcomes measurement. These findings were then contrasted with a complete-case analysis that restricted estimation to subjects in the cohort that had no missing values. Odds ratios were compared to assess differences in findings of the analyses. A nonadjusted regression analysis ("crude analysis") was also performed as a reference for potential bias. The study setting was a Veterans Integrated Systems Network that includes VA facilities in the Southern California and Nevada regions, and the cohort comprised new statin users between November 30, 2006, and December 2, 2007, with a diagnosis of dyslipidemia. We compared the odds ratios (ORs) and 95% confidence intervals (CIs) for the crude, complete-case, and multiple imputation analyses for the end points of a 25% or greater reduction in atherogenic lipids. Data were missing for 21.5% of identified patients (1665 subjects of 7739). Regression model results were similar for the crude, complete-case, and multiple imputation analyses with overlap of 95% confidence limits at each end point. The crude, complete-case, and multiple imputation ORs (95% CIs) for a 25% or greater reduction in low-density lipoprotein cholesterol were 3.5 (95% CI 3.1-3.9), 4.3 (95% CI 3.8-4.9), and 4.1 (95% CI 3.7-4.6), respectively.
The crude, complete-case, and multiple imputation ORs (95% CIs) for a 25% or greater reduction in non-high-density lipoprotein cholesterol were 3.5 (95% CI 3.1-3.9), 4.5 (95% CI 4.0-5.2), and 4.4 (95% CI 3.9-4.9), respectively. The crude, complete-case, and multiple imputation ORs (95% CIs) for 25% or greater reduction in TGs were 3.1 (95% CI 2.8-3.6), 4.0 (95% CI 3.5-4.6), and 4.1 (95% CI 3.6-4.6), respectively. The use of the multiple imputation method to account for missing data did not alter conclusions based on a complete-case analysis. Given the frequency of missing data in research using electronic health records and pharmacy claims data, multiple imputation may play an important role in the validation of study findings. © 2015 Pharmacotherapy Publications, Inc.
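    The contrast between complete-case analysis and regression-based imputation can be sketched on synthetic data (hypothetical variables, not the study's cohort). Note that `IterativeImputer` below produces a single completed dataset; full multiple imputation, as in the study, would repeat this with random draws and pool the estimates.

```python
# A sketch contrasting complete-case analysis with regression-based
# imputation on synthetic data (hypothetical variables, not the cohort).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)      # outcome depends on x; E[y] = 0
data = np.column_stack([x, y])

# y goes missing more often when x is large ("missing at random" given x):
miss = rng.random(n) < 0.2 + 0.3 * (x > 0)
data[miss, 1] = np.nan

# Complete-case analysis simply drops every row with a missing value:
complete_case_mean = data[~miss, 1].mean()

# Chained-equations imputation regresses y on x to fill the gaps:
imputed = IterativeImputer(random_state=0).fit_transform(data)
imputed_mean = imputed[:, 1].mean()

print(f"complete-case mean of y: {complete_case_mean:+.2f}")  # biased low
print(f"imputed mean of y:       {imputed_mean:+.2f}")        # nearer 0
```

    Because missingness here depends on an observed covariate, dropping incomplete rows skews the complete-case estimate, while the imputed estimate recovers it; when missingness is unrelated to the model, the two approaches agree, which matches the study's finding of overlapping confidence intervals.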

  3. Challenges in assessing depressive symptoms in Fiji: A psychometric evaluation of the CES-D.

    PubMed

    Opoliner, April; Blacker, Deborah; Fitzmaurice, Garrett; Becker, Anne

    2014-06-01

    The CES-D is a commonly used self-report assessment for depressive symptomatology. However, its psychometric properties have not been evaluated in Fiji. This study aims to evaluate the reliability and validity of English language and Fijian vernacular versions in ethnic Fijian adolescent schoolgirls. As part of the HEALTHY Fiji study, ethnic Fijian female adolescents (N = 523) completed the CES-D. Participants chose whether to respond in English or the local vernacular. Reliability (internal consistency, item-total score correlation, and test-retest estimates), validity (associations with other proxies for depression) and factor structure were assessed. Evaluations considered differences between language versions. In this sample, the CES-D had a Cronbach's α of 0.81 and item-total score correlation coefficients ranged between 0.2 and 0.63. One-week test-retest reliability (ICC(2)) was 0.57. CES-D scores were higher among individuals who endorsed feelings of depression and suicidality compared to those who did not. ROC analyses of the CES-D versus binary depression and suicidality variables produced AUCs around 0.70 and did not support a discrete cut-off for significant disturbance. Findings were similar across the two language groups. The CES-D has acceptable reliability and validity among ethnic Fijian female adolescents in English and in the Fijian vernacular language. Findings support its utility as a dimensional measure for depressive symptomatology in this study population. Further examination of its clinical utility for case finding for depression in Fijian school-based and community populations is warranted. © The Author(s) 2013.
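    The Cronbach's α cited above is computed from item and total-score variances. A minimal sketch on synthetic item responses (the sample size and item count match the study's instrument; the latent-trait structure is a hypothetical illustration):

```python
# Cronbach's alpha from scratch on synthetic item responses.
# N = 523 matches the study; the CES-D has 20 items. The latent-trait
# data-generating process below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n_resp, k = 523, 20
latent = rng.normal(0.0, 1.0, (n_resp, 1))          # shared trait score
items = latent + rng.normal(0.0, 1.2, (n_resp, k))  # k correlated items

item_vars = items.var(axis=0, ddof=1)               # variance of each item
total_var = items.sum(axis=1).var(ddof=1)           # variance of total score
alpha = (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

    The more the items covary (i.e. the more the total-score variance exceeds the sum of item variances), the closer α gets to 1; values around 0.8, as reported here, are conventionally read as acceptable internal consistency.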

  4. In silico modeling to predict drug-induced phospholipidosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases above ≥ 80% leading to desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.

  5. Investigation of fatalities due to acute gasoline poisoning.

    PubMed

    Martínez, María A; Ballesteros, Salomé

    2005-10-01

    This paper presents a simple, rapid, reliable, and validated method suited for forensic examination of gasoline in biological samples. The proposed methodology has been applied to the investigation of four fatal cases due to gasoline poisoning that occurred in Spain in 2003 and 2004. Case histories and pathological and toxicological findings are described in order to illustrate the danger of gasoline exposure under several circumstances. Gasoline's tissular distribution, its quantitative toxicological significance, and the possible mechanisms leading to death are also discussed. The toxicological screening and quantitation of gasoline was performed by means of gas chromatography (GC) with flame-ionization detection, and confirmation was performed using GC-mass spectrometry in total ion chromatogram mode. m,p-Xylene peak was selected to estimate gasoline in all biological samples. Gasoline analytical methodology was validated at five concentration levels from 1 to 100 mg/L. The method provided extraction recoveries between 77.6% and 98.3%. The limit of detection was 0.3 mg/L, and the limit of quantitation was 1.0 mg/L. The linearity of the blood calibration curves was excellent with r2 values of > 0.997. Intraday and interday precisions had a coefficient of variation < or = 5.4% in all cases. Cases 1 and 2 consist of the accidental inhalation of gasoline vapor inside a small enclosed space. Case 3 is a death by recreational gasoline inhalation in a male adolescent. Heart blood concentrations were 28.4, 18.0, and 38.3 mg/L, respectively; liver concentrations were 41.4, 52.9, and 124.2 mg/kg, respectively; and lung concentrations were 5.6, 8.4, and 39.3 mg/kg, respectively. Case 4 was an accidental death due to gasoline ingestion of a woman with senile dementia. Peripheral blood concentration was 122.4 mg/L, the highest in our experience. 
Because pathological findings were consistent with other reports of gasoline intoxication and constituents of gasoline were found in the body, cause of death was attributed to acute gasoline intoxication. As a rule, this kind of poisoning offers little difficulty in diagnosis because there is a history of exposure, and the odor usually clings to the clothes, skin, or gastric contents. However, anatomic autopsy findings will be nonspecific and therefore toxicological analysis is necessary. There is a paucity of recent references regarding analytical and toxicological data, and this article provides evidence about toxic concentrations and is a useful adjunct to the postmortem toxicological interpretation of fatalities if the decedent has been involved in gasoline use.

  6. Chronic obstructive lung disease "expert system": validation of a predictive tool for assisting diagnosis.

    PubMed

    Braido, Fulvio; Santus, Pierachille; Corsico, Angelo Guido; Di Marco, Fabiano; Melioli, Giovanni; Scichilone, Nicola; Solidoro, Paolo

    2018-01-01

    The purposes of this study were development and validation of an expert system (ES) aimed at supporting the diagnosis of chronic obstructive lung disease (COLD). A questionnaire and a WebFlex code were developed and validated in silico. An expert panel pilot validation on 60 cases and a clinical validation on 241 cases were performed. The developed questionnaire and code validated in silico resulted in a suitable tool to support the medical diagnosis. The clinical validation of the ES was performed in an academic setting that included six different reference centers for respiratory diseases. The results of the ES expressed as a score associated with the risk of suffering from COLD were matched and compared with the final clinical diagnoses. A set of 60 patients were evaluated by a pilot expert panel validation with the aim of calculating the sample size for the clinical validation study. The concordance analysis between these preliminary ES scores and diagnoses performed by the experts indicated that the accuracy was 94.7% when both experts and the system confirmed the COLD diagnosis and 86.3% when COLD was excluded. Based on these results, the sample size of the validation set was established in 240 patients. The clinical validation, performed on 241 patients, resulted in ES accuracy of 97.5%, with confirmed COLD diagnosis in 53.6% of the cases and excluded COLD diagnosis in 32% of the cases. In 11.2% of cases, a diagnosis of COLD was made by the experts, although the imaging results showed a potential concomitant disorder. The ES presented here (COLD-ES) is a safe and robust supporting tool for COLD diagnosis in primary care settings.

  7. Is the Charcot and Bernard case (1883) of loss of visual imagery really based on neurological impairment?

    PubMed

    Zago, Stefano; Allegri, Nicola; Cristoffanini, Marta; Ferrucci, Roberta; Porta, Mauro; Priori, Alberto

    2011-11-01

    INTRODUCTION. The Charcot and Bernard case of visual imagery, Monsieur X, is a classic case in the history of neuropsychology. Published in 1883, it has been considered the first case of loss of visual imagery due to brain injury, and even in recent times it has been given a neurological valence. However, the presence of analogous cases of loss of visual imagery in the psychiatric field has led us to hypothesise functional rather than organic origins. METHODS. To assess the validity of this inference, we compared the symptomatology of Monsieur X with that found in psychiatric and neurological cases of loss of visual mental imagery reported in the literature. RESULTS. The clinical findings show strong affinities between the Monsieur X case and the symptoms manifested over time by patients with functionally based loss of visual imagery. CONCLUSION. Although Monsieur X's damage was initially interpreted as neurological, reports of similar symptoms in the psychiatric field lead us to postulate a functional cause for his impairment as well.

  8. K-11 students’ creative thinking ability on static fluid: a case study

    NASA Astrophysics Data System (ADS)

    Hanni, I. U.; Muslim; Hasanah, L.; Samsudin, A.

    2018-05-01

    Creative thinking is one of the fundamental components of 21st-century education that students need to possess and develop, so that they can find many alternative solutions to problems in physics learning. This study aimed to profile students' creative thinking ability on Static Fluid. A case study was implemented through a single-case embedded design. Participants were 27 K-11 students. The instrument used was the Test for Creative Thinking-Static Fluid (TCT-SF), which had been validated by experts. The result shows a mean score of 10.74, approximately 35.8% of the maximum score. In conclusion, the students' creative thinking ability on Static Fluid is still low; hence, it needs to be developed further in the K-11 context.

  9. Marketing blood drives to students: a case study.

    PubMed

    Leigh, Laurence; Bist, Michael; Alexe, Roxana

    2007-01-01

    The aim of this paper is to motivate blood donation among international students and demonstrate the applicability of marketing techniques in the health care sector. The paper uses a combination of focus groups and a questionnaire-based survey. The paper finds that donors primarily find gratification from their altruistic acts through awareness of their contribution to saving lives. Receiving information on how each individual donation is used is seen as a powerful means of reinforcement. Practical benefits such as receiving free blood test information are also useful motivators, while communicating the professionalism of the blood collection techniques is important for reassuring the minority of prospective donors who expressed fears about possible risks associated with blood donation. Since this was a small-scale study among Hungarian and international students in Budapest, further research is necessary to validate its results among other demographic groups. Findings were reported to the International Federation of Red Cross and Red Crescent Societies in Hungary in order to increase blood donations among students in Hungary. Subject to validation through further research, applying recommended approaches in different countries and other demographic groups is suggested. This is the first research paper on motivation toward blood donation among international students and offers new and practical suggestions for increasing their level of participation in blood drives.

  10. Interprofessional collaboration in family health teams

    PubMed Central

    Goldman, Joanne; Meuser, Jamie; Rogers, Jess; Lawrie, Lynne; Reeves, Scott

    2010-01-01

    ABSTRACT OBJECTIVE To examine family health team (FHT) members’ perspectives and experiences of interprofessional collaboration and perceived benefits. DESIGN Qualitative case study using semistructured interviews. SETTING Fourteen FHTs in urban and rural Ontario. PARTICIPANTS Purposeful sample of the members of 14 FHTs, including family physicians, nurse practitioners, nurses, dietitians, social workers, pharmacists, and managers. METHODS A multiple case-study approach involving 14 FHTs was employed. Thirty-two semistructured interviews were conducted and data were analyzed by employing an inductive thematic approach. A member-checking technique was also undertaken to enhance the validity of the findings. MAIN FINDINGS Five main themes are reported: rethinking traditional roles and scopes of practice, management and leadership, time and space, interprofessional initiatives, and early perceptions of collaborative care. CONCLUSION This study shows the importance of issues such as roles and scopes of practice, leadership, and space to effective team-based primary care, and provides a framework for understanding different types of interprofessional interventions used to support interprofessional collaboration. PMID:20944025

  11. Increased CNV-Region deletions in mild cognitive impairment (MCI) and Alzheimer's disease (AD) subjects in the ADNI sample

    PubMed Central

    Guffanti, Guia; Torri, Federica; Rasmussen, Jerod; Clark, Andrew P.; Lakatos, Anita; Turner, Jessica A.; Fallon, James H.; Saykin, Andrew J.; Weiner, Michael; Vawter, Marquis P.; Knowles, James A.; Potkin, Steven G.; Macciardi, Fabio

    2014-01-01

    We investigated the genome-wide distribution of CNVs in the Alzheimer's disease (AD) Neuroimaging Initiative (ADNI) sample (146 with AD, 313 with Mild Cognitive Impairment (MCI), and 181 controls). Comparison of single CNVs between cases (MCI and AD) and controls shows overrepresentation of large heterozygous deletions in cases (p-value < 0.0001). The analysis of CNV-Regions identifies 44 copy number variable loci of heterozygous deletions, with more CNV-Regions among affected than controls (p = 0.005). Seven of the 44 CNV-Regions are nominally significant for association with cognitive impairment. We validated and confirmed our main findings with genome re-sequencing of selected patients and controls. The functional pathway analysis of the genes putatively affected by deletions of CNV-Regions reveals enrichment of genes implicated in axonal guidance, cell–cell adhesion, neuronal morphogenesis and differentiation. Our findings support the role of CNVs in AD, and suggest an association between large deletions and the development of cognitive impairment. PMID:23583670

  12. Development and validation of surgical training tool: cystectomy assessment and surgical evaluation (CASE) for robot-assisted radical cystectomy for men.

    PubMed

    Hussein, Ahmed A; Sexton, Kevin J; May, Paul R; Meng, Maxwell V; Hosseini, Abolfazl; Eun, Daniel D; Daneshmand, Siamak; Bochner, Bernard H; Peabody, James O; Abaza, Ronney; Skinner, Eila C; Hautmann, Richard E; Guru, Khurshid A

    2018-04-13

    We aimed to develop a structured scoring tool: cystectomy assessment and surgical evaluation (CASE) that objectively measures and quantifies performance during robot-assisted radical cystectomy (RARC) for men. A multinational 10-surgeon expert panel collaborated towards development and validation of CASE. The critical steps of RARC in men were deconstructed into nine key domains, each assessed by five anchors. Content validation was done utilizing the Delphi methodology. Each anchor was assessed in terms of context, score concordance, and clarity. The content validity index (CVI) was calculated for each aspect. A CVI ≥ 0.75 represented consensus, and this statement was removed from the next round. This process was repeated until consensus was achieved for all statements. CASE was used to assess de-identified videos of RARC to determine reliability and construct validity. Linearly weighted percent agreement was used to assess inter-rater reliability (IRR). A logit model for odds ratio (OR) was used to assess construct validation. The expert panel reached consensus on CASE after four rounds. The final eight domains of the CASE included: pelvic lymph node dissection, development of the peri-ureteral space, lateral pelvic space, anterior rectal space, control of the vascular pedicle, anterior vesical space, control of the dorsal venous complex, and apical dissection. IRR > 0.6 was achieved for all eight domains. Experts outperformed trainees across all domains. We developed and validated a reliable structured, procedure-specific tool for objective evaluation of surgical performance during RARC. CASE may help differentiate novice from expert performances.

  13. A nomogram based on mammary ductoscopic indicators for evaluating the risk of breast cancer in intraductal neoplasms with nipple discharge.

    PubMed

    Lian, Zhen-Qiang; Wang, Qi; Zhang, An-Qin; Zhang, Jiang-Yu; Han, Xiao-Rong; Yu, Hai-Yun; Xie, Si-Mei

    2015-04-01

    Mammary ductoscopy (MD) is commonly used to detect intraductal lesions associated with nipple discharge. This study investigated the relationships between ductoscopic image-based indicators and breast cancer risk, and developed a nomogram for evaluating breast cancer risk in intraductal neoplasms with nipple discharge. A total of 879 consecutive inpatients (916 breasts) with nipple discharge who underwent selective duct excision for intraductal neoplasms detected by MD from June 2008 to April 2014 were analyzed retrospectively. A nomogram was developed using a multivariate logistic regression model based on data from a training set (687 cases) and validated in an independent validation set (229 cases). A Youden-derived cut-off value was assigned to the nomogram for the diagnosis of breast cancer. Color of discharge, location, appearance, and surface of neoplasm, and morphology of ductal wall were independent predictors of breast cancer in multivariate logistic regression analysis. A nomogram based on these predictors performed well. The P value of the Hosmer-Lemeshow test for the prediction model was 0.36. Area under the curve values of 0.812 (95 % confidence interval (CI) 0.763-0.860) and 0.738 (95 % CI 0.635-0.841) were obtained in the training and validation sets, respectively. The accuracies of the nomogram for breast cancer diagnosis were 71.2 % in the training set and 75.5 % in the validation set. We developed a nomogram for evaluating breast cancer risk in intraductal neoplasms with nipple discharge based on MD image findings. This model may aid individual risk assessment and guide treatment in clinical practice.

  14. Validation of the bladder control self-assessment questionnaire (B-SAQ) in men.

    PubMed

    Sahai, Arun; Dowson, Christopher; Cortes, Eduardo; Seth, Jai; Watkins, Jane; Khan, Muhammed Shamim; Dasgupta, Prokar; Cardozo, Linda; Chapple, Christopher; De Ridder, Dirk; Wagg, Adrian; Kelleher, Cornelius

    2014-05-01

    To validate the Bladder Control Self-Assessment Questionnaire (B-SAQ), a short screener to assess lower urinary tract symptoms (LUTS) and overactive bladder (OAB) in men. This was a prospective, single-centre study including 211 patients in a urology outpatient setting. All patients completed the B-SAQ and Kings Health Questionnaire (KHQ) before consultation, and the consulting urologist made an independent assessment of LUTS and the need for treatment. The psychometric properties of the B-SAQ were analysed. A total of 98% of respondents completed all items correctly in <5 min. The mean B-SAQ scores were 12 and 3.3, respectively for cases (n = 101) and controls (n = 108) (P < 0.001). Good correlation was evident between the B-SAQ and the KHQ. The agreement percentages between the individual B-SAQ items and the KHQ symptom severity scale were 86, 85, 84 and 79% for frequency, urgency, nocturia and urinary incontinence, respectively. Using a B-SAQ symptom score threshold of ≥4 alone had sensitivity, specificity and positive predictive values for detecting LUTS of 75, 86 and 84%, respectively, with an area under the curve of 0.88; however, in combination with a bother score threshold of ≥1 these values changed to 92, 46 and 86%, respectively. The B-SAQ is an easy and quick valid case-finding tool for LUTS/OAB in men, but appears to be less specific in men than in women. The B-SAQ has the potential to raise awareness of LUTS. Further validation in a community setting is required. © 2013 The Authors. BJU International © 2013 BJU International.

  15. [Benign proliferative breast disease with and without atypia].

    PubMed

    Coutant, C; Canlorbe, G; Bendifallah, S; Beltjens, F

    2015-12-01

    In the last few years, diagnoses of high-risk breast lesions (atypical ductal hyperplasia [ADH], flat epithelial atypia [FEA], lobular neoplasia: atypical lobular hyperplasia [ALH], lobular carcinoma in situ [LCIS], radial scar [RS], usual ductal hyperplasia [UDH], adenosis, sclerosing adenosis [SA], papillary breast lesions, mucocele-like lesion [MLL]) have increased with the growing number of percutaneous breast biopsies. The management of these lesions is highly conditioned by the elevated risk of breast cancer, combined with an increased probability of finding cancer after surgery, a possible malignant transformation (in situ or invasive cancer), or an increased probability of developing cancer in the long term. An overview of the literature reports grade C recommendations concerning the management and follow-up of these lesions: in case of ADH, FEA, ALH, LCIS, RS, or MLL with atypia diagnosed on percutaneous biopsy, surgical excision is recommended; in case of a diagnosis based on vacuum-assisted core biopsy with complete disappearance of the radiological signal for FEA or RS without atypia, surgical abstention is a valid alternative approved by a multidisciplinary meeting. In case of ALH (incidental finding) associated with a benign lesion responsible for the radiological signal, abstention may be proposed; in case of UDH, adenosis, or MLL without atypia diagnosed on percutaneous biopsy, the concordance of radiological and histopathological findings must be ensured; no data are available to recommend surgery. In case of non-in sano resection for ADH, FEA, ALH, LCIS (except pleomorphic type), RS, or MLL, surgery does not seem to be necessary; in case of previous ADH, ALH, or LCIS, specific follow-up is recommended in accordance with HAS recommendations. In case of FEA and RS or MLL combined with atypia, little data are yet available to differentiate the management from that of other lesions with atypia; in case of UDH, usual sclerosing adenosis, RS without atypia, or fibrocystic disease, no specific follow-up is recommended, in agreement with HAS recommendations. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. Five Data Validation Cases

    ERIC Educational Resources Information Center

    Simkin, Mark G.

    2008-01-01

    Data-validation routines enable computer applications to test data to ensure their accuracy, completeness, and conformance to industry or proprietary standards. This paper presents five programming cases that require students to validate five different types of data: (1) simple user data entries, (2) UPC codes, (3) passwords, (4) ISBN numbers, and…
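    The ISBN case in this list turns on a weighted check digit: for ISBN-10, the digit-weighted sum must be divisible by 11. A minimal validation routine of the kind the cases ask students to write (illustrative only, not the paper's own code):

```python
def is_valid_isbn10(isbn: str) -> bool:
    """Validate an ISBN-10: the sum of digits weighted 10 down to 1
    must be divisible by 11; 'X' means 10 and may appear only last."""
    digits = isbn.replace("-", "").upper()
    if len(digits) != 10:
        return False
    total = 0
    for position, ch in enumerate(digits):
        if ch == "X" and position == 9:  # 'X' is valid only as the check digit
            value = 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += (10 - position) * value
    return total % 11 == 0

print(is_valid_isbn10("0-306-40615-2"))  # → True
print(is_valid_isbn10("0306406153"))     # → False (wrong check digit)
```

    UPC codes follow the same pattern with a different weighting (alternating 3s and 1s, modulo 10), so the same scaffold covers both cases.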

  17. Citrate Content of Bone as a Measure of Postmortem Interval: An External Validation Study.

    PubMed

    Brown, Michael A; Bunch, Ann W; Froome, Charles; Gerling, Rebecca; Hennessy, Shawn; Ellison, Jeffrey

    2017-12-26

    The postmortem interval (PMI) of skeletal remains is a crucial piece of information that can help establish the time dimension in criminal cases. Unfortunately, the accurate and reliable determination of PMI from bone continues to evade forensic investigators despite concerted efforts over the past decades to develop suitable qualitative and quantitative methods. A relatively new PMI method based on the analysis of the citrate content of bone was developed by Schwarcz et al. The main objective of our research was to determine whether this work could be externally validated. Thirty-one bone samples were obtained from the Forensic Anthropology Center, University of Tennessee, Knoxville, and the Onondaga County Medical Examiner's Office. Results from analyzing samples with PMI greater than 2 years suggest that the hypothetical relationship between the citrate content of bone and PMI is much weaker than reported. It was also observed that the average absolute error between the PMI value estimated using the equation proposed by Schwarcz et al. and the actual ("true") PMI of the sample was negative, indicating an underestimation of PMI. These findings are identical to those reported by Kanz et al. Despite these results, this method may still serve as a technique to sort ancient from more recent skeletal cases, after further, similar validation studies have been conducted. © 2017 American Academy of Forensic Sciences.

  18. Validation of an imaging based cardiovascular risk score in a Scottish population.

    PubMed

    Kockelkoren, Remko; Jairam, Pushpa M; Murchison, John T; Debray, Thomas P A; Mirsadraee, Saeed; van der Graaf, Yolanda; Jong, Pim A de; van Beek, Edwin J R

    2018-01-01

    A radiological risk score that determines 5-year cardiovascular disease (CVD) risk using routine care CT and patient information readily available to radiologists was previously developed. External validation in a Scottish population was performed to assess the applicability and validity of the risk score in other populations. 2915 subjects aged ≥40 years who underwent routine clinical chest CT scanning for non-cardiovascular diagnostic indications were followed up until first diagnosis of, or death from, CVD. Using a case-cohort approach, all cases and a random sample of 20% of the participants' CT examinations were visually graded for cardiovascular calcifications, and cardiac diameter was measured. The radiological risk score was determined using imaging findings, age, gender, and CT indication. Performance on 5-year CVD risk prediction was assessed. 384 events occurred in 2124 subjects during a mean follow-up of 4.25 years (0-6.4 years). The risk score demonstrated reasonable performance in the studied population. Calibration showed good agreement between actual and 5-year predicted risk of CVD. The c-statistic was 0.71 (95%CI:0.67-0.75). The radiological CVD risk score performed adequately in the Scottish population, offering a potential novel strategy for identifying patients at high risk of developing cardiovascular disease using routine care CT data. Copyright © 2017 Elsevier B.V. All rights reserved.
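    The c-statistic reported above is the probability that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case. A minimal sketch of that pairwise-concordance computation (illustrative only, not the authors' code; the example data are invented):

```python
def c_statistic(risks, outcomes):
    """Concordance (c-statistic / AUC): the probability that a randomly
    chosen case (outcome 1) receives a higher predicted risk than a
    randomly chosen non-case (outcome 0); ties count as 0.5."""
    case_risks = [r for r, y in zip(risks, outcomes) if y == 1]
    noncase_risks = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for rc in case_risks:
        for rn in noncase_risks:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / (len(case_risks) * len(noncase_risks))

# Hypothetical predicted risks and observed events:
print(c_statistic([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # → 0.75
```

    A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of cases from non-cases.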

  19. Diagnostic validity of early-onset obsessive-compulsive disorder in the Danish Psychiatric Central Register: findings from a cohort sample

    PubMed Central

    Powell, Shelagh; Koch, Susanne V; Crowley, James J; Matthiesen, Manuel; Grice, Dorothy E; Thomsen, Per H; Parner, E

    2017-01-01

    Objectives Employing national registers for research purposes depends on a high diagnostic validity. The aim of the present study was to examine the diagnostic validity of recorded diagnoses of early-onset obsessive-compulsive disorder (OCD) in the Danish Psychiatric Central Register (DPCR). Design Review of patient journals selected randomly through the DPCR. Method One hundred cases of OCD were randomly selected from the DPCR. Using a predefined coding scheme based on the Children's Yale-Brown Obsessive Compulsive Scale (CYBOCS), an experienced research nurse or child and adolescent psychiatrist assessed each journal to determine the presence/absence of OCD diagnostic criteria. The detailed assessments were reviewed by two senior child and adolescent psychiatrists to determine if diagnostic criteria were met. Primary outcome measurements Positive predictive value (PPV) was used as the primary outcome measurement. Results A total of 3462 children/adolescents received an OCD diagnosis as the main diagnosis between 1 January 1995 and 31 December 2015. The average age at diagnosis was 13.21±2.89 years. The most frequently registered OCD subcode was the combined diagnosis DF42.2. Of the 100 cases we examined, 35 had at least one registered comorbidity. For OCD, the PPV was good (PPV 0.85). Excluding journals with insufficient information, the PPV was 0.96. For the subcode F42.2 the PPV was 0.77. The inter-rater reliability was 0.94. The presence of the CYBOCS in the journal significantly increased the PPV for the OCD diagnosis altogether and for the subcode DF42.2. Conclusion The validity and reliability of International Classification of Disease 10th revision codes for OCD in the DPCR is generally high. The subcodes for predominant obsessions/predominant compulsions are less certain and should be used with caution. The results apply for both children and adolescents and for both older and more recent cases. 
Altogether, the study suggests that there is a high validity of the OCD diagnosis in the Danish National Registers. PMID:28928194

  20. Validity of a family-centered approach for assessing infants' social-emotional wellbeing and their developmental context: a prospective cohort study.

    PubMed

    Hielkema, Margriet; De Winter, Andrea F; Reijneveld, Sijmen A

    2017-06-15

    Family-centered care seems promising in preventive pediatrics, but evidence is lacking as to whether this type of care is also valid as a means to identify risks to infants' social-emotional development. We aimed to examine the validity of such a family-centered approach. We conducted a prospective cohort study. During routine well-child visits (2-15 months), Preventive Child Healthcare (PCH) professionals used a family-centered approach, assessing domains such as parents' competence, role of the partner, social support, barriers within the care-giving context, and the child's wellbeing for 2976 children as protective, indistinct, or a risk. If, based on the overall assessment, an intervention was considered necessary (these families were labeled "cases", N = 87), parents filled in validated questionnaires covering the aforementioned domains. These questionnaires served as gold standards. For each case, two controls, matched by child age and gender, also filled in questionnaires (N = 172). We compared PCH professionals' assessments with the parent-reported gold standards. Moreover, we evaluated which domain contributed most to the overall assessment. Spearman's rank correlation coefficients between PCH professionals' assessments and gold standards were overall reasonable (Spearman's rho 0.17-0.39), except for the domain barriers within the care-giving context. Scores on gold standards were significantly higher when PCH assessments were rated as "at risk" (overall and per domain). We found reasonable to excellent agreement regarding the absence of risk factors (negative agreement rate: 0.40-0.98), but lower agreement regarding the presence of risk factors (positive agreement rate: 0.00-0.67). An "at risk" assessment for the domain barriers or life events within the care-giving context contributed most to being overall at risk, i.e. a case (odds ratio 100.1, 95% confidence interval: 22.6 to infinity). 
Findings partially support the convergent validity of a family-centered approach in well-child care to assess infants' social-emotional wellbeing and their developmental context. Agreement was reasonable to excellent regarding protective factors, but lower regarding risk factors. Netherlands Trialregister, NTR2681. Date of registration: 05-01-2011, URL: http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2681 .

  1. In vivo Raman spectroscopy of cervix cancers

    NASA Astrophysics Data System (ADS)

    Rubina, S.; Sathe, Priyanka; Dora, Tapas Kumar; Chopra, Supriya; Maheshwari, Amita; Krishna, C. Murali

    2014-03-01

    Cervical cancer is the third most common female cancer worldwide. It is the leading cancer among Indian females, with more than a million newly diagnosed cases and 50% mortality annually. The high mortality rates can be attributed to late diagnosis. The efficacy of Raman spectroscopy in classifying normal and pathological conditions in cervix cancers in diverse populations has already been demonstrated. Our earlier ex vivo studies have shown the feasibility of classifying normal and cancerous cervix tissues as well as responders/non-responders to concurrent chemoradiotherapy (CCRT). The present study was carried out to explore the feasibility of in vivo Raman spectroscopic methods in classifying normal and cancerous conditions in the Indian population. A total of 182 normal and 132 tumor in vivo Raman spectra, from 63 subjects, were recorded using a fiber-optic probe coupled to an HE-785 spectrometer, under clinical supervision. Spectra were acquired for 5 s and averaged three times at 80 mW laser power. Spectra of normal conditions suggest strong collagenous features, whereas tumor spectra suggest an abundance of non-collagenous proteins and DNA. Preprocessed spectra were subjected to Principal Component-Linear Discriminant Analysis (PC-LDA) followed by leave-one-out cross-validation. Classification efficiencies of ~96.7% and 100% were observed for normal and cancerous conditions, respectively. The findings of the study corroborate earlier studies and suggest the applicability of Raman spectroscopic methods, in combination with an appropriate multivariate tool, for objective, noninvasive and rapid diagnosis of cervical cancers in the Indian population. In view of these encouraging results, extensive validation studies will be undertaken to confirm the findings.
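    The leave-one-out cross-validation protocol used in this study can be sketched in isolation: each spectrum is held out once and classified by a model trained on the rest. The sketch below substitutes a simple nearest-centroid classifier on toy 2-D points for the actual PC-LDA spectral pipeline; the data, labels, and classifier are illustrative assumptions, not the study's method:

```python
def nearest_centroid_predict(train, labels, x):
    """Assign x to the class whose training-set mean (centroid) is closest
    in squared Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for label in set(labels):
        members = [p for p, l in zip(train, labels) if l == label]
        centroid = [sum(c) / len(members) for c in zip(*members)]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def loocv_accuracy(data, labels):
    """Leave-one-out cross-validation: each sample is held out once and
    classified by a model trained on all remaining samples."""
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if nearest_centroid_predict(train, train_labels, data[i]) == labels[i]:
            hits += 1
    return hits / len(data)

# Toy, well-separated "normal" vs "tumor" feature vectors:
data = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
labels = ["normal", "normal", "normal", "tumor", "tumor", "tumor"]
print(loocv_accuracy(data, labels))  # → 1.0
```

    The classification efficiencies quoted in the abstract are exactly this kind of per-class LOOCV accuracy, computed with PC-LDA instead of the stand-in classifier shown here.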

  2. The validity of upper-limb neurodynamic tests for detecting peripheral neuropathic pain.

    PubMed

    Nee, Robert J; Jull, Gwendolen A; Vicenzino, Bill; Coppieters, Michel W

    2012-05-01

    The validity of upper-limb neurodynamic tests (ULNTs) for detecting peripheral neuropathic pain (PNP) was assessed by reviewing the evidence on plausibility, the definition of a positive test, reliability, and concurrent validity. Evidence was identified by a structured search for peer-reviewed articles published in English before May 2011. The quality of concurrent validity studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool, where appropriate. Biomechanical and experimental pain data support the plausibility of ULNTs. Evidence suggests that a positive ULNT should at least partially reproduce the patient's symptoms and that structural differentiation should change these symptoms. Data indicate that this definition of a positive ULNT is reliable when used clinically. Limited evidence suggests that the median nerve test, but not the radial nerve test, helps determine whether a patient has cervical radiculopathy. The median nerve test does not help diagnose carpal tunnel syndrome. These findings should be interpreted cautiously, because diagnostic accuracy might have been distorted by the investigators' definitions of a positive ULNT. Furthermore, patients with PNP who presented with increased nerve mechanosensitivity rather than conduction loss might have been incorrectly classified by electrophysiological reference standards as not having PNP. The only evidence for concurrent validity of the ulnar nerve test was a case study on cubital tunnel syndrome. We recommend that researchers develop more comprehensive reference standards for PNP to accurately assess the concurrent validity of ULNTs and continue investigating the predictive validity of ULNTs for prognosis or treatment response.

  3. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.

  4. Leading and leadership: reflections on a case study.

    PubMed

    Joyce, Pauline

    2010-05-01

    The aim of this case study was to explore if observing leaders in the context of their day-to-day work can provide an insight into how they lead in particular circumstances. The study was carried out in a small organization which was set up 5 years ago. A case study methodology was used. Data were collected by field notes of non-participant and participant observations. Follow-up interviews were transcribed and analysed to contextualize the observations. A reflective diary was used by the researcher to add to the richness of the data. The data demonstrates how the leader responded in key circumstances during scheduled meetings with staff, interactions in the office and during coffee time. These responses are linked to literature on leadership in the areas of power, personal development, coaching and delegation. The findings suggest that observing a leader in the context of their day-to-day work can provide evidence to validate what leaders do in particular circumstances. The implications of the findings for nursing management are the opportunities to use observation as a tool to understand what managers/leaders do, how they manage or lead and why others respond as they do, and with what outcomes.

  5. Dissolution of hypotheses in biochemistry: three case studies.

    PubMed

    Fry, Michael

    2016-12-01

    The history of biochemistry and molecular biology is replete with examples of erroneous theories that persisted for considerable lengths of time before they were rejected. This paper examines patterns of dissolution of three such erroneous hypotheses: The idea that nucleic acids are tetrads of the four nucleobases ('the tetranucleotide hypothesis'); the notion that proteins are collinear with their encoding genes in all branches of life; and the hypothesis that proteins are synthesized by reverse action of proteolytic enzymes. Analysis of these cases indicates that amassed contradictory empirical findings did not prompt critical experimental testing of the prevailing theories nor did they elicit alternative hypotheses. Rather, the incorrect models collapsed when experiments that were not purposely designed to test their validity exposed new facts.

  6. Quantum loop corrections of a charged de Sitter black hole

    NASA Astrophysics Data System (ADS)

    Naji, J.

    2018-03-01

    A charged black hole in de Sitter (dS) space is considered, and a logarithmically corrected entropy is used to study its thermodynamics. Logarithmic corrections to the entropy come from thermal fluctuations, which play the role of a quantum loop correction. In that case we are able to study the effect of quantum loops on black hole thermodynamics and statistics. As a black hole is a gravitational object, it helps us obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmically corrected case, and we find that they are only valid for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.

  7. Investigation of a fatality due to diesel fuel No. 2 ingestion.

    PubMed

    Martínez, María A; Ballesteros, Salomé

    2006-10-01

    This paper presents a simple, rapid, reliable, and validated analytical method suited for forensic examination of diesel fuel No. 2 in biological specimens. The proposed methodology has been applied to the investigation of a forensic case with diesel fuel No. 2 ingestion. Case history and pathological and toxicological findings are described here to illustrate the toxicity of this complex hydrocarbon mixture. The toxicological significance and the possible mechanisms leading to death are also discussed. The toxicological initial screening and quantitation were performed by means of gas chromatography with flame-ionization detection and confirmation was performed using gas chromatography-mass spectrometry in total ion chromatogram mode. n-Tetradecane peak was selected to estimate diesel fuel No. 2 in all biological samples. Diesel fuel No. 2 analytical methodology was validated at five concentration levels from 5 to 400 mg/L. The method provided extraction recoveries between 89.0% and 97.9%. The limit of detection was 1 mg/L and the limit of quantitation was 5 mg/L. The linearity of the blood calibration curves was excellent with r2 values of >0.999. Intraday and interday precisions had a coefficient of variation

  8. Electromagnetic compatibility and safety design of a patient compliance-free, inductive implant charger.

    PubMed

    Theodoridis, Michael P; Mollov, Stefan V

    2014-10-01

    This article presents the design of a domestic, radiofrequency induction charger for implants toward compliance with the Federal Communications Commission safety and electromagnetic compatibility regulations. The suggested arrangement does not impose any patient compliance requirements other than the use of a designated bed for night sleep, and therefore can find a domestic use. The method can be applied to a number of applications; a rechargeable pacemaker is considered as a case study. The presented work has proven that it is possible to realize a fully compliant inductive charging system with minimal patient interaction, and has generated important information for consideration by the designers of inductive charging systems. Experimental results have verified the validity of the theoretical findings.

  9. Rational design of new electrolyte materials for electrochemical double layer capacitors

    NASA Astrophysics Data System (ADS)

    Schütter, Christoph; Husch, Tamara; Viswanathan, Venkatasubramanian; Passerini, Stefano; Balducci, Andrea; Korth, Martin

    2016-09-01

    The development of new electrolytes is a centerpiece of many strategies to improve electrochemical double layer capacitor (EDLC) devices. We present here a computational screening-based rational design approach to find new electrolyte materials. As an example application, the known chemical space of almost 70 million compounds is investigated in search of electrochemically more stable solvents. Cyano esters are identified as an especially promising new compound class. Theoretical predictions are validated with subsequent experimental studies on a selected case. These studies show that, based on theoretical predictions only, a previously untested but very well performing compound class was identified. We thus find that our rational design strategy is indeed able to successfully identify completely new materials with substantially improved properties.

  10. Rapid Diagnosis of Tuberculosis with the Xpert MTB/RIF Assay in High Burden Countries: A Cost-Effectiveness Analysis

    PubMed Central

    Vassall, Anna; van Kampen, Sanne; Sohn, Hojoon; Michael, Joy S.; John, K. R.; den Boon, Saskia; Davis, J. Lucian; Whitelaw, Andrew; Nicol, Mark P.; Gler, Maria Tarcela; Khaliqov, Anar; Zamudio, Carlos; Perkins, Mark D.; Boehme, Catharina C.; Cobelens, Frank

    2011-01-01

    Background Xpert MTB/RIF (Xpert) is a promising new rapid diagnostic technology for tuberculosis (TB) that has characteristics that suggest large-scale roll-out. However, because the test is expensive, there are concerns among TB program managers and policy makers regarding its affordability for low- and middle-income settings. Methods and Findings We estimate the impact of the introduction of Xpert on the costs and cost-effectiveness of TB care using decision analytic modelling, comparing the introduction of Xpert to a base case of smear microscopy and clinical diagnosis in India, South Africa, and Uganda. The introduction of Xpert increases TB case finding in all three settings: from 72%–85% to 95%–99% of the cohort of individuals with suspected TB, compared to the base case. Diagnostic costs (including the costs of testing all individuals with suspected TB) also increase: from US$28–US$49 to US$133–US$146 and US$137–US$151 per TB case detected when Xpert is used “in addition to” and “as a replacement of” smear microscopy, respectively. The incremental cost-effectiveness ratios (ICERs) for using Xpert “in addition to” smear microscopy, compared to the base case, range from US$41–$110 per disability-adjusted life year (DALY) averted. Likewise, the ICERs for using Xpert “as a replacement of” smear microscopy range from US$52–$138 per DALY averted. These ICERs are below the World Health Organization (WHO) willingness-to-pay threshold. Conclusions Our results suggest that Xpert is a cost-effective method of TB diagnosis, compared to a base case of smear microscopy and clinical diagnosis of smear-negative TB in low- and middle-income settings where, with its ability to substantially increase case finding, it has important potential for improving TB diagnosis and control. The extent of the cost-effectiveness gain to TB programmes from deploying Xpert is primarily dependent on current TB diagnostic practices. 
Further work is required during scale-up to validate these findings. Please see later in the article for the Editors' Summary PMID:22087078
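    An ICER is the incremental cost of the new strategy divided by its incremental health gain relative to the base case. A one-line sketch with hypothetical numbers (not figures from the study):

```python
def icer(cost_new, cost_base, dalys_averted):
    """Incremental cost-effectiveness ratio: extra cost per additional
    DALY averted by the new strategy relative to the base case."""
    return (cost_new - cost_base) / dalys_averted

# Hypothetical: new strategy costs $133 vs $28 base, averting 2 extra DALYs.
print(icer(133.0, 28.0, 2.0))  # → 52.5 (dollars per DALY averted)
```

    A strategy is conventionally judged cost-effective when its ICER falls below the decision-maker's willingness-to-pay threshold, as the abstract notes for the WHO threshold.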

  11. Fatigue after stroke: the development and evaluation of a case definition.

    PubMed

    Lynch, Joanna; Mead, Gillian; Greig, Carolyn; Young, Archie; Lewis, Susan; Sharpe, Michael

    2007-11-01

    While fatigue after stroke is a common problem, it has no generally accepted definition. Our aim was to develop a case definition for post-stroke fatigue and to test its psychometric properties. A case definition with face validity and an associated structured interview was constructed. After initial piloting, the feasibility, reliability (test-retest and inter-rater) and concurrent validity (in relation to four fatigue severity scales) were determined in 55 patients with stroke. All participating patients provided satisfactory answers to all the case definition probe questions, demonstrating its feasibility. For test-retest reliability, kappa was 0.78 (95% CI, 0.57-0.94, P<.01) and for inter-rater reliability kappa was 0.80 (95% CI, 0.62-0.99, P<.01). Patients fulfilling the case definition also had substantially higher fatigue scores on four fatigue severity scales (P<.001), indicating concurrent validity. The proposed case definition is feasible to administer and reliable in practice, and there is evidence of concurrent validity. It requires further evaluation in different settings.
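    The kappa values reported here measure agreement between two sets of binary case/non-case ratings, corrected for the agreement expected by chance. A minimal sketch of Cohen's kappa (illustrative data, not the authors' analysis code):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
    agreement), where chance agreement comes from the raters' marginals."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two raters agreeing no better than chance on invented data:
print(cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.0
```

    Kappa is 1.0 for perfect agreement and 0.0 when agreement is exactly at chance level, so values around 0.78-0.80 indicate substantial reliability.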

  12. Validation of verbal autopsy methods using hospital medical records: a case study in Vietnam.

    PubMed

    Tran, Hong Thi; Nguyen, Hoa Phuong; Walker, Sue M; Hill, Peter S; Rao, Chalapati

    2018-05-18

    Information on causes of death (COD) is crucial for measuring the health outcomes of populations and progress towards the Sustainable Development Goals. In many countries such as Vietnam, where the civil registration and vital statistics (CRVS) system is dysfunctional, information on vital events will continue to rely on verbal autopsy (VA) methods. This study assesses the validity of VA methods used in Vietnam, and provides recommendations on methods for implementing VA validation studies in Vietnam. This validation study was conducted on a sample of 670 deaths from a recent VA study in Quang Ninh province. The study covered 116 cases from this sample, which met three inclusion criteria: a) the death occurred within 30 days of discharge after the last hospitalisation; b) medical records (MRs) for the deceased were available from the respective hospitals; and c) the medical record mentioned that the patient was terminally ill at discharge. For each death, the underlying cause of death (UCOD) identified from MRs was compared to the UCOD from VA. The validity of VA diagnoses for major causes of death was measured using sensitivity, specificity and positive predictive value (PPV). The sensitivity of VA was at least 75% in identifying some leading CODs such as stroke, road traffic accidents and several site-specific cancers. However, sensitivity was less than 50% for other important causes including ischemic heart disease, chronic obstructive pulmonary disease, and diabetes. Overall, there was 57% agreement between UCOD from VA and MR, which increased to 76% when multiple causes from VA were compared to the UCOD from MR. Our findings suggest that VA is a valid method to ascertain UCOD in contexts such as Vietnam. 
Furthermore, within cultural contexts in which patients prefer to die at home instead of in a healthcare facility, using the available MRs as the gold standard may be meaningful to the extent that recall bias from the interval between the last hospital discharge and death can be minimized. Therefore, future studies should evaluate the validity of MRs as a gold standard for VA studies in contexts similar to the Vietnamese one.
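The validity measures named above (sensitivity, specificity, PPV) follow directly from a confusion matrix of VA diagnoses against the MR reference standard; a minimal sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validity of a diagnosis against a reference standard.

    tp/fp/fn/tn are true/false positives and negatives; the counts used
    below are hypothetical, not the Vietnamese study's data.
    """
    sensitivity = tp / (tp + fn)  # reference-positive cases correctly found
    specificity = tn / (tn + fp)  # reference-negative cases correctly excluded
    ppv = tp / (tp + fp)          # positive predictive value
    return sensitivity, specificity, ppv

# e.g. 8 true positives, 2 false positives, 2 misses, 88 true negatives
sens, spec, ppv = diagnostic_metrics(8, 2, 2, 88)
```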

  13. The influence of validity criteria on Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) test-retest reliability among high school athletes.

    PubMed

    Brett, Benjamin L; Solomon, Gary S

    2017-04-01

    Research findings to date on the stability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) Composite scores have been inconsistent, requiring further investigation. The use of test validity criteria across these studies also has been inconsistent. Using multiple measures of stability, we examined test-retest reliability of repeated ImPACT baseline assessments in high school athletes across various validity criteria reported in previous studies. A total of 1146 high school athletes completed baseline cognitive testing using the online ImPACT test battery at two time points separated by approximately two years. No participant sustained a concussion between assessments. Five forms of validity criteria used in previous test-retest studies were applied to the data, and differences in reliability were compared. Intraclass correlation coefficients (ICCs) for composite scores ranged from .47 (95% confidence interval, CI [.38, .54]) to .83 (95% CI [.81, .85]) and showed little change across the two-year interval for all five sets of validity criteria. Regression-based methods (RBMs) examining test-retest stability demonstrated a lack of significant change in composite scores across the two-year interval for all forms of validity criteria, with no cases falling outside the expected range of 90% confidence intervals. The application of more stringent validity criteria does not alter test-retest reliability, nor does it account for some of the variation observed across previously performed studies. As such, the ImPACT manual validity criteria should be used in determining test validity and in an individualized approach to concussion management. Potential future efforts to improve test-retest reliability are discussed.
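ICCs like those above come from variance-components formulas; as a rough illustration, a one-way random-effects ICC(1,1) can be sketched as follows (a simpler variant than the study may have used, and the data are hypothetical):

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for test-retest data.

    scores: one row per subject, one column per session,
    e.g. [[s1_t1, s1_t2], [s2_t1, s2_t2], ...]. Toy data only.
    """
    n = len(scores)        # subjects
    k = len(scores[0])     # sessions per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Identical scores across sessions give perfect reliability (ICC = 1).
print(icc_oneway([[1, 1], [2, 2], [3, 3]]))
```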

  14. Condition Based vs. Time Based Maintenance: Case Study on Hypergolic Pumps

    NASA Technical Reports Server (NTRS)

    Gibson, Lewis J.

    2007-01-01

    Two Pad 39B Ox pumps were monitored with the Baker Instruments Explorer Motor tester. Using the torque spectrum, it was determined that Ox pump #2 had a significant peak at a frequency that indicated lubricant fluid whirl. Similar testing on Ox pump #1 did not show this peak, an indication that this pump was in good mechanical condition. Subsequent disassembly of both motors validated these findings. Ox pump #2's rear bearing showed significant wear; the front bearing showed little wear. Ox pump #1 was still within the manufacturer's tolerances.

  15. No association between FTO or HHEX and endometrial cancer risk

    PubMed Central

    Gaudet, Mia M.; Yang, Hannah P.; Bosquet, Jesus Gonzalez; Healey, Catherine S.; Ahmed, Shahana; Dunning, Alison M.; Easton, Doug F.; Spurdle, Amanda B.; Ferguson, Kaltin; O’Mara, Tracy; ANECS Group; Lambrechts, Diether; Despierre, Evelyn; Vergote, Ignace; Amant, Frederic; Lacey, James V.; Lissowska, Jola; Peplonska, Beata; Brinton, Louise A.; Chanock, Stephen; Garcia-Closas, Montserrat

    2010-01-01

    Introduction Obesity and diabetes are known risk factors for endometrial cancer; thus genetic risk factors for these phenotypes may also be associated with endometrial cancer risk. To evaluate this hypothesis, we genotyped tagSNPs and candidate SNPs in FTO and HHEX in a primary set of 417 endometrial cancer cases and 406 population-based controls, and validated significant findings in a replication set of approximately 2,347 cases and 3,140 controls from three additional studies. Methods We genotyped 189 tagSNPs in FTO (including rs8050136) and five tagSNPs in HHEX (including rs1111875) in the primary set and one SNP in each of FTO (rs12927155) and HHEX (rs1111875) in the validation set. Per-allele odds ratios (OR) and 95% confidence intervals (CI) were calculated to estimate the association between the genotype of each SNP (as an ordinal variable) and endometrial cancer risk using unconditional logistic regression models, controlling for age and site. Results In the primary study, the most significant finding in FTO was rs12927155 (OR=1.56, 95% CI 1.21–2.01, p=5.8×10−4) and in HHEX was rs1111875 (OR=0.80, 95% CI 0.66–0.97; p=0.026). In the validation studies, the pooled per-allele ORs, adjusted for age and study, were OR=0.94, 95% CI 0.83–1.06, p=0.29 for FTO rs12927155 and OR=1.00, 95% CI 0.92–1.10, p=0.96 for HHEX rs1111875. Conclusion Our data indicate that common genetic variants in two genes previously related to obesity (FTO) and diabetes (HHEX) by genome-wide association scans are not associated with endometrial cancer risk. Impact Polymorphisms in FTO and HHEX are unlikely to have large effects on endometrial cancer risk but may have weaker effects. PMID:20647405
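The per-allele ORs above were estimated by logistic regression adjusted for age and site/study; as a rough unadjusted analogue (not the study's method), an allele-count odds ratio with a Woolf-type 95% CI can be sketched as follows, with hypothetical counts:

```python
import math

def allele_or(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Crude per-allele odds ratio with a Woolf 95% CI from allele counts.

    Inputs are counts of alternate/reference alleles in cases and controls;
    all counts below are hypothetical, not the study's data. This ignores
    the covariate adjustment used in the actual analysis.
    """
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    se = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical: 100/100 alt/ref alleles in cases vs. 50/100 in controls
or_, lo, hi = allele_or(100, 100, 50, 100)
```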

  16. Developing a dengue forecast model using machine learning: A case study in China

    PubMed Central

    Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-01-01

    Background In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Methodology/Principal findings Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011–2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. Conclusion and significance The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics. PMID:29036169
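The RMSE and R-squared measures used above to assess model performance and goodness of fit can be sketched in a few lines (toy data, not the dengue surveillance series):

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    n = len(obs)
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# A model that always predicts the mean of [1, 2, 3] has R^2 = 0;
# a perfect model has RMSE = 0 and R^2 = 1.
print(rmse([1, 2, 3], [1, 2, 3]), r_squared([1, 2, 3], [2, 2, 2]))
```

In model selection of the kind described, the candidate (e.g. SVR vs. GBM vs. LASSO) with the lowest cross-validated RMSE on held-out weeks would be preferred.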

  17. Team effectiveness in academic medical libraries: a multiple case study*

    PubMed Central

    Russo Martin, Elaine

    2006-01-01

    Objectives: The objective of this study is to apply J. Richard Hackman's framework on team effectiveness to academic medical library settings. Methods: The study uses a qualitative, multiple case study design, employing interviews and focus groups to examine team effectiveness in three academic medical libraries. Another site was selected as a pilot to validate the research design, field procedures, and methods to be used with the cases. In all, three interviews and twelve focus groups, with approximately seventy-five participants, were conducted at the case study libraries. Findings: Hackman identified five conditions leading to team effectiveness and three outcomes dimensions that defined effectiveness. The participants in this study identified additional characteristics of effectiveness that focused on enhanced communication, leadership personality and behavior, and relationship building. The study also revealed an additional outcome dimension related to the evolution of teams. Conclusions: Introducing teams into an organization is not a trivial matter. Hackman's model of effectiveness has implications for designing successful library teams. PMID:16888659

  18. Actinomyces israelii in radicular cysts: a molecular study.

    PubMed

    Gomes, Nathália Rodrigues; Diniz, Marina Gonçalves; Pereira, Thais Dos Santos Fontes; Estrela, Carlos; de Macedo Farias, Luiz; de Andrade, Bruno Augusto Benevenuto; Gomes, Carolina Cavaliéri; Gomez, Ricardo Santiago

    2017-05-01

    To investigate whether the microscopic filamentous aggregates observed in radicular cysts are associated with the molecular identification of Actinomyces israelii, and to verify whether this bacterium can be detected in radicular cyst specimens not presenting aggregates. Microscopic colonies suggestive of Actinomyces were found in 8 out of 279 radicular cyst samples (case group). The case and control groups (n = 12; samples without filamentous colonies) were submitted to semi-nested polymerase chain reaction to test for the presence of A. israelii. DNA sequencing was performed to validate the polymerase chain reaction results. Two and three samples in the case and control groups, respectively, did not present a functional genomic DNA template and were excluded from the study. A. israelii was identified in all samples of the case group and in 3 out of 9 samples of the control group. Although A. israelii is more commonly identified in radicular cysts presenting filamentous aggregates, it can also be detected in radicular cysts without this microscopic finding. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Flux quench in a system of interacting spinless fermions in one dimension

    NASA Astrophysics Data System (ADS)

    Nakagawa, Yuya O.; Misguich, Grégoire; Oshikawa, Masaki

    2016-05-01

    We study a quantum quench in a one-dimensional spinless fermion model (equivalent to the XXZ spin chain), where a magnetic flux is suddenly switched off. This quench is equivalent to imposing a pulse of electric field and therefore generates an initial particle current. This current is not a conserved quantity in the presence of a lattice and interactions, and we investigate numerically its time evolution after the quench, using the infinite time-evolving block decimation method. For repulsive interactions or large initial flux, we find oscillations that are governed by excitations deep inside the Fermi sea. At long times we observe that the current remains nonvanishing in the gapless cases, whereas it decays to zero in the gapped cases. Although the linear response theory (valid for a weak flux) predicts the same long-time limit of the current for repulsive and attractive interactions (relation with the zero-temperature Drude weight), larger nonlinearities are observed in the case of repulsive interactions compared with that of the attractive case.

  20. [Clinical decision making: Fostering critical thinking in the nursing diagnostic process through case studies].

    PubMed

    Müller-Staub, Maria; Stuker-Studer, Ursula

    2006-10-01

    Case studies, based on actual patients' situations, provide a method of clinical decision making to foster critical thinking in nurses. This paper describes the method and process of group case studies applied in continuing education settings. The method is based on Balint's case supervision and was further developed and combined with the nursing diagnostic process. A case study contains different phases: pre-phase, selection phase, case delineation and case work. The case provider narratively tells the situation of a patient. This allows the group to analyze and cluster signs and symptoms, to state nursing diagnoses and to derive nursing interventions. Results of the case study are validated by applying the theoretical background and the critical appraisal of the case provider. Learning effects of the case studies were evaluated by means of qualitative questionnaires and analyzed according to Mayring. Findings revealed the following categories: a) Patients' problems are perceived in a patient-centred way, accurate nursing diagnoses are stated and effective nursing interventions implemented. b) Professional nursing tasks are more purposefully perceived and named more precisely. c) The professional nursing relationship, communication and respectful behaviour with patients were perceived in differentiated ways. The theoretical framework is described in the paper "Clinical decision making and critical thinking in the nursing diagnostic process" (Müller-Staub, 2006).

  1. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists of identifying and efficiently characterizing a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The query complexities for the worst case are respectively bounded by O(√(2^(n−M′))) and O(2^(n−M′)), where n is the number of variables and M′ is the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. 
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.
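As a quick sanity check on the bounds quoted above (not from the paper itself): the quantum O(2^(n/4)) beats the classical O(2^(31n/96)) because the exponent 1/4 = 24/96 is smaller than 31/96, giving an asymptotic speedup factor of 2^(7n/96):

```python
from fractions import Fraction

# Exponents c of the 2^(c*n) worst-case running times quoted in the abstract.
quantum_exp = Fraction(1, 4)      # O(2^(n/4)), proposed quantum algorithm
classical_exp = Fraction(31, 96)  # O(2^(31n/96)), best known classical

# The quantum algorithm is asymptotically faster iff its exponent is smaller.
assert quantum_exp < classical_exp

def speedup(n):
    """Asymptotic speedup factor 2^((31/96 - 1/4) * n) = 2^(7n/96)."""
    return 2.0 ** float((classical_exp - quantum_exp) * n)

# For a 96-node graph the speedup factor is 2^7 = 128.
print(speedup(96))
```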

  2. Sensitivity to scale of willingness-to-pay within the context of menorrhagia.

    PubMed

    Sanghera, Sabina; Frew, Emma; Gupta, Janesh Kumar; Kai, Joe; Roberts, Tracy Elizabeth

    2017-04-01

    Willingness-to-pay (WTP) provides a broad assessment of well-being, capturing benefits beyond health. However, the validity of the approach has been questioned and the evidence relating to the sensitivity of WTP to changes in health status is mixed. Using menorrhagia (heavy menstrual bleeding) as a case study, this exploratory study assesses the sensitivity to scale of WTP to change in health status as measured by a condition-specific measure, the MMAS, which includes both health and non-health benefits. The relationship between the EQ-5D and change in health status is also assessed. Baseline EQ-5D and MMAS values were collected from women taking part in a randomized controlled trial of pharmaceutical treatment for menorrhagia. Following treatment, these measures were administered along with a WTP exercise. The relationship between the measures was assessed using Spearman's correlation analysis, and the sensitivity to scale of WTP was measured by identifying differences in WTP alongside differences in MMAS and EQ-5D values. Our exploratory findings indicated that WTP, and not EQ-5D, was significantly positively correlated with change in MMAS, providing some evidence of convergent validity. These findings suggest that WTP captures the non-health benefits within the MMAS measure. Mean WTP also increased with percentage improvements in MMAS, suggesting sensitivity to scale. When compared to quality of life measured using the condition-specific MMAS measure, the convergent validity and sensitivity to scale of WTP are indicated. The findings suggest that WTP is more sensitive to change in MMAS than in EQ-5D. © 2016 The Authors Health Expectations Published by John Wiley & Sons Ltd.
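The Spearman correlation used above is Pearson's correlation applied to tie-averaged ranks; a minimal self-contained sketch (toy data, not the trial's):

```python
def spearman(x, y):
    """Spearman rank correlation with average ranks for ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over a block of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank of the tied block (1-based)
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Any strictly increasing monotone relationship gives rho = 1.
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))
```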

  3. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation, flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model against insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentation reporting flooded areas and a dataset of insurance claims. In three out of four test cases, the model fit relating to insurance claims is slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics based on insurance claims are comparable to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
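Conventional validation metrics for binary flood maps, such as the hit rate, false alarm ratio and critical success index, can be sketched as follows (the abstract does not specify which metrics were used, so this is illustrative only):

```python
def contingency_scores(observed, modelled):
    """Cell-by-cell validation of a binary flood map against observations.

    observed / modelled: equal-length sequences of 0/1 flags, one per cell
    (or one per insurance-claim location). Toy data only.
    """
    hits = sum(1 for o, m in zip(observed, modelled) if o and m)
    misses = sum(1 for o, m in zip(observed, modelled) if o and not m)
    false_alarms = sum(1 for o, m in zip(observed, modelled) if not o and m)
    hit_rate = hits / (hits + misses)                 # observed flooding found
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    csi = hits / (hits + misses + false_alarms)       # critical success index
    return hit_rate, far, csi

# 3 observed flooded cells, model reproduces 2 of them plus 1 false alarm
hr, far, csi = contingency_scores([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```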

  4. Conundrums in neurology: diagnosing serotonin syndrome - a meta-analysis of cases.

    PubMed

    Werneke, Ursula; Jamshidi, Fariba; Taylor, David M; Ott, Michael

    2016-07-12

    Serotonin syndrome is a toxic state caused by serotonin (5HT) excess in the central nervous system. Its main feature is neuromuscular hyperexcitability, which in many cases is mild but in some can become life-threatening. The diagnosis of serotonin syndrome remains challenging since it can only be made on clinical grounds. Three diagnostic criteria systems, the Sternbach, Radomski and Hunter classifications, are available. Here we test the validity of four assumptions that have become widely accepted: (1) the Hunter classification performs better clinically than the Sternbach and Radomski criteria; (2) in contrast to neuroleptic malignant syndrome, the onset of serotonin syndrome is usually rapid; (3) hyperthermia is a hallmark of severe serotonin syndrome; and (4) serotonin syndrome can readily be distinguished from neuroleptic malignant syndrome on clinical grounds and on the basis of medication history. Systematic review and meta-analysis of all cases of serotonin syndrome and toxicity published between 2004 and 2014, using PubMed and Web of Science. Two of the four assumptions (1 and 2) are based on only one published study each and have not been independently validated. There is little agreement between current criteria systems for the diagnosis of serotonin syndrome. Although frequently thought to be the gold standard for the diagnosis of serotonin syndrome, the Hunter criteria did not perform better than the Sternbach and Radomski criteria. Not all cases seem to be of rapid onset and only relatively few cases may present with hyperthermia. The differential diagnosis between serotonin syndrome and neuroleptic malignant syndrome is not always clear-cut. Our findings challenge four commonly made assumptions about serotonin syndrome. 
We propose our meta-analysis of cases (MAC) method as a new way to systematically pool and interpret anecdotal but important clinical information concerning uncommon or emergent phenomena that cannot be captured in any other way but through case reports.

  5. Multimodal Navigation in Endoscopic Transsphenoidal Resection of Pituitary Tumors Using Image-Based Vascular and Cranial Nerve Segmentation: A Prospective Validation Study.

    PubMed

    Dolati, Parviz; Eichberg, Daniel; Golby, Alexandra; Zamani, Amir; Laws, Edward

    2016-11-01

    Transsphenoidal surgery (TSS) is the most common approach for the treatment of pituitary tumors. However, misdirection, vascular damage, intraoperative cerebrospinal fluid leakage, and optic nerve injuries are all well-known complications, and the risk of adverse events is more likely in less-experienced hands. This prospective study was conducted to validate the accuracy of image-based segmentation coupled with neuronavigation in localizing neurovascular structures during TSS. Twenty-five patients with a pituitary tumor underwent preoperative 3-T magnetic resonance imaging (MRI), and MRI images loaded into the navigation platform were used for segmentation and preoperative planning. After patient registration and subsequent surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe or Doppler probe on or as close as possible to the target. Preoperative segmentation of the internal carotid artery and cavernous sinus matched with the intraoperative endoscopic and micro-Doppler findings in all cases. Excellent correspondence between image-based segmentation and the endoscopic view was also evident at the surface of the tumor and at the tumor-normal gland interfaces. Image guidance assisted the surgeons in localizing the optic nerve and chiasm in 64% of cases. The mean accuracy of the measurements was 1.20 ± 0.21 mm. Image-based preoperative vascular and neural element segmentation, especially with 3-dimensional reconstruction, is highly informative preoperatively and potentially could assist less-experienced neurosurgeons in preventing vascular and neural injury during TSS. In addition, the accuracy found in this study is comparable to previously reported neuronavigation measurements. This preliminary study is encouraging for future prospective intraoperative validation with larger numbers of patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. An investigation of the validity of the Work Assessment Triage Tool clinical decision support tool for selecting optimal rehabilitation interventions for workers with musculoskeletal injuries.

    PubMed

    Qin, Ziling; Armijo-Olivo, Susan; Woodhouse, Linda J; Gross, Douglas P

    2016-03-01

    To evaluate the concurrent validity of a clinical decision support tool, the Work Assessment Triage Tool (WATT), developed to select rehabilitation treatments for injured workers with musculoskeletal conditions. Methodological study with cross-sectional and prospective components. Data were obtained from the Workers' Compensation Board of Alberta rehabilitation facility in Edmonton, Canada. A total of 432 workers' compensation claimants evaluated between November 2011 and June 2012. Percentage agreement between the WATT and clinician recommendations was used to determine concurrent validity. In claimants returning to work, frequencies of matching were calculated and compared between clinician and WATT recommendations and the actual programs undertaken by claimants. The frequency of each intervention recommended by clinicians, the WATT, and case managers was also calculated and compared. Percentage agreement between clinician and WATT recommendations was poor (19%) to moderate (46%), with Kappa = 0.37 (95% CI -0.02, 0.76). The WATT did not improve upon clinician recommendations, as only 14 out of 31 claimants returning to work had programs that contradicted clinician recommendations but were consistent with WATT recommendations. Clinicians and case managers were inclined to recommend functional restoration, physical therapy, or no rehabilitation, while the WATT recommended additional evidence-based interventions, such as workplace-based interventions. Our findings do not provide evidence of concurrent validity for the WATT compared with clinician recommendations. Based on these results, we cannot recommend further implementation of the WATT. 
However, the WATT appeared more likely than clinicians to recommend interventions supported by evidence, thus warranting further research. © The Author(s) 2015.

  7. Validity of a computerized population registry of dementia based on clinical databases.

    PubMed

    Mar, J; Arrospide, A; Soto-Gordoa, M; Machón, M; Iruin, Á; Martinez-Lage, P; Gabilondo, A; Moreno-Izco, F; Gabilondo, A; Arriola, L

    2018-05-08

    The handling of information through digital media allows innovative approaches for identifying cases of dementia through computerized searches of clinical databases that include systems for coding diagnoses. The aim of this study was to analyze the validity of a dementia registry in Gipuzkoa based on the administrative and clinical databases existing in the Basque Health Service. This is a descriptive study based on the evaluation of available data sources. First, through review of medical records, diagnostic validity was evaluated in two samples of cases identified and not identified as dementia. The sensitivity, specificity and positive and negative predictive values of the diagnosis of dementia were measured. Subsequently, the cases of dementia alive on December 31, 2016 were identified across the entire Gipuzkoa population to collect sociodemographic and clinical variables. The validation samples included 986 cases and 327 non-cases. The calculated sensitivity was 80.2% and the specificity was 99.9%. The negative predictive value was 99.4% and the positive predictive value was 95.1%. A total of 10,551 cases were identified in Gipuzkoa, representing 65% of the cases predicted according to the literature. Antipsychotic medication was taken by 40% of the cases, and 25% of the cases were institutionalized. A registry of dementias based on clinical and administrative databases is valid and feasible. Its main contribution is to show the dimension of dementia in the health system. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
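Note that, unlike sensitivity and specificity, the predictive values reported above depend on the prevalence of dementia in the population searched; a minimal Bayes-rule sketch (the prevalence below is hypothetical, not the registry's):

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV implied by sensitivity, specificity and prevalence.

    The prevalence argument is a hypothetical illustration value;
    sens/spec here are rounded versions of figures like those reported.
    """
    p, q = prevalence, 1 - prevalence
    ppv = sens * p / (sens * p + (1 - spec) * q)  # P(disease | test positive)
    npv = spec * q / (spec * q + (1 - sens) * p)  # P(no disease | test negative)
    return ppv, npv

# With sens = 0.8, spec = 0.999 and an assumed 1% prevalence,
# even a highly specific search yields a PPV well below 1.
ppv, npv = predictive_values(0.8, 0.999, 0.01)
```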

  8. The control effect in a detached laminar boundary layer of an array of normal synthetic jets

    NASA Astrophysics Data System (ADS)

    Valenzuela Calva, Fernando; Avila Rodriguez, Ruben

    2016-11-01

    In this work, 3D numerical simulations of an array of three normal circular synthetic jets embedded in an attached laminar boundary layer that separates under the influence of an inclined flap are performed for flow separation control. At the beginning of the present study, three cases are used to validate the numerical simulation against data obtained from experiments. The experimental data are chosen from the cases that presented the highest repeatability and reliability. The simulations showed reasonable agreement when compared with experiments. The simulations are undertaken at three synthetic jet operating conditions, i.e. Case A: L = 2, VR = 0.32; Case B: L = 4, VR = 0.64; and Case C: L = 6, VR = 0.96. The vortical structures produced are hairpin vortices for Case A and tilted vortices for Cases B and C, respectively. By examining the spatial wall shear stress variations, the effect on the boundary layer prior to separation of the middle synthetic jet is evaluated. The finding from this study suggests that the hairpin vortical structures, produced at the relatively low jet operating condition (Case A), are the more desirable structures for effective flow control. Universidad Nacional Autonoma de Mexico.

  9. Frequency, probability, and prediction: easy solutions to cognitive illusions?

    PubMed

    Griffin, D; Buehler, R

    1999-02-01

    Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion. Frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.

  10. Identification of limiting case between DBA and SBDBA (CL break area sensitivity): A new model for the boron injection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez Gonzalez, R.; Petruzzi, A.; D'Auria, F.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities (e.g., oblique control rods, positive void coefficient) required the development and validation of a complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) model. Reactor shut-down is obtained by the oblique CRs and, during accidental conditions, by an emergency shut-down system (JDJ) injecting a highly concentrated boron solution (boron clouds) into the moderator tank; the boron cloud reconstruction is obtained using a CFD (CFX) code calculation. A complete LBLOCA calculation implies the application of the RELAP5-3D system code. Within the framework of the third Agreement 'NA-SA - Univ. of Pisa', a new RELAP5-3D control system for the boron injection system was developed and implemented in the validated coupled RELAP5-3D/NESTLE model of the Atucha-2 NPP. The aim of this activity is to find the limiting case (maximum break area size) for the Peak Cladding Temperature for LOCAs under fixed boundary conditions. (authors)

  11. Mass-Spectrometry-Based Proteomics Reveals Organ-Specific Expression Patterns To Be Used as Forensic Evidence.

    PubMed

    Dammeier, Sascha; Nahnsen, Sven; Veit, Johannes; Wehner, Frank; Ueffing, Marius; Kohlbacher, Oliver

    2016-01-04

    Standard forensic procedures to examine bullets after an exchange of fire include a mechanical or ballistic reconstruction of the event. While it is routine to identify which projectile hit a subject by DNA analysis of biological material on the surface of the projectile, it is rather difficult to determine which projectile caused the lethal injury, often the crucial point with regard to legal proceedings. With respect to fundamental law, it is the duty of the public authority to make every endeavor to solve every homicide case. To improve forensic examinations, we present a forensic proteomic method to investigate biological material from a projectile's surface and determine the tissues traversed by it. To obtain a range of relevant samples, different major bovine organs were penetrated with projectiles experimentally. After tryptic "on-surface" digestion, mass-spectrometry-based proteome analysis, and statistical data analysis, we were able to achieve a cross-validated organ classification accuracy of >99%. Different types of anticipated external variables exhibited no prominent influence on the findings. In addition, shooting experiments were performed to validate the results. Finally, we show that these concepts could be applied to a real case of murder to substantially improve the forensic reconstruction.
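    The cross-validated organ classification reported above can be sketched in miniature. The snippet below is a toy illustration only: it builds synthetic three-feature "proteome" profiles for three hypothetical organs and scores a nearest-centroid classifier by leave-one-out cross-validation. The organ names, feature values, and classifier are all assumptions; the study's actual mass-spectrometry features and statistical pipeline are not reproduced here.

```python
import math
import random

def make_samples(center, n, noise=0.3, seed=None):
    """Draw n noisy profiles around a centroid (purely synthetic data)."""
    rng = random.Random(seed)
    return [[c + rng.gauss(0, noise) for c in center] for _ in range(n)]

# Hypothetical 3-protein intensity centroids for three organs (illustrative values).
centers = {"liver": [5.0, 1.0, 0.0], "lung": [0.0, 5.0, 1.0], "heart": [1.0, 0.0, 5.0]}
data = [(organ, sample)
        for i, (organ, center) in enumerate(centers.items())
        for sample in make_samples(center, 10, seed=i)]

def loo_accuracy(data):
    """Leave-one-out cross-validation of a nearest-centroid classifier."""
    correct = 0
    for i, (label, x) in enumerate(data):
        train = data[:i] + data[i + 1:]
        groups = {}
        for organ, s in train:
            groups.setdefault(organ, []).append(s)
        # Average the remaining training samples per organ to form centroids.
        centroids = {organ: [sum(col) / len(col) for col in zip(*samples)]
                     for organ, samples in groups.items()}
        pred = min(centroids, key=lambda organ: math.dist(x, centroids[organ]))
        correct += (pred == label)
    return correct / len(data)
```

With well-separated synthetic centroids, the held-out accuracy is high, mirroring the kind of cross-validated figure the abstract reports.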

  12. Epithelial Membrane Protein-2 Expression is an Early Predictor of Endometrial Cancer Development

    PubMed Central

    Habeeb, Omar; Goodglick, Lee; Soslow, Robert A.; Rao, Rajiv; Gordon, Lynn K.; Schirripa, Osvaldo; Horvath, Steve; Braun, Jonathan; Seligson, David B.; Wadehra, Madhuri

    2010-01-01

    BACKGROUND Endometrial cancer (EC) is a common malignancy worldwide. It is often preceded by endometrial hyperplasia, whose management and risk of neoplastic progression vary. Previously, we have shown that the tetraspan protein Epithelial Membrane Protein-2 (EMP2) is a prognostic indicator for EC aggressiveness and survival. Here we validate the expression of EMP2 in EC, and further examine whether EMP2 expression within preneoplastic lesions is an early prognostic biomarker for EC development. METHODS A tissue microarray (TMA) was constructed with a wide representation of benign and malignant endometrial samples. The TMA contains a metachronous cohort of cases from individuals who either developed or did not develop EC. Intensity and frequency of EMP2 expression were assessed using immunohistochemistry. RESULTS There was a stepwise, statistically significant increase in the average EMP2 expression from benign to hyperplasia to atypia to EC. Furthermore, detailed analysis of EMP2 expression in potentially premalignant cases demonstrated that EMP2 positivity was a strong predictor for EC development. CONCLUSION EMP2 is an early predictor of EC development in preneoplastic lesions. In addition, combined with our previous findings, these results validate EMP2 as a novel biomarker for EC development. PMID:20578181

  13. Analysis of 25C-NBOMe in Seized Blotters by HPTLC and GC–MS

    PubMed Central

    Duffau, Boris; Camargo, Cristian; Kogan, Marcelo; Fuentes, Edwar; Cassels, Bruce Kennedy

    2016-01-01

    Use of unauthorized synthetic drugs is a serious forensic, regulatory, and public health issue. In this scenario, consumption of drug-impregnated blotters is very frequent. For decades, blotters have generally been impregnated with the potent hallucinogen lysergic acid diethylamide (LSD); however, since 2013 blotter stamps with an N-(2-methoxybenzyl)-substituted phenylethylamine hallucinogen, designated "NBOMe", have been seized in Chile. To address this issue with readily accessible laboratory equipment, we have developed and validated a new HPTLC method for the identification and quantitation of 25C-NBOMe in seized blotters and its confirmation by GC–MS. The proposed method was validated according to SWGTOX recommendations and is suitable for routine analysis of seized blotters containing 25C-NBOMe. With the validated method, we analyzed 15 real samples, in all cases finding 25C-NBOMe in a wide dosage range (701.0–1943.5 µg per blotter). In this situation, we can assume that NBOMes are replacing LSD as the main hallucinogenic drug consumed in blotters in Chile. PMID:27406128

  14. Validating an artificial intelligence human proximity operations system with test cases

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    2013-05-01

    An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses a risk to those humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by the erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.

  15. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge of ABMS, however, is the difficulty of validation and verification. Because of frequently emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  16. Validity of three clinical performance assessments of internal medicine clerks.

    PubMed

    Hull, A L; Hodder, S; Berger, B; Ginsberg, D; Lindheim, N; Quan, J; Kleinhenz, M E

    1995-06-01

    To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks. A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991. There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified. There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.
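    The convergent and divergent validity described above reduce to comparing correlations across a trait-method matrix: correlations between different methods measuring the same trait should exceed correlations between different methods measuring different traits. The sketch below is illustrative only, with simulated latent traits and arbitrarily chosen noise levels; `cef_skill`, `osce_skill`, and `nbme_knowledge` are hypothetical stand-ins for the three assessment methods.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
n_students = 410  # matches the cohort size quoted in the abstract
skill = [rng.gauss(0, 1) for _ in range(n_students)]      # latent clinical skill
knowledge = [rng.gauss(0, 1) for _ in range(n_students)]  # unrelated latent trait

# Two methods measuring the same trait (simulated CEF and OSCE ratings of skill)...
cef_skill = [t + rng.gauss(0, 1.2) for t in skill]
osce_skill = [t + rng.gauss(0, 1.2) for t in skill]
# ...and a different trait measured by a different method (simulated subject test).
nbme_knowledge = [t + rng.gauss(0, 1.2) for t in knowledge]

convergent = pearson(cef_skill, osce_skill)     # same trait, different methods
divergent = pearson(cef_skill, nbme_knowledge)  # different traits, different methods
```

The convergent correlation comes out moderate (as in the study's .29-.47 range) while the divergent one hovers near zero, which is the pattern the MTMM analysis looks for.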

  17. Cognitive-Motor Interference in an Ecologically Valid Street Crossing Scenario.

    PubMed

    Janouch, Christin; Drescher, Uwe; Wechsler, Konstantin; Haeger, Mathias; Bock, Otmar; Voelcker-Rehage, Claudia

    2018-01-01

    Laboratory-based research revealed that gait involves higher cognitive processes, leading to performance impairments when executed with a concurrent loading task. Deficits are especially pronounced in older adults. Theoretical approaches like the multiple resource model highlight the role of task similarity and associated attention distribution problems. It has been shown that in cases where these distribution problems are perceived as relevant to participants' risk of falls, older adults prioritize gait and posture over the concurrent loading task. Here we investigate whether findings on task similarity and task prioritization can be transferred to an ecologically valid scenario. Sixty-three younger adults (20-30 years of age) and 61 older adults (65-75 years of age) participated in a virtual street crossing simulation. The participants' task was to identify suitable gaps that would allow them to cross a simulated two-way street safely. To that end, participants walked on a manual treadmill that transferred their forward motion to forward displacements in a virtual city. The task was presented as a single task (crossing only) and as a multitask. In the multitask condition participants were asked, among others, to type in three-digit numbers that were presented either visually or auditorily. We found that for both age groups, street crossing as well as typing performance suffered under multitasking conditions. Impairments were especially pronounced for older adults (e.g., longer crossing initiation phase, more missed opportunities). However, younger and older adults did not differ in the speed and success rate of crossing. Further, deficits were stronger in the visual compared to the auditory task modality for most parameters. Our findings conform to earlier studies that found an age-related decline in multitasking performance in less realistic scenarios. However, task similarity effects were inconsistent and question the validity of the multiple resource model within ecologically valid scenarios.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Monroy, J.A., E-mail: antosan@gmail.com; Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co; Centro Internacional de Fisica, Bogota D.C.

    In the context of a semiclassical approach where vectorial gauge fields can be considered as classical fields, we obtain exact static solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time, for the cases n=1,2,3. As an application of the results obtained for the case n=3, we consider the solutions for the anti-de Sitter and Schwarzschild metrics. We show that these solutions have a confining behavior and can be considered as a first step in the study of the corrections to the spectra of quarkonia in a curved background. Since the solutions that we find in this work are valid also for the group U(1), the case n=2 is a description of (2+1)-dimensional electrodynamics in the presence of a point charge. For this case, the solution has a confining behavior and can be considered as an application of planar electrodynamics in a curved space-time. Finally, we find that the solution for the case n=1 is invariant under a parity transformation and has the form of a linear confining solution. Highlights: We study exact static confining solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time. The solutions found are a first step in the study of the corrections to the spectra of quarkonia in a curved background. An expression for the confinement potential in low dimensionality is found.

  19. Criminal profiling as expert witness evidence: The implications of the profiler validity research.

    PubMed

    Kocsis, Richard N; Palermo, George B

    The use and development of the investigative tool colloquially known as criminal profiling has steadily increased over the past five decades throughout the world. Coupled with this growth has been a diversification in the suggested range of applications for this technique. Possibly the most notable of these has been the attempted transition of the technique from a tool intended to assist police investigations into a form of expert witness evidence admissible in legal proceedings. Whilst case law in various jurisdictions has considered with mutual disinclination the evidentiary admissibility of criminal profiling, a disjunction has evolved between these judicial examinations and the scientifically vetted research testing the accuracy (i.e., validity) of the technique. This article offers an analysis of the research directly testing the validity of the criminal profiling technique and the extant legal principles considering its evidentiary admissibility. This analysis reveals that research findings concerning the validity of criminal profiling are surprisingly compatible with the extant legal principles. The overall conclusion is that a discrete form of crime behavioural analysis is supported by the profiler validity research and could be regarded as potentially admissible expert witness evidence. Finally, a number of theoretical connections are also identified concerning the skills and qualifications of individuals who may feasibly provide such expert testimony. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Proteochemometric Modeling of the Interaction Space of Carbonic Anhydrase and its Inhibitors: An Assessment of Structure-based and Sequence-based Descriptors.

    PubMed

    Rasti, Behnam; Namazi, Mohsen; Karimi-Jafari, M H; Ghasemi, Jahan B

    2017-04-01

    Due to its physiological and clinical roles, carbonic anhydrase (CA) is one of the most interesting case studies. There are different classes of CA inhibitors, including sulfonamides, polyamines, coumarins and dithiocarbamates (DTCs). However, many of them hardly act as selective inhibitors against a specific isoform. Therefore, finding highly selective inhibitors for the different isoforms of CA is still an ongoing project. Proteochemometric modeling (PCM) is able to model the bioactivity of multiple compounds against different isoforms of a protein, and is therefore extremely applicable when investigating the selectivity of different ligands towards different receptors. Given these facts, we applied PCM to investigate the interaction space and the structural properties that lead to selective inhibition of CA isoforms by some dithiocarbamates. Our models have provided interesting structural information that can be used to design compounds capable of inhibiting different isoforms of CA with improved selectivity. The validity and predictivity of the models were confirmed by both internal and external validation methods, while a Y-scrambling approach was applied to assess the robustness of the models. To prove the reliability and applicability of our findings, we showed how ligand-receptor selectivity is affected by removing any of these critical findings from the modeling process. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
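    The Y-scrambling robustness check mentioned above can be illustrated generically: refit the model after randomly permuting the response values and verify that the scrambled fits are much worse than the real one. The snippet below is a minimal sketch using a simple least-squares line and made-up data, not the PCM models or descriptors of the study.

```python
import random

rng = random.Random(0)
x = [rng.uniform(0, 1) for _ in range(60)]
y = [2.0 * xi + rng.gauss(0, 0.1) for xi in x]  # hypothetical linear activity relation

def r2_linear(x, y):
    """R^2 of a simple least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

real_r2 = r2_linear(x, y)

# Y-scrambling: repeatedly permute the responses and refit.
scrambled_r2 = []
for _ in range(20):
    ys = y[:]
    rng.shuffle(ys)
    scrambled_r2.append(r2_linear(x, ys))
```

A robust model shows a large gap between `real_r2` and every scrambled fit; if scrambled fits score comparably, the apparent model quality is likely chance correlation.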

  1. Study methods, recruitment, socio-demographic findings and demographic representativeness in the OPPERA study

    PubMed Central

    Slade, Gary D.; Bair, Eric; By, Kunthel; Mulkey, Flora; Baraian, Cristina; Rothwell, Rebecca; Reynolds, Maria; Miller, Vanessa; Gonzalez, Yoly; Gordon, Sharon; Ribeiro-Dasilva, Margarete; Lim, Pei Feng; Greenspan, Joel D; Dubner, Ron; Fillingim, Roger B; Diatchenko, Luda; Maixner, William; Dampier, Dawn; Knott, Charles; Ohrbach, Richard

    2011-01-01

    This paper describes methods used in the project “Orofacial Pain Prospective Evaluation and Risk Assessment” (OPPERA) and evaluates socio-demographic characteristics associated with temporomandibular disorders (TMD) in the OPPERA case-control study. Representativeness was investigated by comparing socio-demographic profiles of OPPERA participants with population census profiles of counties near study sites and by comparing age- and gender-associations with TMD in OPPERA and the 2007-09 US National Health Interview Survey. Volunteers aged 18-44 years were recruited at four US study sites: 3,263 people without TMD were enrolled into the prospective cohort study; 1,633 of them were selected as controls for the baseline case-control study. Cases were 185 volunteers with examiner-classified TMD. Distributions of some demographic characteristics among OPPERA participants differed from census profiles, although there was less difference in socio-economic profiles. The odds of TMD were associated with greater age within this 18-44-year range; females had three times the odds of TMD of males; and relative to non-Hispanic Whites, other racial groups had one-fifth the odds of TMD. Age- and gender-associations with chronic TMD were strikingly similar to associations observed in the US population. Assessments of representativeness in this demographically diverse group of community volunteers suggest that OPPERA case-control findings have good internal validity. PMID:22074749

  2. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes; and, given that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify a fissile system for which a diffusion theory solution would be adequate within these limitations.

  3. Scrutinizing A Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    ERIC Educational Resources Information Center

    Talbot, Robert M., III

    2017-01-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular…

  4. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
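    The paper's estimate follows from weighting each design error detected during BIM-based validation by the waste it would cause and the likelihood it would have been caught before construction anyway. The sketch below is a minimal illustration of that accounting; the error categories, waste volumes, probabilities, and project total are all hypothetical, not figures from the two Korean case studies.

```python
# Each detected design error carries an estimated waste volume (tonnes, hypothetical)
# and the likelihood it would have been caught before construction without BIM.
errors = [
    {"cause": "illogical design",        "waste_t": 12.0, "p_caught_without_bim": 0.2},
    {"cause": "discrepancy in drawings", "waste_t": 8.5,  "p_caught_without_bim": 0.5},
    {"cause": "missing information",     "waste_t": 4.0,  "p_caught_without_bim": 0.7},
]

def prevented_waste(errors):
    # Waste prevented by BIM = waste from errors that would NOT have been
    # caught before construction without the BIM-based validation step.
    return sum(e["waste_t"] * (1 - e["p_caught_without_bim"]) for e in errors)

total_without_bim = 180.0  # hypothetical total project waste without BIM (tonnes)
share = prevented_waste(errors) / total_without_bim * 100  # percent prevented
```

Aggregating such per-error estimates over all detected errors yields a project-level percentage, which is the form of the 4.3-15.2% range reported above.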

  5. Serbian translation of the 20-item Toronto Alexithymia Scale: psychometric properties and the new methodological approach in translating scales.

    PubMed

    Trajanović, Nikola N; Djurić, Vladimir; Latas, Milan; Milovanović, Srdjan; Jovanović, Aleksandar A; Djurić, Dusan

    2013-01-01

    Since the inception of the alexithymia construct in the 1970s, there has been a continuous effort to improve both its theoretical postulates and its clinical utility through the development, standardization and validation of assessment scales. The aim of this study was to validate the Serbian translation of the 20-item Toronto Alexithymia Scale (TAS-20) and to propose a new method of translating scales with the property of temporal stability. The scale was expertly translated by bilingual medical professionals and a linguist, and given to a sample of bilingual participants from the general population who completed both the English and the Serbian version of the scale one week apart. The findings showed that the Serbian version of the TAS-20 had good internal consistency reliability for the total scale (alpha=0.86) and acceptable reliability for the three factors (alpha=0.71-0.79). The analysis confirmed the validity and consistency of the Serbian translation of the scale, with an observed weakness of the factorial structure consistent with studies in other languages. The results also showed that the method of utilizing self-controlled bilingual subjects is a useful alternative to the back-translation method, particularly in cases of linguistically and structurally sensitive scales, or in cases where a larger sample is not available. This method, dubbed 'forth-translation', could be used to translate psychometric scales measuring properties that have temporal stability over a period of at least several weeks.
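    The internal consistency figures quoted above (alpha=0.86 for the total scale, 0.71-0.79 for the factors) are Cronbach's alpha values, computed from the item variances and the variance of the total score. A minimal sketch of the computation, using a made-up 3-item, 4-respondent dataset:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` holds one list of respondent scores per item."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (n-1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale answered by 4 respondents (illustrative values).
demo = [[3, 4, 2, 5],
        [2, 4, 3, 5],
        [3, 5, 2, 4]]
alpha = cronbach_alpha(demo)
```

Perfectly correlated items give alpha = 1; the closer the items track one another, the higher the alpha, which is what "internal consistency" quantifies.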

  6. Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis

    PubMed Central

    Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.

    2015-01-01

    Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are on a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of the diffusing biomolecules of interest or photochemical properties of the fluorescent tag, such as the mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to the best of our knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading, and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
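    Two of the operations discussed above, exponential correction for acquisition photofading and scaling of a recovery curve, can be sketched as follows. This is a generic illustration assuming a single measured fading rate `fade_rate` and a full-scale normalization; it is not the paper's hierarchical workflow, and the ordering caveat in the abstract (apply compatible scalings in the correct sequence) applies here too.

```python
import math

def correct_photofading(times, intensities, fade_rate):
    # Divide out exponential acquisition photofading: I_obs(t) = I_true(t) * exp(-k t),
    # where k (fade_rate) would be measured from an unbleached control region.
    return [I / math.exp(-fade_rate * t) for t, I in zip(times, intensities)]

def normalize_frap(intensities, prebleach, postbleach):
    # Full-scale normalization: prebleach level -> 1, first postbleach level -> 0.
    return [(I - postbleach) / (prebleach - postbleach) for I in intensities]
```

Applied in that order (fading correction first, then normalization), a synthetic recovery curve multiplied by an exponential fade is recovered exactly; reversing the order mixes the fade into the normalization constants and biases the result.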

  7. Combination of Adaptive Feedback Cancellation and Binaural Adaptive Filtering in Hearing Aids

    NASA Astrophysics Data System (ADS)

    Lombard, Anthony; Reindl, Klaus; Kellermann, Walter

    2009-12-01

    We study a system combining adaptive feedback cancellation and adaptive filtering connecting inputs from both ears for signal enhancement in hearing aids. For the first time, such a binaural system is analyzed in terms of system stability, convergence of the algorithms, and possible interaction effects. As major outcomes of this study, a new stability condition adapted to the considered binaural scenario is presented, some already existing and commonly used feedback cancellation performance measures for the unilateral case are adapted to the binaural case, and possible interaction effects between the algorithms are identified. For illustration purposes, a blind source separation algorithm has been chosen as an example for adaptive binaural spatial filtering. Experimental results for binaural hearing aids confirm the theoretical findings and the validity of the new measures.
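    Adaptive feedback cancellation is typically implemented with an adaptive FIR filter that learns the acoustic feedback path from the loudspeaker signal and subtracts its estimate from the microphone signal. The sketch below uses a plain NLMS update on a hypothetical 3-tap path; it is a generic illustration of the principle, not the specific binaural algorithm analyzed in the paper, and the path coefficients and step size are invented.

```python
import random

# Hypothetical 3-tap acoustic feedback path to be identified and cancelled.
true_path = [0.5, -0.3, 0.1]

rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]  # loudspeaker (reference) signal

w = [0.0] * 3   # adaptive filter taps (estimate of the feedback path)
mu = 0.5        # NLMS step size
eps = 1e-8      # regularization to avoid division by zero

for n in range(2, len(x)):
    frame = [x[n], x[n - 1], x[n - 2]]
    d = sum(h * xi for h, xi in zip(true_path, frame))  # feedback reaching the mic
    y = sum(wi * xi for wi, xi in zip(w, frame))        # filter's feedback estimate
    e = d - y                                           # residual after cancellation
    norm = sum(xi * xi for xi in frame) + eps
    # Normalized LMS update: step scaled by the input frame energy.
    w = [wi + mu * e * xi / norm for wi, xi in zip(w, frame)]
```

After convergence the taps `w` approximate `true_path` and the residual `e` vanishes; in a real hearing aid the incoming speech acts as disturbance to this identification, which is where the stability and interaction issues studied in the paper arise.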

  8. Genetic Similarities between Compulsive Overeating and Addiction Phenotypes: A Case for "Food Addiction"?

    PubMed

    Carlier, Nina; Marshe, Victoria S; Cmorejova, Jana; Davis, Caroline; Müller, Daniel J

    2015-12-01

    There exists a continuous spectrum of overeating, where at one extreme there are casual overindulgences and at the other a 'pathological' drive to consume palatable foods. It has been proposed that pathological eating behaviors may be the result of addictive appetitive behavior and loss of ability to regulate the consumption of highly processed foods containing refined carbohydrates, fats, salt, and caffeine. In this review, we highlight the genetic similarities underlying substance addiction phenotypes and overeating compulsions seen in individuals with binge eating disorder. We relate these similarities to findings from neuroimaging studies on reward processing and clinical diagnostic criteria based on addiction phenotypes. The abundance of similarities between compulsive overeating and substance addictions puts forth a case for a 'food addiction' phenotype as a valid, diagnosable disorder.

  9. Women in Science and Engineering Building Community Online

    NASA Astrophysics Data System (ADS)

    Kleinman, Sharon S.

    This article explores the constructs of online community and online social support and discusses a naturalistic case study of a public, unmoderated, online discussion group dedicated to issues of interest to women in science and engineering. The benefits of affiliation with OURNET (a pseudonym) were explored through participant observation over a 4-year period, telephone interviews with 21 subscribers, and content analysis of e-mail messages posted to the discussion group during a 125-day period. The case study findings indicated that through affiliation with the online discussion group, women in traditionally male-dominated fields expanded their professional networks, increased their knowledge, constituted and validated positive social identities, bolstered their self-confidence, obtained social support and information from people with a wide range of experiences and areas of expertise, and, most significantly, found community.

  10. Self-energy renormalization for inhomogeneous nonequilibrium systems and field expansion via complete set of time-dependent wavefunctions

    NASA Astrophysics Data System (ADS)

    Kuwahara, Y.; Nakamura, Y.; Yamanaka, Y.

    2018-04-01

    The way to determine the renormalized energy of inhomogeneous systems of a quantum field under an external potential is established for both equilibrium and nonequilibrium scenarios based on thermo field dynamics. The key step is to find an extension of the on-shell concept valid in the homogeneous case. In the nonequilibrium case, we expand the field operator by time-dependent wavefunctions that are solutions of the appropriately chosen differential equation, synchronizing with the temporal change of the thermal situation, and the quantum transport equation is derived from the renormalization procedure. Through numerical calculations of a triple-well model with a reservoir, we show that the number distribution and the time-dependent wavefunctions relax consistently to the correct equilibrium forms in the long-time limit.

  11. SORL1 variants and risk of late-onset Alzheimer's disease.

    PubMed

    Li, Yonghong; Rowland, Charles; Catanese, Joseph; Morris, John; Lovestone, Simon; O'Donovan, Michael C; Goate, Alison; Owen, Michael; Williams, Julie; Grupe, Andrew

    2008-02-01

    A recent study reported significant association of late-onset Alzheimer's disease (LOAD) with multiple single nucleotide polymorphisms (SNPs) and haplotypes in SORL1, a neuronal sortilin-related receptor protein known to be involved in the trafficking and processing of amyloid precursor protein. Here we attempted to validate this finding in three large, well-characterized case-control series. Approximately 2000 samples from the three series were individually genotyped for 12 SNPs, including the 10 reported significant SNPs and 2 that constitute the reported significant haplotypes. A total of 25 allelic and haplotypic association tests were performed. One SNP, rs2070045, was marginally replicated in the three sample sets combined (nominal P=0.035); however, this result does not remain significant when accounting for multiple comparisons. Further validation in other sample sets will be required to assess the true effects of SORL1 variants in LOAD.
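    The remark that a nominal P=0.035 across 25 tests does not survive adjustment for multiple comparisons can be checked with a Bonferroni correction (one standard choice; the abstract does not state which method the authors used). Only the P=0.035 value for rs2070045 is from the abstract; the remaining 24 p-values below are placeholders.

```python
def bonferroni_significant(p_values, alpha=0.05):
    # A result survives Bonferroni correction only if p <= alpha / (number of tests).
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# 25 allelic and haplotypic association tests: the reported nominal p-value for
# rs2070045, plus 24 placeholder non-significant p-values.
pvals = [0.035] + [0.5] * 24
survives = bonferroni_significant(pvals)
```

With 25 tests the corrected threshold is 0.05/25 = 0.002, so P=0.035 indeed fails to remain significant, matching the abstract's conclusion.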

  12. Commitment to sustainability: A content analysis of website for university organisations

    NASA Astrophysics Data System (ADS)

    Hasim, M. S.; Hashim, A. E.; Ariff, N. R. M.; Sapeciay, Z.; Abdullah, A. S.

    2018-02-01

    This research aims to investigate the commitment of organisations towards sustainability. In this context, ‘commitment’ refers to the extent of information provided by universities on their websites that demonstrates initiatives towards achieving sustainability goals. The objective of this study was to identify sustainability initiatives highlighted within university websites, using Australia as a case study. Thirty-nine (39) websites were reviewed, and web content analysis was performed on publicly available data, including any relevant accessible PDF documents attached to the universities’ websites. Specific website information was reviewed to detect sustainability themes in broad university management and operations (i.e., in general policies, corporate mission statements, research activities, positions available and strategies). The commitment of Australian universities was significant and well established, with a set of twenty (20) related themes identified. The findings have some limitations because the established themes emerged only from the websites’ content, without human validation, which possibly weakens the correlation between website information and organisations’ actual practice. This possibility is recognised, and for this reason further assessment may be advantageous to verify the findings. Therefore, further studies using other techniques, such as interviews or observations, are suggested to validate the data and reinforce the conclusions. An interesting aspect of this study is the validity of reviewing organisational websites to gauge actual practice; a number of researchers support this approach, as indicated in the methodology section of this paper.

  13. Three cases of donor-derived pulmonary tuberculosis in lung transplant recipients and review of 12 previously reported cases: opportunities for early diagnosis and prevention.

    PubMed

    Mortensen, E; Hellinger, W; Keller, C; Cowan, L S; Shaw, T; Hwang, S; Pegues, D; Ahmedov, S; Salfinger, M; Bower, W A

    2014-02-01

    Solid organ transplant recipients have a higher frequency of tuberculosis (TB) than the general population, with mortality rates of approximately 30%. Although donor-derived TB is reported to account for <5% of TB in solid organ transplants, the source of Mycobacterium tuberculosis infection is infrequently determined. We report 3 new cases of pulmonary TB in lung transplant recipients attributed to donor infection, and review the 12 previously reported cases to assess whether cases could have been prevented and whether any cases that might occur in the future could be detected and investigated more quickly. Specifically, we evaluate whether opportunities existed to determine TB risk on the basis of routine donor history, to expedite diagnosis through routine mycobacterial smears and cultures of respiratory specimens early post transplant, and to utilize molecular tools to investigate infection sources epidemiologically. On review, donor TB risk was present among 7 cases. Routine smears and cultures diagnosed 4 asymptomatic cases. Genotyping was used to support epidemiologic findings in 6 cases. Validated screening protocols, including microbiological testing and newer technologies (e.g., interferon-gamma release assays) to identify unrecognized M. tuberculosis infection in deceased donors, are warranted. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. The lack of selection bias in a snowball sampled case-control study on drug abuse.

    PubMed

    Lopes, C S; Rodrigues, L C; Sichieri, R

    1996-12-01

    Friend controls in matched case-control studies can be a potential source of bias, based on the assumption that friends are more likely to share exposure factors. This study evaluates the role of selection bias in a case-control study that used the snowball sampling method, based on friendship, for the selection of cases and controls. The cases selected for the study were drug abusers located in the community. Exposure was defined by the presence of at least one psychiatric diagnosis. Psychiatric and drug abuse/dependence diagnoses were made according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) criteria. Cases and controls were matched on sex, age and friendship. Selection bias was measured by comparing the proportion of exposed controls selected by exposed cases (p1) with the proportion of exposed controls selected by unexposed cases (p2). If p1 = p2, then selection bias should not occur. The observed distribution of the 185 matched pairs having at least one psychiatric disorder showed a p1 value of 0.52 and a p2 value of 0.51, indicating no selection bias in this study. Our findings support the idea that the use of friend controls can produce a valid basis for a case-control study.
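
The study's bias check reduces to comparing two proportions; a minimal sketch with toy data (not the study's dataset):

```python
def selection_bias_check(pairs):
    """Compare the proportion of exposed controls chosen by exposed cases (p1)
    with the proportion chosen by unexposed cases (p2).
    `pairs` is a list of (case_exposed, control_exposed) booleans;
    p1 == p2 suggests no friendship-driven selection bias."""
    by_exposed = [ctrl for case, ctrl in pairs if case]
    by_unexposed = [ctrl for case, ctrl in pairs if not case]
    p1 = sum(by_exposed) / len(by_exposed)
    p2 = sum(by_unexposed) / len(by_unexposed)
    return p1, p2

# Toy pairs in which the control's exposure is roughly independent of the
# case's exposure, mirroring the paper's 0.52 vs 0.51 result.
pairs = ([(True, True)] * 52 + [(True, False)] * 48 +
         [(False, True)] * 51 + [(False, False)] * 49)
p1, p2 = selection_bias_check(pairs)
```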

  15. A Deep Analysis of Center Displacement in An Idealized Tropical Cyclone with Low-wavenumber Asymmetries

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Song, J.; Leng, H.

    2017-12-01

    The Tropical Cyclone (TC) center-finding technique plays an important role when diagnostic analyses of TC structure are performed, especially when dealing with low-wavenumber asymmetries. Previous works have established that the diagnosed structure of TCs can vary greatly depending on the displacement introduced by center-finding techniques. As it is difficult to define a true TC center in the real world, this work explores how low-wavenumber azimuthal Fourier analyses vary with center displacement, using idealized, parametric TC-like vortices with different perturbation structures. It is shown that the error is sensitive to the location and radial structure of the added perturbation. When azimuthal wavenumber 1 and 3 asymmetries are added, increasing the radial shear of the initial asymmetries significantly enhances the corresponding spectral energy around the radius of maximum wind (RMW), and also has a great effect on the spectral energy of wavenumber 2. By contrast, the wavenumber 2 cases show a reduction from 1 RMW outward as shear increases, with little effect on the spectral energy of wavenumber 1 or 2. Previous findings indicated that the aliasing depends on the placement of the center relative to the location of the asymmetries, and this remains valid in these shearing situations. Moreover, the aliasing caused by phase displacement is found to be less sensitive to the radial shear in the wavenumber 2 and 3 cases, while it shows a significant amplification and deformation when a wavenumber 1 asymmetry is added.
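
A minimal sketch of the kind of azimuthal Fourier (wavenumber) analysis described above, using a hypothetical ring of wind values rather than the paper's parametric vortices:

```python
import cmath
import math

def azimuthal_amplitudes(ring, max_wavenumber=3):
    """Amplitudes of low azimuthal wavenumbers for a field sampled at
    equally spaced azimuths on a ring around an assumed TC center."""
    n = len(ring)
    amps = []
    for k in range(max_wavenumber + 1):
        # Discrete Fourier coefficient for azimuthal wavenumber k.
        c = sum(v * cmath.exp(-1j * k * 2 * math.pi * i / n)
                for i, v in enumerate(ring)) / n
        # k = 0 is the azimuthal mean; k > 0 amplitudes carry a factor 2.
        amps.append(abs(c) if k == 0 else 2 * abs(c))
    return amps

# Toy ring: azimuthal-mean wind plus a pure wavenumber-1 asymmetry.
theta = [2 * math.pi * i / 360 for i in range(360)]
ring = [40.0 + 5.0 * math.cos(t) for t in theta]
a0, a1, a2, a3 = azimuthal_amplitudes(ring)
# With the true center, a0 = 40, a1 = 5 and a2, a3 vanish; a displaced
# center aliases wavenumber-1 energy into the other wavenumbers.
```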

  16. Medical history and the onset of complex regional pain syndrome (CRPS).

    PubMed

    de Mos, M; Huygen, F J P M; Dieleman, J P; Koopman, J S H A; Stricker, B H Ch; Sturkenboom, M C J M

    2008-10-15

    Knowledge concerning the medical history prior to the onset of complex regional pain syndrome (CRPS) might provide insight into its risk factors and potential underlying disease mechanisms. To evaluate medical conditions prior to CRPS onset, a case-control study was conducted in the Integrated Primary Care Information (IPCI) project, a general practice (GP) database in the Netherlands. CRPS patients were identified from the records and validated through examination by the investigator (IASP criteria) or through specialist confirmation. Cases were matched to controls on age, gender and injury type. All diagnoses prior to the index date were assessed by manual review of the medical records. Some pre-specified medical conditions were studied for their association with CRPS, whereas all other diagnoses, grouped by pathogenesis, were tested in a hypothesis-generating approach. Of the identified 259 CRPS patients, 186 cases (697 controls) were included, based on validation by the investigator during a visit (102 of 134 visited patients) or on specialist confirmation (84 of 125 unvisited patients). A medical history of migraine (OR: 2.43, 95% CI: 1.18-5.02) and osteoporosis (OR: 2.44, 95% CI: 1.17-5.14) was associated with CRPS. In a recent history (1-year before CRPS), cases had more menstrual cycle-related problems (OR: 2.60, 95% CI: 1.16-5.83) and neuropathies (OR: 5.7; 95% CI: 1.8-18.7). In a sensitivity analysis, including only visited cases, asthma (OR: 3.0; 95% CI: 1.3-6.9) and CRPS were related. Psychological factors were not associated with CRPS onset. Because of the hypothesis-generating character of this study, the findings should be confirmed by other studies.
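
The odds ratios above come from matched analyses; as a simplified illustration only, an unmatched Woolf-type OR with a 95% CI can be computed from hypothetical 2x2 counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf's logit method) from a 2x2 table:
    a/b = exposed/unexposed cases, c/d = exposed/unexposed controls.
    Sketch only: the study itself used matched (conditional) analyses."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10 of 100 cases and 5 of 100 controls with a
# prior history of some condition of interest.
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
# OR is about 2.11; a CI that spans 1 would mean no significant association.
```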

  17. Prevalence of tuberculous infection and incidence of tuberculosis; a re-assessment of the Styblo rule

    PubMed Central

    van der Werf, MJ; Borgdorff, MW

    2008-01-01

    Objective To evaluate the validity of the fixed mathematical relationship between the annual risk of tuberculous infection (ARTI), the prevalence of smear-positive tuberculosis (TB) and the incidence of smear-positive TB specified as the Styblo rule, which TB control programmes use to estimate the incidence of TB disease at a population level and the case detection rate. Methods Population-based tuberculin surveys and surveys on prevalence of smear-positive TB since 1975 were identified through a literature search. For these surveys, the ratio between the number of tuberculous infections (based on ARTI estimates) and the number of smear-positive TB cases was calculated and compared to the ratio of 8 to 12 tuberculous infections per prevalent smear-positive TB case as part of the Styblo rule. Findings Three countries had national population-based data on both ARTI and prevalence of smear-positive TB for more than one point in time. In China the ratio ranged from 3.4 to 5.8, in the Philippines from 2.6 to 4.4, and in the Republic of Korea, from 3.2 to 4.7. All ratios were markedly lower than the ratio that is part of the Styblo rule. Conclusion According to recent country data, there are typically fewer than 8 to 12 tuberculous infections per prevalent smear-positive TB case, and it remains unclear whether this ratio varies significantly among countries. The decrease in the ratio compared to the Styblo rule probably relates to improvements in the prompt treatment of TB disease (by national TB programmes). A change in the number of tuberculous infections per prevalent smear-positive TB case in population-based surveys makes the assumed fixed mathematical relationship between ARTI and incidence of smear-positive TB no longer valid. PMID:18235886
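
The ratio at the heart of the Styblo rule reduces to simple arithmetic; a sketch with illustrative figures (not the surveys' raw data):

```python
def infections_per_prevalent_case(arti, prevalence_per_100k):
    """Annual tuberculous infections per prevalent smear-positive TB case.
    arti: annual risk of tuberculous infection (e.g. 0.01 = 1% per year);
    prevalence_per_100k: prevalent smear-positive TB cases per 100,000."""
    infections_per_100k = arti * 100_000
    return infections_per_100k / prevalence_per_100k

# Illustrative classic figures: a 1% ARTI with ~100 prevalent smear-positive
# cases per 100,000 gives 10 infections per case, inside the rule's 8-12
# band; the surveys reviewed above found ratios between roughly 2.6 and 5.8.
ratio = infections_per_prevalent_case(0.01, 100)
```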

  18. Social validity in single-case research: A systematic literature review of prevalence and application.

    PubMed

    Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W

    2018-03-01

    Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly-ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low, but stable prevalence with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. On the validity of the use of a localized approximation for helical beams. I. Formal aspects

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; André Ambrosio, Leonardo

    2018-03-01

    The description of an electromagnetic beam for use in light scattering theories may be carried out by using an expansion over vector spherical wave functions with expansion coefficients expressed in terms of Beam Shape Coefficients (BSCs). A celebrated method to evaluate these BSCs has been the use of localized approximations (with several existing variants). We recently established that the use of any existing localized approximation is of limited validity in the case of Bessel and Mathieu beams. In the present paper, we issue a warning against the use of any existing localized approximation in the case of helical beams. More specifically, we demonstrate that a procedure used to validate any existing localized approximation fails in the case of helical beams. Numerical computations in a companion paper will confirm that existing localized approximations are of limited validity in the case of helical beams.

  20. Guidelines for Reporting Case Studies on Extracorporeal Treatments in Poisonings: Methodology

    PubMed Central

    Lavergne, Valéry; Ouellet, Georges; Bouchard, Josée; Galvao, Tais; Kielstein, Jan T; Roberts, Darren M; Kanji, Salmaan; Mowry, James B; Calello, Diane P; Hoffman, Robert S; Gosselin, Sophie; Nolin, Thomas D; Goldfarb, David S; Burdmann, Emmanuel A; Dargan, Paul I; Decker, Brian Scott; Hoegberg, Lotte C; Maclaren, Robert; Megarbane, Bruno; Sowinski, Kevin M; Yates, Christopher; Mactier, Robert; Wiegand, Timothy; Ghannoum, Marc

    2014-01-01

    A literature review performed by the EXtracorporeal TReatments In Poisoning (EXTRIP) workgroup highlighted deficiencies in the existing literature, especially the reporting of case studies. Although general reporting guidelines exist for case studies, there are none in the specific field of extracorporeal treatments in toxicology. Our goal was to construct and propose a checklist that systematically outlines the minimum essential items to be reported in a case study of poisoned patients undergoing extracorporeal treatments. Through a modified two-round Delphi technique, panelists (mostly chosen from the EXTRIP workgroup) were asked to vote on the pertinence of a set of items to identify those considered minimally essential for reporting complete and accurate case reports. Furthermore, independent raters validated the clarity of each selected item between each round of voting. All case reports containing data on extracorporeal treatments in poisoning published in Medline in 2011 were reviewed during the external validation rounds. Twenty-one panelists (20 from the EXTRIP workgroup and an invited expert on pharmacology reporting guidelines) participated in the modified Delphi technique. This group included journal editors and experts in nephrology, clinical toxicology, critical care medicine, emergency medicine, and clinical pharmacology. Three independent raters participated in the validation rounds. Panelists voted on a total of 144 items in the first round and 137 items in the second round, with response rates of 96.3% and 98.3%, respectively. Twenty case reports were evaluated at each validation round, and the independent raters' response rates were 99.6% and 98.8% per validation round. The final checklist consists of 114 items considered essential for case study reporting. 
This methodology of alternate voting and external validation rounds was useful in developing the first reporting guideline for case studies in the field of extracorporeal treatments in poisoning. We believe that this guideline will improve the completeness and transparency of published case reports and that the systematic aggregation of information from case reports may provide early signals of effectiveness and/or harm, thereby improving healthcare decision-making. PMID:24890576

  1. Instruments Measuring Integrated Care: A Systematic Review of Measurement Properties

    PubMed Central

    BAUTISTA, MARY ANN C.; NURJONO, MILAWATY; DESSERS, EZRA; VRIJHOEF, HUBERTUS JM

    2016-01-01

    Policy Points: Investigations on systematic methodologies for measuring integrated care should coincide with the growing interest in this field of research. A systematic review of instruments provides insights into integrated care measurement, including setting the research agenda for validating available instruments and informing the decision to develop new ones. This study is the first systematic review of instruments measuring integrated care with an evidence synthesis of the measurement properties. We found 209 index instruments measuring different constructs related to integrated care; the strength of evidence on the adequacy of the majority of their measurement properties remained largely unassessed. Context Integrated care is an important strategy for increasing health system performance. Despite its growing significance, detailed evidence on the measurement properties of integrated care instruments remains vague and limited. Our systematic review aims to provide evidence on the state of the art in measuring integrated care. Methods Our comprehensive systematic review framework builds on the Rainbow Model for Integrated Care (RMIC). We searched MEDLINE/PubMed for published articles on the measurement properties of instruments measuring integrated care and identified eligible articles using a standard set of selection criteria. We assessed the methodological quality of every validation study reported using the COSMIN checklist and extracted data on study and instrument characteristics. We also evaluated the measurement properties of each examined instrument per validation study and provided a best evidence synthesis on the adequacy of measurement properties of the index instruments. Findings From the 300 eligible articles, we assessed the methodological quality of 379 validation studies from which we identified 209 index instruments measuring integrated care constructs. 
The majority of studies reported on instruments measuring constructs related to care integration (33%) and patient‐centered care (49%); fewer studies measured care continuity/comprehensive care (15%) and care coordination/case management (3%). We mapped 84% of the measured constructs to the clinical integration domain of the RMIC, with fewer constructs related to the domains of professional (3.7%), organizational (3.4%), and functional (0.5%) integration. Only 8% of the instruments were mapped to a combination of domains; none were mapped exclusively to the system or normative integration domains. The majority of instruments were administered to either patients (60%) or health care providers (20%). Of the measurement properties, responsiveness (4%), measurement error (7%), and criterion (12%) and cross‐cultural validity (14%) were less commonly reported. We found <50% of the validation studies to be of good or excellent quality for any of the measurement properties. Only a minority of index instruments showed strong evidence of positive findings for internal consistency (15%), content validity (19%), and structural validity (7%); with moderate evidence of positive findings for internal consistency (14%) and construct validity (14%). Conclusions Our results suggest that the quality of measurement properties of instruments measuring integrated care is in need of improvement with the less‐studied constructs and domains to become part of newly developed instruments. PMID:27995711

  2. Improving early detection of childhood depression in mental health care: the Children's Depression Screener (ChilD-S).

    PubMed

    Allgaier, Antje-Kathrin; Krick, Kathrin; Opitz, Ansgar; Saravo, Barbara; Romanos, Marcel; Schulte-Körne, Gerd

    2014-07-30

    Diagnosing childhood depression can pose a challenge, even for mental health specialists. Screening tools can aid clinicians within the initial step of the diagnostic process. For the first time, the Children's Depression Screener (ChilD-S) is validated in a mental health setting as a novel field of application beyond the previously examined pediatric setting. Based on a structured interview, DSM-IV-TR diagnoses of depression were made for 79 psychiatric patients aged 9-12, serving as the gold standard for validation. For assessing criterion validity, receiver operating characteristic (ROC) curves were calculated. Point prevalence of major depression and dysthymia was 28%. Diagnostic accuracy in terms of the area under the ROC curve was high (0.97). At the optimal cut-off point ≥12 according to Youden's index, sensitivity was 0.91 and specificity was 0.81. The findings suggest that the ChilD-S is a valid screening instrument for childhood depression not only in pediatric care but also in mental health settings. As a brief tool it can easily be implemented into the daily clinical practice of mental health professionals, facilitating the diagnostic process, especially in cases of comorbid depression. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
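
Youden's index, used above to pick the cut-off of ≥12, can be sketched as follows (hypothetical screener scores, not the study's data):

```python
def youden_optimal_cutoff(case_scores, control_scores, cutoffs):
    """Pick the screening cutoff maximizing Youden's J = sens + spec - 1,
    where a subject screens positive when score >= cutoff."""
    best = None
    for c in cutoffs:
        sens = sum(s >= c for s in case_scores) / len(case_scores)
        spec = sum(s < c for s in control_scores) / len(control_scores)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# Hypothetical ChilD-S-like scores for depressed vs non-depressed children.
cases = [13, 14, 12, 15, 11, 12, 13, 16, 12, 10]
controls = [5, 7, 9, 11, 12, 8, 6, 10, 11, 13]
cutoff, j, sens, spec = youden_optimal_cutoff(cases, controls, range(5, 18))
# For this toy data the optimal cutoff is >= 12 (sens 0.8, spec 0.8).
```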

  3. Defining asthma and assessing asthma outcomes using electronic health record data: a systematic scoping review.

    PubMed

    Al Sallakh, Mohammad A; Vasileiou, Eleftheria; Rodgers, Sarah E; Lyons, Ronan A; Sheikh, Aziz; Davies, Gwyneth A

    2017-06-01

    There is currently no consensus on approaches to defining asthma or assessing asthma outcomes using electronic health record-derived data. We explored these approaches in the recent literature and examined the clarity of reporting. We systematically searched for asthma-related articles published between January 1, 2014 and December 31, 2015, extracted the algorithms used to identify asthma patients and assess severity, control and exacerbations, and examined how the validity of these outcomes was justified. From 113 eligible articles, we found significant heterogeneity in the algorithms used to define asthma (n=66 different algorithms), severity (n=18), control (n=9) and exacerbations (n=24). For the majority of algorithms (n=106), validity was not justified. In the remaining cases, approaches ranged from using algorithms validated in the same databases to using nonvalidated algorithms that were based on clinical judgement or clinical guidelines. The implementation of these algorithms was suboptimally described overall. Although electronic health record-derived data are now widely used to study asthma, the approaches being used are significantly varied and are often underdescribed, rendering it difficult to assess the validity of studies and compare their findings. Given the substantial growth in this body of literature, it is crucial that scientific consensus is reached on the underlying definitions and algorithms. Copyright ©ERS 2017.

  4. The Chinese version of the Child and Adolescent Scale of Environment (CASE-C): validity and reliability for children with disabilities in Taiwan.

    PubMed

    Kang, Lin-Ju; Yen, Chia-Feng; Bedell, Gary; Simeonsson, Rune J; Liou, Tsan-Hon; Chi, Wen-Chou; Liu, Shu-Wen; Liao, Hua-Fang; Hwang, Ai-Wen

    2015-03-01

    Measurement of children's participation and environmental factors is a key component of the assessment in the new Disability Evaluation System (DES) in Taiwan. The Child and Adolescent Scale of Environment (CASE) was translated into Traditional Chinese (CASE-C) and used for assessing environmental factors affecting the participation of children and youth with disabilities in the DES. The aim of this study was to validate the CASE-C. Participants were 614 children and youth aged 6.0-17.9 years with disabilities, with the largest condition group comprised of children with intellectual disability (61%). Internal structure, internal consistency, test-retest reliability, convergent validity, and discriminant (known group) validity were examined using exploratory factor analyses, Cronbach's α coefficient, intra-class correlation coefficients (ICC), correlation analyses, and univariate ANOVAs. A three-factor structure (Family/Community Resources, Assistance/Attitude Supports, and Physical Design Access) of the CASE-C was produced with 38% variance explained. The CASE-C had adequate internal consistency (Cronbach's α=.74-.86) and test-retest reliability (ICCs=.73-.90). Children and youth with disabilities who had higher levels of severity of impairment encountered more environmental barriers and those experiencing more environmental problems also had greater restrictions in participation. The CASE-C scores were found to distinguish children on the basis of disability condition and impairment severity, but not on the basis of age or sex. The CASE-C is valid for assessing environmental problems experienced by children and youth with disabilities in Taiwan. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Keeping children safe at home: protocol for three matched case–control studies of modifiable risk factors for falls

    PubMed Central

    Kendrick, Denise; Stewart, Jane; Clacy, Rose; Coffey, Frank; Cooper, Nicola; Coupland, Carol; Hayes, Mike; McColl, Elaine; Reading, Richard; Sutton, Alex; M L Towner, Elizabeth; Craig Watson, Michael

    2012-01-01

    Background Childhood falls result in considerable morbidity, mortality and health service use. Despite this, little evidence exists on protective factors or effective falls prevention interventions in young children. Objectives To estimate ORs for three types of medically attended fall injuries in young children in relation to safety equipment, safety behaviours and hazard reduction and explore differential effects by child and family factors and injury severity. Design Three multicentre case–control studies in UK hospitals with validation of parental reported exposures using home observations. Cases are aged 0–4 years with a medically attended fall injury occurring at home, matched on age and sex with community controls. Children attending hospital for other types of injury will serve as unmatched hospital controls. Matched analyses will use conditional logistic regression to adjust for potential confounding variables. Unmatched analyses will use unconditional logistic regression, adjusted for age, sex, deprivation and distance from hospital in addition to other confounders. Each study requires 496 cases and 1984 controls to detect an OR of 0.7, with 80% power, 5% significance level, a correlation between cases and controls of 0.1 and a range of exposure prevalences. Main outcome measures Falls on stairs, on one level and from furniture. Discussion As the largest in the field to date, these case control studies will adjust for potential confounders, validate measures of exposure and investigate modifiable risk factors for specific falls injury mechanisms. Findings should enhance the evidence base for falls prevention for young children. PMID:22628151

  6. Measurement Issues: Screening and diagnostic instruments for autism spectrum disorders – lessons from research and practice

    PubMed Central

    Charman, Tony; Gotham, Katherine

    2012-01-01

    Background and Scope Significant progress has been made over the past two decades in the development of screening and diagnostic instruments for autism spectrum disorders (ASD). This article reviews this progress, including recent innovations, focussing on those instruments for which the strongest research data on validity exists, and then turns to addressing issues arising from their use in clinical settings. Findings Research studies have evaluated the ability of screens to prospectively identify cases of ASD in population-based and clinically-referred samples, as well as the accuracy of diagnostic instruments to map onto ‘gold standard’ clinical best estimate diagnosis. However, extension of the findings to clinical services must be done with caution, with a full understanding that instrument properties are sample-specific. Furthermore, we are limited by the lack of a true test for ASD, which remains a behaviourally-defined disorder. In addition screening and diagnostic instruments help clinicians least in the cases where they are most in want of direction, since their accuracy will always be lower for marginal cases. Conclusion Instruments help clinicians to collect detailed, structured information and increase accuracy and reliability of referral for in-depth assessment and recommendations for support, but further research is needed to refine their effective use in clinical settings. PMID:23539140

  7. Validity threats: overcoming interference with proposed interpretations of assessment data.

    PubMed

    Downing, Steven M; Haladyna, Thomas M

    2004-03-01

    Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.

  8. Development and validation of an epidemiologic case definition of epilepsy for use with routinely collected Australian health data.

    PubMed

    Tan, Michael; Wilson, Ian; Braganza, Vanessa; Ignatiadis, Sophia; Boston, Ray; Sundararajan, Vijaya; Cook, Mark J; D'Souza, Wendyl J

    2015-10-01

    We report the diagnostic validity of a selection algorithm for identifying epilepsy cases. Retrospective validation study of International Classification of Diseases 10th Revision Australian Modification (ICD-10AM)-coded hospital records and pharmaceutical data sampled from 300 consecutive potential epilepsy-coded cases and 300 randomly chosen cases without epilepsy from 3/7/2012 to 10/7/2013. Two epilepsy specialists independently validated the diagnosis of epilepsy. A multivariable logistic regression model was fitted to identify the optimum coding algorithm for epilepsy and was internally validated. One hundred fifty-eight out of three hundred (52.6%) epilepsy-coded records and 0/300 (0%) nonepilepsy records were confirmed to have epilepsy. The kappa for interrater agreement was 0.89 (95% CI=0.81-0.97). The model utilizing epilepsy (G40), status epilepticus (G41) and ≥1 antiepileptic drug (AED) conferred the highest positive predictive value of 81.4% (95% CI=73.1-87.9) and a specificity of 99.9% (95% CI=99.9-100.0). The area under the receiver operating curve was 0.90 (95% CI=0.88-0.93). When combined with pharmaceutical data, the precision of case identification for epilepsy data linkage design was considerably improved and could provide considerable potential for efficient and reasonably accurate case ascertainment in epidemiological studies. Copyright © 2015 Elsevier Inc. All rights reserved.
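
The headline metrics above are simple ratios of validation counts; a sketch using the raw sample figures from the abstract (note the published 99.9% specificity was derived against the full source population, not this 300/300 validation sample):

```python
def validation_metrics(tp, fp, tn, fn):
    """PPV, specificity and sensitivity of a case-finding algorithm
    validated against specialist diagnosis."""
    ppv = tp / (tp + fp)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return ppv, specificity, sensitivity

# 158 of 300 epilepsy-coded records were confirmed (true positives) and
# 0 of 300 records without an epilepsy code were cases, so the unrefined
# ICD code alone has a PPV of about 0.53 in this sample.
ppv, spec, sens = validation_metrics(tp=158, fp=142, tn=300, fn=0)
```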

  9. Validation Studies for Diet History Questionnaire II | EGRP/DCCPS/NCI/NIH

    Cancer.gov

    Links to validation findings from the original Diet History Questionnaire (DHQ). These findings are unlikely to be greatly affected by the minimal modifications to the DHQ II food list and the updated nutrient database.

  10. Multimodal Navigation in Endoscopic Transsphenoidal Resection of Pituitary Tumors using Image-based Vascular and Cranial Nerve Segmentation: A Prospective Validation Study

    PubMed Central

    Dolati, Parviz; Eichberg, Daniel; Golby, Alexandra; Zamani, Amir; Laws, Edward

    2016-01-01

    Introduction Transsphenoidal surgery (TSS) is a well-known approach for the treatment of pituitary tumors. However, lateral misdirection and vascular damage, intraoperative CSF leakage, and optic nerve and vascular injuries are all well-known complications, and the risk of adverse events is more likely in less experienced hands. This prospective study was conducted to validate the accuracy of image-based segmentation in localization of neurovascular structures during TSS. Methods Twenty-five patients with pituitary tumors underwent preoperative 3T MRI, which included thin-sectioned 3D space T2, 3D Time of Flight and MPRAGE sequences. Images were reviewed by an expert independent neuroradiologist. Imaging sequences were loaded in BrainLab iPlanNet (16/25 cases) or Stryker (9/25 cases) image guidance platforms for segmentation and pre-operative planning. After patient registration into the neuronavigation system and subsequent surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe on or as close as possible to the target. The audible pulsations of the bilateral ICA were confirmed using a micro-Doppler probe. Results Pre-operative segmentation of the ICA and cavernous sinus matched with the intra-operative endoscopic and micro-Doppler findings in all cases (Dice Similarity Coefficient = 1). This information reassured the surgeons with regard to the lateral extent of bone removal at the sellar floor and the limits of lateral exploration. Excellent correspondence between image-based segmentation and the endoscopic view was also evident at the surface of the tumor and at the tumor-normal gland interfaces. This assisted in preventing unnecessary removal of the normal pituitary gland. Image-guidance assisted the surgeons in localizing the optic nerve and chiasm in 64% of the cases and the diaphragma sella in 52% of cases, which helped to determine the limits of upward exploration and to decrease the risk of CSF leakage. The accuracy of the measurements was 1.20 ± 0.21 mm (mean ± SD). Conclusion Image-based pre-operative vascular and neural element segmentation, especially with 3D reconstruction, is highly informative preoperatively and potentially could assist less experienced neurosurgeons in preventing vascular and neural injury during TSS. Additionally, the accuracy found in this study is comparable to previously reported neuronavigation measurements. This novel preliminary study is encouraging for future prospective intraoperative validation with larger numbers of patients. PMID:27302558

  11. DNA methylation as a predictor of fetal alcohol spectrum disorder.

    PubMed

    Lussier, Alexandre A; Morin, Alexander M; MacIsaac, Julia L; Salmon, Jenny; Weinberg, Joanne; Reynolds, James N; Pavlidis, Paul; Chudley, Albert E; Kobor, Michael S

    2018-01-01

    Fetal alcohol spectrum disorder (FASD) is a developmental disorder that manifests through a range of cognitive, adaptive, physiological, and neurobiological deficits resulting from prenatal alcohol exposure. Although the North American prevalence is currently estimated at 2-5%, FASD has proven difficult to identify in the absence of the overt physical features characteristic of fetal alcohol syndrome. As interventions may have the greatest impact at an early age, accurate biomarkers are needed to identify children at risk for FASD. Building on our previous work identifying distinct DNA methylation patterns in children and adolescents with FASD, we have attempted to validate these associations in a different clinical cohort and to use our DNA methylation signature to develop a possible epigenetic predictor of FASD. Genome-wide DNA methylation patterns were analyzed using the Illumina HumanMethylation450 array in the buccal epithelial cells of a cohort of 48 individuals aged 3.5-18 (24 FASD cases, 24 controls). The DNA methylation predictor of FASD was built using a stochastic gradient boosting model on our previously published dataset of FASD cases and controls (GSE80261). The predictor was tested on the current dataset and an independent dataset of 48 autism spectrum disorder cases and 48 controls (GSE50759). We validated findings from our previous study that identified a DNA methylation signature of FASD, replicating the altered DNA methylation levels of 161/648 CpGs in this independent cohort, which may represent a robust signature of FASD in the epigenome. We also generated a predictive model of FASD using machine learning in a subset of our previously published cohort of 179 samples (83 FASD cases, 96 controls), which was tested in this novel cohort of 48 samples and resulted in a moderately accurate predictor of FASD status. Upon testing the algorithm in an independent cohort of individuals with autism spectrum disorder, we did not detect any bias towards autism, sex, age, or ethnicity. These findings further support the association of FASD with distinct DNA methylation patterns, while providing a possible entry point towards the development of epigenetic biomarkers of FASD.
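
    Stochastic gradient boosting, the model family used here, fits a sequence of weak learners to the loss gradient on random row subsamples. A minimal sketch with decision stumps and a synthetic stand-in for methylation beta values; the data, stump learner, and hyperparameters are all illustrative, not the paper's pipeline:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for methylation beta values: 48 samples x 10 CpG sites,
# with site 0 carrying a hypothetical disease-associated signal.
N, D = 48, 10
X = [[random.random() for _ in range(D)] for _ in range(N)]
y = [1.0 if row[0] > 0.5 else 0.0 for row in X]

def fit_stump(rows, residual):
    # Single-feature threshold split minimising squared error to the residual.
    best, best_err = None, float("inf")
    for j in range(D):
        for t in (0.25, 0.5, 0.75):  # coarse candidate thresholds
            left = {i for i in range(len(rows)) if rows[i][j] <= t}
            if not left or len(left) == len(rows):
                continue
            right = set(range(len(rows))) - left
            lv = sum(residual[i] for i in left) / len(left)
            rv = sum(residual[i] for i in right) / len(right)
            err = sum((residual[i] - (lv if i in left else rv)) ** 2
                      for i in range(len(rows)))
            if err < best_err:
                best, best_err = (j, t, lv, rv), err
    return best

def predict_stump(stump, row):
    j, t, lv, rv = stump
    return lv if row[j] <= t else rv

def boost(n_rounds=40, lr=0.3, subsample=0.7):
    # Each stump is fit to the logistic-loss gradient (y - p) on a random
    # subsample of rows; subsample < 1 is what makes the boosting "stochastic".
    F = [0.0] * N
    stumps = []
    for _ in range(n_rounds):
        p = [1 / (1 + math.exp(-f)) for f in F]
        residual = [y[i] - p[i] for i in range(N)]
        idx = random.sample(range(N), int(subsample * N))
        stump = fit_stump([X[i] for i in idx], [residual[i] for i in idx])
        stumps.append(stump)
        F = [F[i] + lr * predict_stump(stump, X[i]) for i in range(N)]
    return stumps, F

stumps, F = boost()
acc = sum((f > 0) == (yi == 1.0) for f, yi in zip(F, y)) / N
print(f"training accuracy: {acc:.2f}")
```

    The synthetic signal here is trivially separable, so the sketch only shows the mechanics of the method, not a realistic accuracy estimate.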

  12. Educational testing validity and reliability in pharmacy and medical education literature.

    PubMed

    Hoover, Matthew J; Jung, Rose; Jacobs, David M; Peeters, Michael J

    2013-12-16

    To evaluate and compare the reliability and validity of educational testing reported in pharmacy education journals to medical education literature. Descriptions of validity evidence sources (content, construct, criterion, and reliability) were extracted from articles that reported educational testing of learners' knowledge, skills, and/or abilities. The findings of 108 pharmacy education articles that used educational testing were compared with those of 198 medical education articles. For pharmacy educational testing, 14 articles (13%) reported more than 1 validity evidence source, while 83 articles (77%) reported 1 validity evidence source and 11 articles (10%) did not have evidence. Among validity evidence sources, content validity was reported most frequently. Compared with pharmacy education literature, more medical education articles reported both validity and reliability (59%; p<0.001). While there were more scholarship of teaching and learning (SoTL) articles in pharmacy education compared to medical education, validity and reliability reporting were limited in the pharmacy education literature.

  13. When is hub gene selection better than standard meta-analysis?

    PubMed

    Langfelder, Peter; Mischel, Paul S; Horvath, Steve

    2013-01-01

    Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) Finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as well as (if not better than) a consensus network approach in terms of validation success (criterion 2).
The article also reports a comparison of meta-analysis techniques applied to gene expression data and presents novel R functions for carrying out consensus network analysis, network-based screening, and meta-analysis.
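
    The meta-analysis side of this comparison combines per-study evidence for each gene into one p-value. A minimal sketch using Stouffer's Z, one common combination rule (whether it matches the paper's exact method is not assumed here):

```python
from statistics import NormalDist

def stouffer(pvals):
    # Combine one-sided p-values across independent studies:
    # convert each p to a z-score, sum, renormalise, convert back.
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvals) / len(pvals) ** 0.5
    return 1 - nd.cdf(z)

# A gene only weakly significant in three independent data sets becomes
# strongly significant once the evidence is pooled.
print(stouffer([0.04, 0.03, 0.05]))
```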

  14. Trends in testing behaviours for hepatitis C virus infection and associated determinants: results from population-based laboratory surveillance in Alberta, Canada (1998-2001).

    PubMed

    Jayaraman, G C; Lee, B; Singh, A E; Preiksaitis, J K

    2007-04-01

    Little is currently known about hepatitis C virus (HCV) test seeking behaviours at the population level. Given the centralized nature of testing for HCV infection in the province of Alberta, Canada, we had an opportunity to examine HCV testing behaviour at the population level on all newly diagnosed HCV-positive cases using laboratory data to validate the time and number of prior tests for each case. Record linkage identified 3323, 2937, 2660 and 2703 newly diagnosed cases of HCV infections in Alberta during 1998, 1999, 2000 and 2001, respectively, corresponding to age-adjusted rates of 149.8, 129, 114.3 and 113.7 per 100,000 population during these years, respectively. Results from secondary analyses of laboratory data suggest that the majority of HCV cases (95.3%) who were newly diagnosed between 1998 and 2001 were first-time testers for HCV infection. Among repeat testers, a negative test result within 1 year prior to the first positive test report suggests that 211 (38.4%) may be seroconverters. These findings suggest that 339 or 61.7% of repeat testers may not have discovered their serostatus within 1 year of infection. Among this group, HCV testing was sought infrequently, with a median interval of 2.3 years between the last negative and first positive test. This finding is of concern given the risks for HCV transmission, particularly if risk-taking behaviours are not reduced because of unknown serostatus. These findings also reinforce the need to make the most of each test-seeking event with proper counselling and other appropriate support services.

  15. Genetic algorithms for protein threading.

    PubMed

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": Identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
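
    The fixed-length representation described above can be illustrated with a toy genetic algorithm: each individual is a strictly increasing list of structure slots, one per residue, so validity checks and crossover stay cheap. The energy function below is a placeholder for a real threading potential, and all sizes are illustrative:

```python
import random

random.seed(1)
SEQ_LEN, STRUCT_LEN = 8, 12

def random_alignment():
    # Fixed-length encoding: position i holds the structure slot of residue i,
    # strictly increasing so the alignment stays sequential (valid).
    return sorted(random.sample(range(STRUCT_LEN), SEQ_LEN))

def is_valid(a):
    return all(a[i] < a[i + 1] for i in range(len(a) - 1))

def energy(a):
    # Toy stand-in for a threading energy: favour placing residue i at slot i.
    return sum(abs(s - i) for i, s in enumerate(a))

def crossover(p1, p2):
    # One-point crossover that preserves the increasing-slots invariant;
    # falls back to a fresh random individual if no valid child results.
    cut = random.randrange(1, SEQ_LEN)
    child = p1[:cut] + [s for s in p2 if s > p1[cut - 1]][:SEQ_LEN - cut]
    return child if len(child) == SEQ_LEN and is_valid(child) else random_alignment()

pop = [random_alignment() for _ in range(30)]
for _ in range(60):
    pop.sort(key=energy)               # elitist selection: keep the 10 fittest
    pop = pop[:10] + [crossover(random.choice(pop[:10]), random.choice(pop[:10]))
                      for _ in range(20)]
best = min(pop, key=energy)
print(best, energy(best))
```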

  16. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.

    2009-02-18

    The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
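
    The "at least one sample hits contamination, at least X% of the time" criterion corresponds to a standard hot-spot sampling result. A minimal sketch of that generic probability formula (not necessarily VSP's exact algorithm):

```python
import math

def n_samples(confidence, hot_fraction):
    # Smallest n such that P(at least one randomly placed sample lands in the
    # contaminated fraction f) >= C, under unbiased placement:
    #   1 - (1 - f)^n >= C   =>   n >= ln(1 - C) / ln(1 - f)
    return math.ceil(math.log(1 - confidence) / math.log(1 - hot_fraction))

print(n_samples(0.95, 0.01))  # 299 samples to hit a 1% hot spot 95% of the time
print(n_samples(0.95, 0.10))  # 29 samples suffice for a 10% hot spot
```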

  17. Validity and reliability of chronic tic disorder and obsessive-compulsive disorder diagnoses in the Swedish National Patient Register.

    PubMed

    Rück, Christian; Larsson, K Johan; Lind, Kristina; Perez-Vigil, Ana; Isomura, Kayoko; Sariaslan, Amir; Lichtenstein, Paul; Mataix-Cols, David

    2015-06-22

    The usefulness of cases diagnosed in administrative registers for research purposes is dependent on diagnostic validity. This study aimed to investigate the validity and inter-rater reliability of recorded diagnoses of tic disorders and obsessive-compulsive disorder (OCD) in the Swedish National Patient Register (NPR). Chart review of randomly selected register cases and controls. 100 tic disorder cases and 100 OCD cases were randomly selected from the NPR based on codes from the International Classification of Diseases (ICD) 8th, 9th and 10th editions, together with 50 epilepsy and 50 depression control cases. The obtained psychiatric records were blindly assessed by 2 senior psychiatrists according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) and ICD-10. The main outcome was the positive predictive value (PPV; true positives divided by the sum of true positives and false positives). Between 1969 and 2009, the NPR included 7286 tic disorder and 24,757 OCD cases. The vast majority (91.3% of tic cases and 80.1% of OCD cases) are coded with the most recent ICD version (ICD-10). For tic disorders, the PPV was high across all ICD versions (PPV=89% in ICD-8, 86% in ICD-9 and 97% in ICD-10). For OCD, only ICD-10 codes had high validity (PPV=91-96%). None of the epilepsy or depression control cases were wrongly diagnosed as having tic disorders or OCD, respectively. Inter-rater reliability was outstanding for both tic disorders (κ=1) and OCD (κ=0.98). The validity and reliability of ICD codes for tic disorders and OCD in the Swedish NPR are generally high. We propose simple algorithms to further increase the confidence in the validity of these codes for epidemiological research.
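
    The study's two headline statistics are straightforward to compute. A minimal sketch of Cohen's kappa for two raters' binary chart judgements, using toy ratings rather than the study's data:

```python
def cohens_kappa(r1, r2):
    # Inter-rater agreement corrected for chance, for two raters' binary
    # judgements (1 = diagnosis confirmed, 0 = not confirmed).
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    p_yes = (sum(r1) / n) * (sum(r2) / n)               # chance "both say yes"
    p_no = (1 - sum(r1) / n) * (1 - sum(r2) / n)        # chance "both say no"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Perfect agreement gives kappa = 1, as reported for the tic-disorder charts.
print(cohens_kappa([1, 1, 0, 1], [1, 1, 0, 1]))  # 1.0
```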

  18. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess content and criterion validity, as well as reliability, of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. A Pearson correlation between similar scores on the P3 ASAP Exam and the PCOA established criterion validity. Consistent student performance since 2012, measured with the Kuder-Richardson coefficient (KR-20), reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable by using a robust writing-reviewing process and psychometric analyses. PMID:26941435
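
    KR-20, the reliability statistic used here, can be computed directly from a 0/1 item-response matrix. A minimal sketch with toy scores (not exam data):

```python
def kr20(item_matrix):
    # Kuder-Richardson 20 for dichotomous (0/1) item scores:
    # rows = examinees, columns = items.
    n = len(item_matrix)
    k = len(item_matrix[0])
    p = [sum(row[j] for row in item_matrix) / n for j in range(k)]
    pq = sum(pj * (1 - pj) for pj in p)           # sum of item variances
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # total-score variance
    return (k / (k - 1)) * (1 - pq / var)

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(kr20(scores), 3))  # 0.667
```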

  19. Deaths associated with insertion of nasogastric tubes for enteral nutrition in the medical intensive care unit: Clinical and autopsy findings

    PubMed Central

    Smith, Avery L.; Santa Ana, Carol A.; Fordtran, John S.; Guileyardo, Joseph M.

    2018-01-01

    ABSTRACT It is generally assumed that blind insertion of nasogastric tubes for enteral nutrition in patients admitted to medical intensive care units is safe; that is, does not result in life-threatening injury. If death occurs in temporal association with insertion of a nasogastric tube, caregivers typically attribute it to underlying diseases, with little or no consideration of iatrogenic death due to tube insertion. The clinical and autopsy results in three recent cases at Baylor University Medical Center challenge the validity of these notions. PMID:29904295

  20. [Splash basins are contaminated even during operations in a laminar air flow environment].

    PubMed

    Christensen, Mikkel; Sundstrup, Mikkel; Larsen, Helle Raagaard; Olesen, Bente; Ryge, Camilla

    2014-03-03

    Few studies have investigated the potential contamination of splash basins, and they have shown very divergent results: contamination ranging from 2.13% to 74% has been reported. This study set out to examine if splash basins used in a laminar air flow (LAF) environment during elective knee and hip arthroplasty constitute an unnecessary risk. Of the 49 cases sampled, two cultures were positive (4%; 95% confidence interval = 0.49-13.9). We conclude that splash basins do get contaminated even in an LAF environment. Further studies with larger populations are needed to validate our findings.

  1. Maxillary Teeth Abscesses Result in Atypical Liver Abscesses

    PubMed Central

    Gupta, Vritti; Vivekanandan, Renuga; Gorby, Gary

    2018-01-01

    Pyogenic liver abscesses are often misdiagnosed on initial presentation because they are a rare occurrence in the United States. This leads to a delay in proper treatment and results in increased morbidity and mortality. Our case report demonstrates the atypical presentation of a pyogenic liver abscess in the elderly. The source of infection was found to be periapical abscesses of the teeth, which subsequently seeded the portal blood stream of our patient. Our findings validate the potential hazard of Viridans streptococci and illustrate how untreated dental infections can serve as a reservoir for a systemic infection. PMID:29796365

  2. Spectral features of the body fluids of patients with benign and malignant prostate tumours

    NASA Astrophysics Data System (ADS)

    Atif, M.; Devanesan, S.; Farhat, K.; Rabah, D.; AlSalhi, M. S.; Masilamani, V.

    2013-05-01

    In this study, we present the results of fluorescence spectra of blood and urine used to detect and discriminate between samples drawn from benign and malignant prostate tumour patients, and we find very good demarcation in terms of spectral features. This preliminary study was carried out as a proof of concept, with limited samples of blood and urine from known cases of benign prostatic hyperplasia (BPH) and prostate cancer (CaP). In the near future a detailed clinical validation is expected to establish this as a reliable cancer diagnosis protocol.

  3. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical as it directly weighs the forensic evidence, allowing judges to decide on guilt or innocence or to identify a person or kin (e.g. in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from the available guidelines for forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as references. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
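
    In the simplest single-locus situation, the likelihood ratios such programs compute reduce to textbook paternity-index formulas. A minimal sketch of that basic no-mutation, no-silent-allele case (the case both programs were found to handle); the locus frequencies below are illustrative:

```python
def paternity_index(obligate_allele_freq, father_is_homozygous):
    # Single-locus paternity index with one unambiguous obligate paternal
    # allele: X/Y, where X is the alleged father's chance of transmitting it
    # (1 if homozygous, 1/2 if heterozygous) and Y is the population
    # frequency of the allele (a random man's chance of transmitting it).
    x = 1.0 if father_is_homozygous else 0.5
    return x / obligate_allele_freq

# The combined likelihood ratio over independent loci is the product
# of the per-locus indices.
loci = [(0.1, False), (0.2, True), (0.05, False)]  # (allele freq, homozygous?)
combined = 1.0
for freq, homozygous in loci:
    combined *= paternity_index(freq, homozygous)
print(combined)  # 250.0
```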

  4. Brief Report: Independent Validation of Autism Spectrum Disorder Case Status in the Utah Autism and Developmental Disabilities Monitoring (ADDM) Network Site

    ERIC Educational Resources Information Center

    Bakian, Amanda V.; Bilder, Deborah A.; Carbone, Paul S.; Hunt, Tyler D.; Petersen, Brent; Rice, Catherine E.

    2015-01-01

    An independent validation was conducted of the Utah Autism and Developmental Disabilities Monitoring Network's (UT-ADDM) classification of children with autism spectrum disorder (ASD). UT-ADDM final case status (n = 90) was compared with final case status as determined by independent external expert reviewers (EERs). Inter-rater reliability…

  5. Measuring Graduate Students' Teaching and Research Skills through Self-Report: Descriptive Findings and Validity Evidence

    ERIC Educational Resources Information Center

    Gilmore, Joanna; Feldon, David

    2010-01-01

    This study extends research on graduate student development by examining descriptive findings and validity of a self-report survey designed to capture graduate students' assessments of their teaching and research skills. Descriptive findings provide some information about areas of growth among graduate students' in the first years of their…

  6. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. 
This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
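
    The core idea of PICV, preserving a variable's distribution across partitions, is essentially stratified splitting. A minimal sketch of the principle (an illustration, not the authors' implementation):

```python
import random

random.seed(0)

def proportional_split(samples, key, test_frac=0.25):
    # Split so that the distribution of `key` (e.g. an interaction genotype)
    # is preserved in both partitions, unlike a plain random split that can
    # leave rare genotypes unbalanced between training and testing.
    groups = {}
    for s in samples:
        groups.setdefault(key(s), []).append(s)
    train, test = [], []
    for members in groups.values():
        random.shuffle(members)
        cut = round(test_frac * len(members))
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test

# 90% common genotype, 10% rare interaction genotype.
data = [("AA", i) for i in range(90)] + [("aa", i) for i in range(10)]
train, test = proportional_split(data, key=lambda s: s[0])
rare_in_test = sum(1 for genotype, _ in test if genotype == "aa")
print(len(train), len(test), rare_in_test)  # 76 24 2
```

    The rare genotype makes up roughly 10% of both partitions, which is the property a plain random split cannot guarantee.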

  7. Structured learning for robotic surgery utilizing a proficiency score: a pilot study.

    PubMed

    Hung, Andrew J; Bottyan, Thomas; Clifford, Thomas G; Serang, Sarfaraz; Nakhoda, Zein K; Shah, Swar H; Yokoi, Hana; Aron, Monish; Gill, Inderbir S

    2017-01-01

    We evaluated feasibility and benefit of implementing structured learning in a robotics program. Furthermore, we assessed validity of a proficiency assessment tool for stepwise graduation. Teaching cases included robotic radical prostatectomy and partial nephrectomy. Procedure steps were categorized: basic, intermediate, and advanced. An assessment tool ["proficiency score" (PS)] was developed to evaluate ability to safely and autonomously complete a step. Graduation required a passing PS (PS ≥ 3) on three consecutive attempts. PS and validated global evaluative assessment of robotic skills (GEARS) were evaluated for completed steps. Linear regression was utilized to determine postgraduate year/PS relationship (construct validity). Spearman's rank correlation coefficient measured correlation between PS and GEARS evaluations (concurrent validity). Intraclass correlation (ICC) evaluated PS agreement between evaluator classes. Twenty-one robotic trainees participated within the pilot program, completing a median of 14 (2-69) cases each. Twenty-three study evaluators scored 14 (1-60) cases. Over 4 months, 229/294 (78 %) cases were designated "teaching" cases. Residents completed 91 % of possible evaluations; faculty completed 78 %. Verbal and quantitative feedback received by trainees increased significantly (p = 0.002, p < 0.001, respectively). Average PS increased with PGY (post-graduate year) for basic and intermediate steps (regression slopes: 0.402 (p < 0.0001), 0.323 (p < 0.0001), respectively) (construct validation). Overall, PS correlated highly with GEARS (ρ = 0.81, p < 0.0001) (concurrent validity). ICC was 0.77 (95 % CI 0.61-0.88) for resident evaluations. Structured learning can be implemented in an academic robotic program with high levels of trainee and evaluator participation, encouraging both quantitative and verbal feedback. A proficiency assessment tool developed for step-specific proficiency has construct and concurrent validity.

  8. Rapid distortion analysis of high speed homogeneous turbulence subject to periodic shear

    DOE PAGES

    Bertsch, Rebecca L.; Girimaji, Sharath S.

    2015-12-30

    The effect of unsteady shear forcing on small perturbation growth in compressible flow is investigated. In particular, flow-thermodynamic field interaction and the resulting effect on the phase-lag between applied shear and Reynolds stress are examined. Simplified linear analysis of the perturbation pressure equation reveals crucial differences between steady and unsteady shear effects. The analytical findings are validated with numerical simulations of inviscid rapid distortion theory (RDT) equations. In contrast to steadily sheared compressible flows, perturbations in the unsteady (periodic) forcing case do not experience an asymptotic growth phase. Further, the resonance growth phenomenon found in incompressible unsteady shear turbulence is absent in the compressible case. Overall, the stabilizing influence of both unsteadiness and compressibility is compounded leading to suppression of all small perturbations. As a result, the underlying mechanisms are explained.

  9. Rapid distortion analysis of high speed homogeneous turbulence subject to periodic shear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertsch, Rebecca L., E-mail: rlb@lanl.gov; Girimaji, Sharath S., E-mail: girimaji@aero.tamu.edu

    2015-12-15

    The effect of unsteady shear forcing on small perturbation growth in compressible flow is investigated. In particular, flow-thermodynamic field interaction and the resulting effect on the phase-lag between applied shear and Reynolds stress are examined. Simplified linear analysis of the perturbation pressure equation reveals crucial differences between steady and unsteady shear effects. The analytical findings are validated with numerical simulations of inviscid rapid distortion theory (RDT) equations. In contrast to steadily sheared compressible flows, perturbations in the unsteady (periodic) forcing case do not experience an asymptotic growth phase. Further, the resonance growth phenomenon found in incompressible unsteady shear turbulence is absent in the compressible case. Overall, the stabilizing influence of both unsteadiness and compressibility is compounded leading to suppression of all small perturbations. The underlying mechanisms are explained.

  10. The cost-effectiveness of iodine 131 scintigraphy, ultrasonography, and fine-needle aspiration biopsy in the initial diagnosis of solitary thyroid nodules.

    PubMed

    Khalid, Ayesha N; Hollenbeak, Christopher S; Quraishi, Sadeq A; Fan, Chris Y; Stack, Brendan C

    2006-03-01

    To compare the cost-effectiveness of fine-needle aspiration biopsy, iodine 131 scintigraphy, and ultrasonography for the initial diagnostic workup of a solitary palpable thyroid nodule. A deterministic cost-effectiveness analysis was conducted using a decision tree to model the diagnostic strategies. A single, mid-Atlantic academic medical center. Expected costs, expected number of cases correctly diagnosed, and incremental cost per additional case correctly diagnosed. Relative to the routine use of fine-needle aspiration biopsy, the incremental cost per case correctly diagnosed is 24,554 dollars for the iodine 131 scintigraphy strategy and 1212 dollars for the ultrasound strategy. A diagnostic strategy using initial fine-needle aspiration biopsy for palpable thyroid nodules was found to be cost-effective compared with the other approaches as long as a payor's willingness to pay for an additional correct diagnosis is less than 1212 dollars. Prospective studies are needed to validate these findings in clinical practice.
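    The "incremental cost per case correctly diagnosed" quoted above is an incremental cost-effectiveness ratio (ICER). A minimal sketch of that calculation, using invented per-patient cost and effectiveness figures rather than the study's data:

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER) between two
# diagnostic strategies. All numbers below are illustrative placeholders,
# not figures from the study.

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost per additional case correctly diagnosed,
    comparing strategy A against baseline strategy B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Illustrative: strategy A costs $500 more per patient and correctly
# diagnoses 0.05 more cases per patient than the baseline strategy.
print(icer(1500.0, 0.90, 1000.0, 0.85))  # 10000.0
```

A strategy is then preferred over the baseline only when the payor's willingness to pay per additional correct diagnosis exceeds this ratio.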

  11. Identification of cranial nerves around trigeminal schwannomas using diffusion tensor tractography: a technical note and report of 3 cases.

    PubMed

    Wei, Peng-Hu; Qi, Zhi-Gang; Chen, Ge; Li, Ming-Chu; Liang, Jian-Tao; Guo, Hong-Chuan; Bao, Yu-Hai; Hao, Qiang

    2016-03-01

    There are no large series studies identifying the locations of cranial nerves (CNs) around trigeminal schwannomas (TSs); however, surgically induced cranial neuropathies are commonly observed after surgeries to remove TSs. In this study, we preoperatively identified the location of CNs near TSs using diffusion tensor tractography (DTT). An observational study of the DTT results and intraoperative findings was performed. We preoperatively completed tractography from images of patients with TSs who received surgical therapy. The result was later validated during tumorectomy. A total of three consecutive patients were involved in this study. The locations of CNs V-VIII in relation to the tumor were clearly revealed in all cases, except for CN VI in case 3. The predicted fiber tracts were in agreement with intraoperative observations. In this study, preoperative DTT accurately predicted the location of the majority of the nerves of interest. This technique can be applied by surgeons to preoperatively visualize nerve arrangements.

  12. Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.

    PubMed

    Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P

    2018-03-03

    Tools for evaluating the quality and rigor of single-case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on the choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting that the selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, the authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Processed electroencephalogram during donation after cardiac death.

    PubMed

    Auyong, David B; Klein, Stephen M; Gan, Tong J; Roche, Anthony M; Olson, Daiwai; Habib, Ashraf S

    2010-05-01

    We present a case series of increased bispectral index values during donation after cardiac death (DCD). During the DCD process, a patient was monitored with processed electroencephalogram (EEG), which showed considerable changes traditionally associated with lighter planes of anesthesia immediately after withdrawal of care. Subsequently, to validate the findings of this case, processed EEG was recorded during 2 other cases in which care was withdrawn without the use of hypnotic or anesthetic drugs. We found that the changes in processed EEG immediately after withdrawal of care were not only reproducible but could also occur in the absence of major electromyographic or electrocardiographic artifact. It is well documented that processed EEG is prone to artifacts. However, in the setting of DCD, these changes in processed EEG deserve some consideration. If these changes are not due to artifact, dosing of hypnotic or anesthetic drugs might be warranted. Use of these drugs during DCD based primarily on processed EEG values has never been addressed.

  14. RISK SCORE FOR IDENTIFYING ADULTS WITH CSF PLEOCYTOSIS AND NEGATIVE CSF GRAM STAIN AT LOW RISK FOR AN URGENT TREATABLE CAUSE

    PubMed Central

    Hasbun, Rodrigo; Bijlsma, Merijn; Brouwer, Matthijs C; Khoury, Nabil; Hadi, Christiane M; van der Ende, Arie; Wootton, Susan H.; Salazar, Lucrecia; Hossain, Md Monir; Beilke, Mark; van de Beek, Diederik

    2013-01-01

    Background We aimed to derive and validate a risk score that identifies adults with cerebrospinal fluid (CSF) pleocytosis and a negative CSF Gram stain at low risk for an urgent treatable cause. Methods Patients with CSF pleocytosis and a negative CSF Gram stain were stratified into a prospective derivation (n=193) and a retrospective validation (n=567) cohort. Clinically related baseline characteristics were grouped into three composite variables, each independently associated with a set of predefined urgent treatable causes. We subsequently derived a risk score classifying patients into low (0 composite variables present) or high (≥1 composite variables present) risk for an urgent treatable cause. The sensitivity of the risk score was determined in the validation cohort and in a prospective case series of 214 adults with CSF culture-proven bacterial meningitis, CSF pleocytosis and a negative Gram stain. Findings A total of 41 of 193 patients (21%) in the derivation cohort and 71 of 567 (13%) in the validation cohort had an urgent treatable cause. Sensitivity of the dichotomized risk score to detect an urgent treatable cause was 100.0% (95% CI 93.9-100.0%) in the validation cohort and 100.0% (95% CI 97.8-100.0%) in bacterial meningitis patients. Interpretation The risk score can be used to identify adults with CSF pleocytosis and a negative CSF Gram stain at low risk for an urgent treatable cause. PMID:23619080
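    The dichotomized scoring rule described above (low risk only when none of the three composite variables is present, high risk otherwise) reduces to a simple any-of check, and its sensitivity is the fraction of urgent-cause patients classified high risk. A hedged sketch with invented composite-variable data:

```python
# Hedged sketch of the dichotomized risk-score logic: a patient is "high"
# risk if one or more composite variables is present. The example cohort
# below is invented, not the study's data.

def risk_class(composites):
    """composites: iterable of booleans, one per composite variable."""
    return "high" if any(composites) else "low"

def sensitivity(patients_with_urgent_cause):
    """Fraction of patients with an urgent treatable cause who are
    classified as high risk (true positives / all positives)."""
    flagged = sum(1 for c in patients_with_urgent_cause
                  if risk_class(c) == "high")
    return flagged / len(patients_with_urgent_cause)

# Illustrative cohort: every urgent-cause patient has >= 1 composite present,
# so the sensitivity of the dichotomized score is 100%.
cohort = [(True, False, False), (False, True, True), (True, True, False)]
print(sensitivity(cohort))  # 1.0
```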

  15. Genome-wide copy number variation study associates metabotropic glutamate receptor gene networks with attention deficit hyperactivity disorder

    PubMed Central

    Elia, Josephine; Glessner, Joseph T; Wang, Kai; Takahashi, Nagahide; Shtir, Corina J; Hadley, Dexter; Sleiman, Patrick M A; Zhang, Haitao; Kim, Cecilia E; Robison, Reid; Lyon, Gholson J; Flory, James H; Bradfield, Jonathan P; Imielinski, Marcin; Hou, Cuiping; Frackelton, Edward C; Chiavacci, Rosetta M; Sakurai, Takeshi; Rabin, Cara; Middleton, Frank A; Thomas, Kelly A; Garris, Maria; Mentch, Frank; Freitag, Christine M; Steinhausen, Hans-Christoph; Todorov, Alexandre A; Reif, Andreas; Rothenberger, Aribert; Franke, Barbara; Mick, Eric O; Roeyers, Herbert; Buitelaar, Jan; Lesch, Klaus-Peter; Banaschewski, Tobias; Ebstein, Richard P; Mulas, Fernando; Oades, Robert D; Sergeant, Joseph; Sonuga-Barke, Edmund; Renner, Tobias J; Romanos, Marcel; Romanos, Jasmin; Warnke, Andreas; Walitza, Susanne; Meyer, Jobst; Pálmason, Haukur; Seitz, Christiane; Loo, Sandra K; Smalley, Susan L; Biederman, Joseph; Kent, Lindsey; Asherson, Philip; Anney, Richard J L; Gaynor, J William; Shaw, Philip; Devoto, Marcella; White, Peter S; Grant, Struan F A; Buxbaum, Joseph D; Rapoport, Judith L; Williams, Nigel M; Nelson, Stanley F; Faraone, Stephen V; Hakonarson, Hakon

    2014-01-01

    Attention deficit hyperactivity disorder (ADHD) is a common, heritable neuropsychiatric disorder of unknown etiology. We performed a whole-genome copy number variation (CNV) study on 1,013 cases with ADHD and 4,105 healthy children of European ancestry using 550,000 SNPs. We evaluated statistically significant findings in multiple independent cohorts, with a total of 2,493 cases with ADHD and 9,222 controls of European ancestry, using matched platforms. CNVs affecting metabotropic glutamate receptor genes were enriched across all cohorts (P = 2.1 × 10−9). We saw GRM5 (encoding glutamate receptor, metabotropic 5) deletions in ten cases and one control (P = 1.36 × 10−6). We saw GRM7 deletions in six cases, and we saw GRM8 deletions in eight cases and no controls. GRM1 was duplicated in eight cases. We experimentally validated the observed variants using quantitative RT-PCR. A gene network analysis showed that genes interacting with the genes in the GRM family are enriched for CNVs in ~10% of the cases (P = 4.38 × 10−10) after correction for occurrence in the controls. We identified rare recurrent CNVs affecting glutamatergic neurotransmission genes that were overrepresented in multiple ADHD cohorts. PMID:22138692

  16. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
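    The metric-based validation idea above can be reduced to a range check: compute summary statistics for a synthetic case and flag any that fall outside the ranges observed in actual systems. A hedged sketch; the metric names and reference ranges below are invented placeholders, not the paper's metrics:

```python
# Hedged sketch of metric-based validation of a synthetic grid case.
# Metric names and reference ranges are hypothetical illustrations.

REFERENCE_RANGES = {
    "generators_per_bus": (0.2, 0.6),      # invented bounds
    "avg_line_reactance_pu": (0.01, 0.30), # invented bounds
}

def validate_case(metrics, ranges=REFERENCE_RANGES):
    """Return the list of metric names that fall outside the ranges
    observed in reference (actual) power systems."""
    return [name for name, value in metrics.items()
            if not (ranges[name][0] <= value <= ranges[name][1])]

synthetic = {"generators_per_bus": 0.45, "avg_line_reactance_pu": 0.50}
print(validate_case(synthetic))  # ['avg_line_reactance_pu']
```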

  17. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  18. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  19. Design, construction, and technical implementation of a web-based interdisciplinary symptom evaluation (WISE) - a heuristic proposal for orofacial pain and temporomandibular disorders.

    PubMed

    Ettlin, Dominik A; Sommer, Isabelle; Brönnimann, Ben; Maffioletti, Sergio; Scheidt, Jörg; Hou, Mei-Yin; Lukic, Nenad; Steiger, Beat

    2016-12-01

    Medical symptoms independent of body location burden individuals to varying degrees and may require care by more than one expert. Various paper- and computer-based tools exist that aim to comprehensively capture data for optimal clinical management and research. A web-based interdisciplinary symptom evaluation (WISE) was newly designed, constructed, and technically implemented. For worldwide applicability and to avoid copyright infringements, open source software tools and free validated questionnaires available in multiple languages were used. Highly secure data storage limits access strictly to those who use the tool for collecting, storing, and evaluating their data. The concept and implementation are illustrated by a WISE sample tailored for the requirements of a single center in Switzerland providing interdisciplinary care to orofacial pain and temporomandibular disorder patients. By combining a symptom-burden checklist with in-depth questionnaires serving as case-finding instruments, an algorithm was developed that assists in clarifying case complexity and the need for targeted expert evaluation. This novel modular approach provides a personalized, response-tailored instrument for the time- and cost-effective collection of symptom-burden focused quantitative data. The tool includes body drawing options and instructional videos. It is applicable for biopsychosocial evaluation in a variety of clinical settings and offers direct feedback by a case report summary. In clinical practice, the new instrument assists in clarifying case complexity and referral need, based on symptom burden and response-tailored case finding. It provides single-case summary reports from a biopsychosocial perspective and includes graphical symptom maps. Secure, central storage of anonymized data is possible. The tool enables personalized medicine, facilitates interprofessional education and collaboration, and allows for multicenter patient-reported outcomes research.

  20. Fourier domain optical coherence tomographic and auto-fluorescence findings in indeterminate choroidal melanocytic lesions.

    PubMed

    Singh, Arun D; Belfort, Rubens N; Sayanagi, Kaori; Kaiser, Peter K

    2010-04-01

    To compare detection rates of drusen and subretinal fluid by Fourier domain optical coherence tomography (FD OCT) and of orange pigment by fundus autofluorescence (FAF) with ophthalmoscopy in indeterminate choroidal melanocytic lesions. In a consecutive case series of 38 patients with indeterminate choroidal melanocytic lesions that would have been categorised as small tumours according to the size-based nomenclature used in the Collaborative Ocular Melanoma Study, each eye was submitted to ophthalmoscopic examination, FD OCT and FAF. The presence of drusen, subretinal fluid and orange pigment was recorded for each lesion by a single observer at the time of initial ophthalmoscopic evaluation and on fundus photographs. FD OCT and autofluorescence images were reviewed in all cases in a masked fashion. The ophthalmoscopic examination revealed drusen in 42%, subretinal fluid in 53% and orange pigment in 50% of patients. FD OCT detected drusen in 45% and subretinal fluid in 58% of cases, and FAF detected orange pigment in 58% of cases. Based on the McNemar test, none of the differences were statistically significant at the 0.05 level. FD OCT and FAF complement clinical examination by verifying and documenting retinal and RPE changes associated with indeterminate choroidal melanocytic lesions. The detection rates by FD OCT and FAF of important qualitative prognostic factors appear to be equivalent to ophthalmoscopy by a trained observer. Once validated in a larger number of patients, FD OCT and FAF findings can be incorporated into diagnostic algorithms.
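    The McNemar test used above compares paired detection rates (two methods applied to the same eyes) and depends only on the discordant pairs. A minimal sketch of the exact (binomial) form of the test, with illustrative counts rather than the study's data:

```python
# Hedged sketch of an exact McNemar test for paired detection rates,
# e.g. a finding detected by ophthalmoscopy but not FD OCT and vice
# versa. The counts passed in below are invented for illustration.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from the discordant-pair counts:
    b = method 1 positive / method 2 negative, c = the reverse."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # One-sided binomial tail at p=0.5, doubled and capped at 1.
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p)

# Only the 8 discordant pairs drive the test statistic.
print(mcnemar_exact(3, 5))
```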

  1. Contrast enhanced dual energy spectral mammogram, an emerging addendum in breast imaging.

    PubMed

    Kariyappa, Kalpana D; Gnanaprakasam, Francis; Anand, Subhapradha; Krishnaswami, Murali; Ramachandran, Madan

    2016-11-01

    To assess the role of contrast-enhanced dual-energy spectral mammogram (CEDM) as a problem-solving tool in equivocal cases. 44 consenting females with equivocal findings on full-field digital mammogram underwent CEDM. All the images were interpreted by two radiologists independently. Confidence of presence was plotted on a three-point Likert scale and the probability of cancer was assigned using Breast Imaging Reporting and Data System scoring. Histopathology was taken as the gold standard. Statistical analyses of all variables were performed. 44 breast lesions were included in the study, among which 77.3% of lesions were malignant or precancerous and 22.7% of lesions were benign or inconclusive. 20% of lesions were identified only on CEDM. The true extent of the lesion was demonstrated in 15.9% of cases, multifocality was established in 9.1% of cases and ductal extension was demonstrated in 6.8% of cases. The findings for CEDM were statistically significant (p-value <0.05). The interobserver kappa value was 0.837. CEDM has a useful role in identifying occult lesions in dense breasts and in triaging lesions. In a mammographically visible lesion, CEDM characterizes the lesion, affirms the finding and better demonstrates response to treatment. Hence, we conclude that CEDM is a useful complementary tool to the standard mammogram. Advances in knowledge: CEDM can detect and demonstrate lesions even in dense breasts with the advantage of feasibility of stereotactic biopsy in the same setting. Hence, it has the potential to be a screening modality, with the need for further studies and validation.
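    The interobserver kappa of 0.837 reported above is Cohen's kappa, which corrects the observed agreement between the two readers for the agreement expected by chance. A hedged sketch with invented rating lists, not the study's reads:

```python
# Hedged sketch of Cohen's kappa for interobserver agreement between two
# raters; the rating lists below are invented toy data.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from the two raters' marginal category frequencies.
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)

r1 = [1, 2, 3, 1, 2, 3, 1, 2]
r2 = [1, 2, 3, 1, 2, 2, 1, 3]
print(round(cohens_kappa(r1, r2), 3))  # 0.619
```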

  2. Comparisons of serum miRNA expression profiles in patients with diabetic retinopathy and type 2 diabetes mellitus.

    PubMed

    Ma, Jianping; Wang, Jufang; Liu, Yanfen; Wang, Changyi; Duan, Donghui; Lu, Nanjia; Wang, Kaiyue; Zhang, Lu; Gu, Kaibo; Chen, Sihan; Zhang, Tao; You, Dingyun; Han, Liyuan

    2017-02-01

    The aim of this study was to compare the expression levels of serum miRNAs in diabetic retinopathy and type 2 diabetes mellitus. Serum miRNA expression profiles from diabetic retinopathy cases (type 2 diabetes mellitus patients with diabetic retinopathy) and type 2 diabetes mellitus controls (type 2 diabetes mellitus patients without diabetic retinopathy) were examined by miRNA-specific microarray analysis. Quantitative real-time polymerase chain reaction was used to validate the significantly differentially expressed serum miRNAs from the microarray analysis of 45 diabetic retinopathy cases and 45 age-, sex-, body mass index- and duration-of-diabetes-matched type 2 diabetes mellitus controls. The relative changes in serum miRNA expression levels were analyzed using the 2^-ΔΔCt method. A total of 5 diabetic retinopathy cases and 5 type 2 diabetes mellitus controls were included in the miRNA-specific microarray analysis. The serum levels of miR-3939 and miR-1910-3p differed significantly between the two groups in the screening stage; however, quantitative real-time polymerase chain reaction did not reveal significant differences in miRNA expression for the 45 diabetic retinopathy cases and their matched type 2 diabetes mellitus controls. Our findings indicate that miR-3939 and miR-1910-3p may not play important roles in the development of diabetic retinopathy; however, studies with a larger sample size are needed to confirm our findings.
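    The 2^-ΔΔCt method named above converts qPCR cycle-threshold (Ct) values into a relative fold change by normalizing the target miRNA to a reference and then comparing cases with controls. A minimal sketch; the Ct values are illustrative, not the study's measurements:

```python
# Hedged sketch of the 2^-ΔΔCt relative-quantification calculation.
# 'target' is the miRNA of interest, 'ref' an endogenous reference
# (a stable control RNA); all Ct values below are invented.

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    delta_case = ct_target_case - ct_ref_case   # ΔCt in cases
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt in controls
    ddct = delta_case - delta_ctrl              # ΔΔCt
    return 2 ** (-ddct)                         # relative expression

# Crossing threshold one cycle earlier in cases (relative to controls)
# corresponds to roughly 2-fold higher expression.
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```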

  3. A method to determine the mammographic regions that show early changes due to the development of breast cancer

    NASA Astrophysics Data System (ADS)

    Karemore, Gopal; Nielsen, Mads; Karssemeijer, Nico; Brandt, Sami S.

    2014-11-01

    It is well understood nowadays that changes in the mammographic parenchymal pattern are an indicator of a risk of breast cancer, and we have developed a statistical method that estimates the mammogram regions where the parenchymal changes due to breast cancer occur. This region of interest is computed from a score map by utilising the anatomical breast coordinate system developed in our previous work. The method also makes an automatic scale selection to avoid overfitting, while the region estimates are computed by a nested cross-validation scheme. In this way, it is possible to recover those mammogram regions that show a significant difference in classification scores between the cancer and the control group. Our experiments suggested that the most significant mammogram region is the region behind the nipple, a finding that is consistent with previous results from other research groups. This result was obtained from cross-validation experiments on independent training, validation and testing sets from the case-control study of 490 women, of which 245 women were diagnosed with breast cancer within a period of 2-4 years after the baseline mammograms. We additionally generalised the estimated region to another study (mini-MIAS) and showed that the transferred region estimate gives at least a similar classification result when compared to the case where the whole breast region is used. In all, by following our method, one most likely improves both preclinical and follow-up breast cancer screening, but a larger study population will be required to test this hypothesis.

  4. General antibiotic exposure is associated with increased risk of developing chronic rhinosinusitis.

    PubMed

    Maxfield, Alice Z; Korkmaz, Hakan; Gregorio, Luciano L; Busaba, Nicolas Y; Gray, Stacey T; Holbrook, Eric H; Guo, Rong; Bleier, Benjamin S

    2017-02-01

    Antibiotic use and chronic rhinosinusitis (CRS) have been independently associated with microbiome diversity depletion and opportunistic infections. This study was undertaken to investigate whether antibiotic use may be an unrecognized risk factor for developing CRS. Case-control study of 1,162 patients referred to a tertiary sinus center for a range of sinonasal disorders. Patients diagnosed with CRS according to established consensus criteria (n = 410) were assigned to the case group (273 without nasal polyps [CRSsNP], 137 with nasal polyps [CRSwNP]). Patients with all other diagnoses (n = 752) were assigned to the control group. Chronic rhinosinusitis disease severity was determined using a validated quality of life (QOL) instrument. The class, diagnosis, and timing of previous nonsinusitis-related antibiotic exposures were recorded. Results were validated using a randomized administrative data review of 452 (38.9%) patient charts. The odds ratio of developing CRS following antibiotic exposure was calculated, as well as the impact of antibiotic use on the subsequent QOL. Antibiotic use significantly increased the odds of developing CRSsNP (odds ratio: 2.21, 95% confidence interval, 1.66-2.93, P < 0.0001) as compared to nonusers. Antibiotic exposure was significantly associated with worse CRS QOL scores (P = 0.0009) over at least the subsequent 2 years. These findings were confirmed by the administrative data review. Use of antibiotics more than doubles the odds of developing CRSsNP and is associated with a worse QOL for at least 2 years following exposure. These findings expose an unrecognized and concerning consequence of general antibiotic use. 3b. Laryngoscope, 2016 127:296-302, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
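    The odds ratio with 95% confidence interval reported above is the standard case-control measure, computed from a 2x2 exposure table. A hedged sketch using the Woolf (log-normal) interval; the cell counts are invented, not the study's data:

```python
# Hedged sketch: odds ratio and Woolf 95% CI from a 2x2 table.
# Cell counts below are invented illustrations, not study data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    orr = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

orr, lo, hi = odds_ratio_ci(120, 290, 130, 622)
print(round(orr, 2), round(lo, 2), round(hi, 2))
```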

  5. Investigation of dietary factors and endometrial cancer risk using a nutrient-wide association study approach in the EPIC and Nurses' Health Study (NHS) and NHSII.

    PubMed

    Merritt, Melissa A; Tzoulaki, Ioanna; Tworoger, Shelley S; De Vivo, Immaculata; Hankinson, Susan E; Fernandes, Judy; Tsilidis, Konstantinos K; Weiderpass, Elisabete; Tjønneland, Anne; Petersen, Kristina E N; Dahm, Christina C; Overvad, Kim; Dossus, Laure; Boutron-Ruault, Marie-Christine; Fagherazzi, Guy; Fortner, Renée T; Kaaks, Rudolf; Aleksandrova, Krasimira; Boeing, Heiner; Trichopoulou, Antonia; Bamia, Christina; Trichopoulos, Dimitrios; Palli, Domenico; Grioni, Sara; Tumino, Rosario; Sacerdote, Carlotta; Mattiello, Amalia; Bueno-de-Mesquita, H Bas; Onland-Moret, N Charlotte; Peeters, Petra H; Gram, Inger T; Skeie, Guri; Quirós, J Ramón; Duell, Eric J; Sánchez, María-José; Salmerón, D; Barricarte, Aurelio; Chamosa, Saioa; Ericson, Ulrica; Sonestedt, Emily; Nilsson, Lena Maria; Idahl, Annika; Khaw, Kay-Tee; Wareham, Nicholas; Travis, Ruth C; Rinaldi, Sabina; Romieu, Isabelle; Patel, Chirag J; Riboli, Elio; Gunter, Marc J

    2015-02-01

    Data on the role of dietary factors in endometrial cancer development are limited and inconsistent. We applied a "nutrient-wide association study" approach to systematically evaluate dietary risk associations for endometrial cancer while controlling for multiple hypothesis tests using the false discovery rate (FDR) and validating the results in an independent cohort. We evaluated endometrial cancer risk associations for dietary intake of 84 foods and nutrients based on dietary questionnaires in three prospective studies, the European Prospective Investigation into Cancer and Nutrition (EPIC; N = 1,303 cases) followed by validation of nine foods/nutrients (FDR ≤ 0.10) in the Nurses' Health Studies (NHS/NHSII; N = 1,531 cases). Cox regression models were used to estimate HRs and 95% confidence intervals (CI). In multivariate adjusted comparisons of the extreme categories of intake at baseline, coffee was inversely associated with endometrial cancer risk (EPIC, median intake 750 g/day vs. 8.6; HR, 0.81; 95% CI, 0.68-0.97, Ptrend = 0.09; NHS/NHSII, median intake 1067 g/day vs. none; HR, 0.82; 95% CI, 0.70-0.96, Ptrend = 0.04). Eight other dietary factors that were associated with endometrial cancer risk in the EPIC study (total fat, monounsaturated fat, carbohydrates, phosphorus, butter, yogurt, cheese, and potatoes) were not confirmed in the NHS/NHSII. Our findings suggest that coffee intake may be inversely associated with endometrial cancer risk. Further data are needed to confirm these findings and to examine the mechanisms linking coffee intake to endometrial cancer risk to develop improved prevention strategies. ©2015 American Association for Cancer Research.
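    The FDR control mentioned above (hits at FDR ≤ 0.10 taken forward to validation) is commonly implemented with the Benjamini-Hochberg step-up procedure; the abstract does not name the specific procedure, so this is a hedged sketch with invented p-values:

```python
# Hedged sketch of Benjamini-Hochberg FDR control over many dietary
# association tests. P-values below are invented for illustration.

def bh_reject(pvals, q=0.10):
    """Return the set of indices rejected at FDR level q
    (Benjamini-Hochberg step-up)."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k = 0
    # Find the largest rank whose p-value clears its step-up threshold.
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return set(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.60, 0.74]
print(sorted(bh_reject(pvals, q=0.10)))  # [0, 1, 2, 3]
```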

  6. Evidence based study of antidiabetic potential of C. maxima seeds - In vivo.

    PubMed

    Kushawaha, Devesh Kumar; Yadav, Manjulika; Chatterji, Sanjukta; Srivastava, Amrita Kumari; Watal, Geeta

    2017-10-01

    In vitro antidiabetic efficacy of Cucurbita maxima seed extract (CMSE) has already been demonstrated in our previous findings. Thus, in order to validate these findings in a biological system, the in vivo antidiabetic activity of the aqueous extract was investigated in normal as well as diabetic experimental models. Variable doses of extract were administered orally to normal and STZ-induced mild diabetic rats during fasting blood glucose (FBG) and glucose tolerance test (GTT) studies. In order to determine the extract's antidiabetic potential, long-term FBG and post-prandial glucose (PPG) studies were also carried out. The most effective dose of 200 mg kg-1 of CMSE decreased the blood glucose level (BGL) in normal rats by 29.02% at 6 h during FBG studies and by 23.23% at 3 h during GTT. However, the maximum reduction observed in the BGL of mild diabetic rats during GTT at the same interval of time was 26.15%. Moreover, in the case of severely diabetic rats a significant reduction of 39.33% was observed in FBG levels, whereas in the case of the positive control (rats treated with 2.5 mg kg-1 of glipizide) a fall of 42.9% in FBG levels was observed after 28 days. Results of PPG levels also showed a fall of 33.20% in severely diabetic rats, as compared to the positive control showing a fall of 44.2%, at the end of the 28 days. Thus, the present study validates the hypoglycemic and antidiabetic effect of CMSE, and hence this extract could be explored further for development as a novel antidiabetic agent.

  7. Differentiating benign from malignant mediastinal lymph nodes visible at EBUS using grey-scale textural analysis.

    PubMed

    Edey, Anthony J; Pollentine, Adrian; Doody, Claire; Medford, Andrew R L

    2015-04-01

    Recent data suggest that grey-scale textural analysis on endobronchial ultrasound (EBUS) imaging can differentiate benign from malignant lymphadenopathy. The objective of this study was to evaluate grey-scale textural analysis and examine its clinical utility. Images from 135 consecutive clinically indicated EBUS procedures were evaluated retrospectively using MATLAB software (MathWorks, Natick, MA, USA). Manual node mapping was performed to obtain a region of interest, and grey-scale textural features (range of pixel values and entropy) were analysed. The initial analysis involved 94 subjects and receiver operating characteristic (ROC) curves were generated. The ROC thresholds were then applied to a second cohort (41 subjects) to validate the earlier findings. A total of 371 images were evaluated. There was no difference in the proportions of malignant disease (56% vs 53%, P = 0.66) between the prediction (group 1) and validation (group 2) sets. There was no difference in the range of pixel values in group 1, but entropy was significantly higher in the malignant group (5.95 vs 5.77, P = 0.03). Higher entropy was seen in adenocarcinoma versus lymphoma (6.00 vs 5.50, P < 0.05). An ROC curve for entropy gave an area under the curve of 0.58, with 51% sensitivity and 71% specificity for entropy greater than 5.94 for malignancy. In group 2, the entropy threshold phenotyped only 47% of benign cases and 20% of malignant cases correctly. These findings suggest that the use of EBUS grey-scale textural analysis to differentiate malignant from benign lymphadenopathy may not be accurate. Further studies are required. © 2015 Asian Pacific Society of Respirology.
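    The "entropy" texture feature analysed above is the Shannon entropy of the grey-scale value distribution within the region of interest; the "range" feature is simply max minus min pixel value. A minimal sketch of the entropy computation on synthetic pixel data (not EBUS images):

```python
# Hedged sketch of the grey-scale entropy texture feature: Shannon
# entropy of the pixel-intensity histogram inside a region of interest.
# The pixel lists below are synthetic toy data.
import math
from collections import Counter

def grey_entropy(pixels):
    """Shannon entropy (bits) of a flat sequence of grey-scale values."""
    n = len(pixels)
    probs = [count / n for count in Counter(pixels).values()]
    return sum(-p * math.log2(p) for p in probs)

uniform = [0, 64, 128, 192] * 16  # four equally likely levels -> 2 bits
flat = [128] * 64                 # a single level -> 0 bits
print(grey_entropy(uniform), grey_entropy(flat))  # 2.0 0.0
```

Heterogeneous (e.g. malignant) texture spreads intensity over more levels and so yields higher entropy, which is the intuition behind the feature.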

  8. Risks and benefits of hormone therapy: has medical dogma now been overturned?

    PubMed

    Shapiro, S; de Villiers, T J; Pines, A; Sturdee, D W; Baber, R J; Panay, N; Stevenson, J C; Mueck, A O; Burger, H G

    2014-06-01

    In an integrated overview of the benefits and risks of menopausal hormone therapy (HT), the Women's Health Initiative (WHI) investigators have claimed that their 'findings … do not support use of this therapy for chronic disease prevention'. In an accompanying editorial, it was claimed that 'the WHI overturned medical dogma regarding menopausal [HT]'. To evaluate those claims. Epidemiological criteria of causation were applied to the evidence. A 'global index' purporting to summarize the overall benefit versus the risk of HT was not valid, and it was biased. For coronary heart disease, an increased risk in users of estrogen plus progestogen (E + P), previously reported by the WHI, was not confirmed. The WHI study did not establish that E + P increases the risk of breast cancer; the findings suggest that unopposed estrogen therapy (ET) does not increase the risk, and may even reduce it. The findings for stroke and pulmonary embolism were compatible with an increased risk, and among E + P users there were credible reductions in the risk of colorectal and endometrial cancer. For E + P and ET users, there were credible reductions in the risk of hip fracture. Under 'worst case' and 'best case' assumptions, the changes in the incidence of the outcomes attributable to HT were minor. Over-interpretation and misrepresentation of the WHI findings have damaged the health and well-being of menopausal women by convincing them and their health professionals that the risks of HT outweigh the benefits.

  9. Virtopsy - the concept of a centralized database in forensic medicine for analysis and comparison of radiological and autopsy data.

    PubMed

    Aghayev, Emin; Staub, Lukas; Dirnhofer, Richard; Ambrose, Tony; Jackowski, Christian; Yen, Kathrin; Bolliger, Stephan; Christe, Andreas; Roeder, Christoph; Aebi, Max; Thali, Michael J

    2008-04-01

    Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties were experienced in the validation and analysis of the acquired data. To address this problem, and to allow comparison of autopsy and radiological data, a centralized internet-based database for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI in forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging concerning certain forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.

  10. SFO-Project: The New Generation of Sharable, Editable and Open-Access CFD Tutorials

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Javaherchi, Ardeshir; Aliseda, Alberto

    2016-11-01

    One of the most common approaches to developing a Computational Fluid Dynamics (CFD) simulation for a new case study of interest is to search for the most similar, previously developed and validated CFD simulation among other works. A simple search would result in a pool of written/visual tutorials. However, users must spend a significant amount of time and effort to find the most correct, compatible and valid tutorial in this pool and further modify it toward their simulation of interest. SFO is an open-source project with the core idea of saving the above-mentioned time and effort. This is done by documenting and sharing scientific and methodological approaches to developing CFD simulations for a wide spectrum of fundamental and industrial case studies in three different CFD solvers: STAR-CCM+, FLUENT and OpenFOAM (SFO). All of the steps and required files of these tutorials are accessible and editable under the common roof of GitHub (a web-based Git repository hosting service). In this presentation we will describe the current library of 20+ developed CFD tutorials, discuss the idea and benefit of using them and their educational value, and explain how the next generation of open-access, live CFD tutorial resources can be built hand-in-hand within our community.

  11. Robust diagnosis of non-Hodgkin lymphoma phenotypes validated on gene expression data from different laboratories.

    PubMed

    Bhanot, Gyan; Alexe, Gabriela; Levine, Arnold J; Stolovitzky, Gustavo

    2005-01-01

    A major challenge in cancer diagnosis from microarray data is the need for robust, accurate classification models which are independent of the analysis techniques used and can combine data from different laboratories. We propose such a classification scheme originally developed for phenotype identification from mass spectrometry data. The method uses a robust multivariate gene selection procedure and combines the results of several machine learning tools trained on raw and pattern data to produce an accurate meta-classifier. We illustrate and validate our method by applying it to gene expression datasets: the oligonucleotide HuGeneFL microarray dataset of Shipp et al. (www.genome.wi.mit.du/MPR/lymphoma) and the Hu95Av2 Affymetrix dataset (DallaFavera's laboratory, Columbia University). Our pattern-based meta-classification technique achieves higher predictive accuracies than each of the individual classifiers, is robust against data perturbations and provides subsets of related predictive genes. Our techniques predict that combinations of some genes in the p53 pathway are highly predictive of phenotype. In particular, we find that in 80% of DLBCL cases the mRNA level of at least one of the three genes p53, PLK1 and CDK2 is elevated, while in 80% of FL cases, the mRNA level of at most one of them is elevated.
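
    The meta-classifier above combines the outputs of several machine learning tools trained on the same data. The simplest combination rule of this kind is a majority vote over the base classifiers' predicted labels; the sketch below shows that generic rule, not necessarily the authors' exact scheme:

```python
from collections import Counter

def majority_vote(labels):
    """Meta-classification by majority vote: return the phenotype
    label predicted by the largest number of base classifiers."""
    return Counter(labels).most_common(1)[0][0]
```

    For example, if three base classifiers label a sample DLBCL, FL and DLBCL, the meta-classifier outputs DLBCL.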

  12. Application of chemical reaction mechanistic domains to an ecotoxicity QSAR model, the KAshinhou Tool for Ecotoxicity (KATE).

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y; Yoshioka, Y; Shiraishi, H

    2011-01-01

    The validity of chemical reaction mechanistic domains defined by skin sensitisation in the Quantitative Structure-Activity Relationship (QSAR) ecotoxicity system, KAshinhou Tool for Ecotoxicity (KATE), March 2009 version, has been assessed and an external validation of the current KATE system carried out. In the case of the fish end-point, the group of chemicals with substructures reactive to skin sensitisation always exhibited higher root mean square errors (RMSEs) than chemicals without reactive substructures under identical C- or log P-judgements in KATE. However, in the case of the Daphnia end-point this was not so, and the group of chemicals with reactive substructures did not always have higher RMSEs: the Schiff base mechanism did not function as a high error detector. In addition to the RMSE findings, the presence of outliers suggested that the KATE classification rules need to be reconsidered, particularly for the amine group. Examination of how the toxic action of chemicals depends on the organism, in fish and Daphnia, revealed that some of the reactive substructures could be applied to the improvement of the KATE system. It was concluded that the reaction mechanistic domains of toxic action for skin sensitisation could provide useful complementary information in predicting acute aquatic ecotoxicity, especially at the fish end-point.

  13. Use of postmortem computed tomography to retrieve small metal fragments derived from a weapon in the bodies of victims in two homicide cases.

    PubMed

    Sano, Rie; Takahashi, Yoichiro; Hayakawa, Akira; Murayama, Masayuki; Kubo, Rieko; Hirasawa, Satoshi; Tokue, Hiroyuki; Shimada, Takehiro; Awata, Sachiko; Takei, Hiroyuki; Yuasa, Masahiro; Uetake, Shinji; Akuzawa, Hisashi; Kominato, Yoshihiko

    2018-05-01

    Postmortem computed tomography (PMCT) is becoming a commonly used modality in routine forensic investigation. Mechanical injuries including lacerations, incisions, stab wounds and gunshot wounds frequently contain foreign bodies that may have significant value as clues in criminal investigations. CT is a sensitive modality for detection of metal foreign bodies that may be associated with injuries to the victim in cases of homicide or traffic accidents. Here we report two cases in which PMCT was able to act as a guide to forensic pathologists for retrieval of metal fragments in the corpses of the victims, the retrieved fragments then being used to validate the confessions of the assailants through comparison with the knife and the crowbar, respectively, that had been used in the crimes. In these cases, the small metal fragments retrieved from the corpses of the victims with the aid of PMCT were decisive pieces of evidence confirming the circumstances of the crimes. These cases illustrate how PMCT can be used to complement the findings of classical autopsy for integrative investigation of corpses with injury. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Measuring child personality when child personality was not measured: Application of a thin-slice approach.

    PubMed

    Tackett, Jennifer L; Smack, Avanté J; Herzhoff, Kathrin; Reardon, Kathleen W; Daoud, Stephanie; Granic, Isabela

    2017-02-01

    Recent efforts have demonstrated that thin-slice (TS) assessment-or assessment of individual characteristics after only brief exposure to that individual's behaviour-can produce reliable and valid measurements of child personality traits. The extent to which this approach can be generalized to archival data not designed to measure personality, and whether it can be used to measure personality pathology traits in youth, is not yet known. Archival video data of a parent-child interaction task was collected as part of a clinical intervention trial for aggressive children (N = 177). Unacquainted observers independently watched the clips and rated children on normal-range (neuroticism, extraversion, agreeableness, conscientiousness and openness to experience) and pathological (callous-unemotional) personality traits. TS ratings of child personality showed strong internal consistency, valid associations with measures of externalizing problems and temperament, and revealed differentiated subgroups of children based on severity. As such, these findings demonstrate an ecologically valid application of TS methodology and illustrate how researchers and clinicians can extend their existing data by measuring child personality using TS methodology, even in cases where child personality was not originally measured. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Anonymization of electronic medical records for validating genome-wide association studies

    PubMed Central

    Loukides, Grigorios; Gkoulalas-Divanis, Aris; Malin, Bradley

    2010-01-01

    Genome-wide association studies (GWAS) facilitate the discovery of genotype–phenotype relations from population-based sequence databases, which is an integral facet of personalized medicine. The increasing adoption of electronic medical records allows large amounts of patients’ standardized clinical features to be combined with the genomic sequences of these patients and shared to support validation of GWAS findings and to enable novel discoveries. However, disseminating these data “as is” may lead to patient reidentification when genomic sequences are linked to resources that contain the corresponding patients’ identity information based on standardized clinical features. This work proposes an approach that provably prevents this type of data linkage and furnishes a result that helps support GWAS. Our approach automatically extracts potentially linkable clinical features and modifies them so that they can no longer be used to link a genomic sequence to a small number of patients, while preserving the associations between genomic sequences and specific sets of clinical features corresponding to GWAS-related diseases. Extensive experiments with real patient data derived from the Vanderbilt University Medical Center verify that our approach generates data that eliminate the threat of individual reidentification, while supporting GWAS validation and clinical case analysis tasks. PMID:20385806

  16. Description of a Website Resource for Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.

    2010-01-01

    The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.

  17. Minimally Invasive Surgery in Pediatric Trauma: One Institution's 20-Year Experience

    PubMed Central

    Xu, Min Li; Lopez, Joseph

    2016-01-01

    Background: Minimally invasive surgery (MIS) for trauma in pediatric cases remains controversial. Recent studies have shown the validity of using minimally invasive techniques to decrease the rate of negative and nontherapeutic laparotomy and thoracotomy. The purpose of this study was to evaluate the diagnostic accuracy and therapeutic options of MIS in pediatric trauma at a level I pediatric trauma center. Methods: We reviewed cases of patients aged 15 years and younger who had undergone laparoscopy or thoracoscopy for trauma in our institution over the past 20 years. Each case was evaluated for mechanism of injury, computed tomographic (CT) scan findings, operative management, and patient outcomes. Results: There were 23 patients in the study (16 boys and 7 girls). Twenty-one had undergone diagnostic laparoscopy and 2 had undergone diagnostic thoracoscopy. Sixteen had positive findings on diagnostic laparoscopy. Laparoscopic therapeutic interventions were performed in 6 patients; the remaining 10 required conversion to laparotomy. Both patients who underwent diagnostic thoracoscopy had positive findings. One had a thoracoscopic repair, and the other underwent conversion to thoracotomy. There were 5 negative diagnostic laparoscopies. There was no mortality among the 23 patients. Conclusions: The use of laparoscopy and thoracoscopy in pediatric trauma helps to reduce unnecessary laparotomy and thoracotomy. Some injuries can be repaired by a minimally invasive approach. When conversion is necessary, the use of these techniques can guide the placement and size of surgical incisions. The goal is to shift the paradigm in favor of using MIS in the treatment of pediatric trauma as the first-choice modality in stable patients. PMID:26877626

  18. On the validity of the Arrhenius equation for electron attachment rate coefficients.

    PubMed

    Fabrikant, Ilya I; Hotop, Hartmut

    2008-03-28

    The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
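
    The functional form whose range of validity is at issue is the Arrhenius expression for the rate coefficient as a function of temperature:

```latex
k(T) = A \exp\!\left(-\frac{E_a}{k_B T}\right),
\qquad
\ln k(T) = \ln A - \frac{E_a}{k_B}\,\frac{1}{T}
```

    so the activation energy $E_a$ is read off the slope of $\ln k$ plotted against $1/T$. The abstract's finding can be stated in these terms: for endothermic attachment $E_a$ is close to the threshold energy, while for exothermic attachment over an intermediate barrier $E_a$ falls substantially below the barrier height.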

  19. Monitoring the Snowpack in Remote, Ungauged Mountains

    NASA Astrophysics Data System (ADS)

    Dozier, J.; Davis, R. E.; Bair, N.; Rittger, K. E.

    2013-12-01

    Our objective is to estimate seasonal snow volumes, relative to historical trends and extremes, in snow-dominated mountains that have austere infrastructure, sparse gauging, challenges of accessibility, and emerging or enduring insecurity related to water resources. The world's mountains accumulate substantial snow and, in some areas, produce the bulk of the runoff. In ranges like Afghanistan's Hindu Kush, availability of water resources affects US policy, military and humanitarian operations, and national security. The rugged terrain makes surface measurements difficult and also affects the analysis of remotely sensed data. To judge feasibility, we consider two regions, a validation case and a case representing inaccessible mountains. For the validation case, we use the Sierra Nevada of California, a mountain range of extensive historical study, emerging scientific innovation, and conflicting priorities in managing water for agriculture, urban areas, hydropower, recreation, habitat, and flood control. For the austere regional focus, we use the Hindu Kush, where some of the most persistent drought in the world causes food insecurity and combines with political instability and occasional flooding. Our approach uses a mix of satellite data and spare modeling to present information essential for planning and decision making, ranging from optimization of proposed infrastructure projects to assessment of water resources stored as snow for seasonal forecasts. We combine optical imagery (MODIS on Terra/Aqua), passive microwave data (SSM/I and AMSR-E), retrospective reconstruction with energy balance calculations, and a snowmelt model to establish the retrospective context. With the passive microwave data we bracket the historical range in snow cover volume. The rank order of total retrieved volume correlates with the reconstructions.
From a library of historical reconstruction, we find similar cases that provide insights about snow cover distribution at a finer scale than the passive retrievals. Specifically, we examine the decade-long record from Terra and Aqua to bracket the historical record. In the California Sierra Nevada, surface measurements have sufficient spatial and temporal resolution for us to validate our approach, whereas in the Hindu Kush surface data are sparse and access presents significant difficulties.

  20. Genome-wide methylation profiling identifies an essential role of reactive oxygen species in pediatric glioblastoma multiforme and validates a methylome specific for H3 histone family 3A with absence of G-CIMP/isocitrate dehydrogenase 1 mutation

    PubMed Central

    Jha, Prerana; Pia Patric, Irene Rosita; Shukla, Sudhanshu; Pathak, Pankaj; Pal, Jagriti; Sharma, Vikas; Thinagararanjan, Sivaarumugam; Santosh, Vani; Suri, Vaishali; Sharma, Mehar Chand; Arivazhagan, Arimappamagan; Suri, Ashish; Gupta, Deepak; Somasundaram, Kumaravel; Sarkar, Chitra

    2014-01-01

    Background Pediatric glioblastoma multiforme (GBM) is rare, and only a single study, a seminal discovery, has shown an association of histone H3.3 and isocitrate dehydrogenase (IDH)1 mutations with a DNA methylation signature. The present study aims to validate these findings in an independent cohort of pediatric GBM, compare it with adult GBM, and evaluate the involvement of important functionally altered pathways. Methods Genome-wide methylation profiling of 21 pediatric GBM cases was done and compared with adult GBM data (GSE22867). We performed gene mutation analysis of IDH1 and H3 histone family 3A (H3F3A), status evaluation of glioma cytosine–phosphate–guanine island methylator phenotype (G-CIMP), and Gene Ontology analysis. Experimental evaluation of reactive oxygen species (ROS) association was also done. Results Distinct differences were noted between the methylomes of pediatric and adult GBM. Pediatric GBM was characterized by 94 hypermethylated and 1206 hypomethylated cytosine–phosphate–guanine (CpG) islands, with 3 distinct clusters showing a trend toward prognostic correlation. Interestingly, none of the pediatric GBM cases showed G-CIMP/IDH1 mutation. Gene Ontology analysis identified ROS association in pediatric GBM, which was experimentally validated. H3F3A mutants (36.4%; all K27M) harbored distinct methylomes and showed enrichment of processes related to neuronal development, differentiation, and cell-fate commitment. Conclusions Our study confirms that pediatric GBM has a distinct methylome compared with that of adults. Presence of distinct clusters and an H3F3A mutation–specific methylome indicate existence of epigenetic subgroups within pediatric GBM. Absence of IDH1/G-CIMP status further indicates that findings in adult GBM cannot be simply extrapolated to pediatric GBM and that there is a strong need for identification of separate prognostic markers.
A possible role of ROS in pediatric GBM pathogenesis is demonstrated for the first time and needs further evaluation. PMID:24997139

  1. The reliability of diagnostic coding and laboratory data to identify tuberculosis and nontuberculous mycobacterial disease among rheumatoid arthritis patients using anti-tumor necrosis factor therapy.

    PubMed

    Winthrop, Kevin L; Baxter, Roger; Liu, Liyan; McFarland, Bentson; Austin, Donald; Varley, Cara; Radcliffe, LeAnn; Suhler, Eric; Choi, Dongsoek; Herrinton, Lisa J

    2011-03-01

    Anti-tumor necrosis factor-alpha (anti-TNF) therapies are associated with severe mycobacterial infections in rheumatoid arthritis patients. We developed and validated electronic record search algorithms for these serious infections. The study used electronic clinical, microbiologic, and pharmacy records from Kaiser Permanente Northern California (KPNC) and the Portland Veterans Affairs Medical Center (PVAMC). We identified suspect tuberculosis and nontuberculous mycobacteria (NTM) cases using inpatient and outpatient diagnostic codes, culture results, and anti-tuberculous medication dispensing. We manually reviewed records to validate our case-finding algorithms. We identified 64 tuberculosis and 367 NTM potential cases, respectively. For tuberculosis, diagnostic code positive predictive value (PPV) was 54% at KPNC and 9% at PVAMC. Adding medication dispensings improved these to 87% and 46%, respectively. Positive tuberculosis cultures had a PPV of 100% with sensitivities of 79% (KPNC) and 55% (PVAMC). For NTM, the PPV of diagnostic codes was 91% (KPNC) and 76% (PVAMC). At KPNC, ≥ 1 positive NTM culture was sensitive (100%) and specific (PPV, 74%) if non-pathogenic species were excluded; at PVAMC, ≥1 positive NTM culture identified 76% of cases with PPV of 41%. Application of the American Thoracic Society NTM microbiology criteria yielded the highest PPV (100% KPNC, 78% PVAMC). The sensitivity and predictive value of electronic microbiologic data for tuberculosis and NTM infections is generally high, but varies with different facilities or models of care. Unlike NTM, tuberculosis diagnostic codes have poor PPV, and in the absence of laboratory data, should be combined with anti-tuberculous therapy dispensings for pharmacoepidemiologic research. Copyright © 2010 John Wiley & Sons, Ltd.
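
    The predictive values quoted above are simple ratios over the manually reviewed records; a minimal sketch (the counts in the usage example are illustrative, not the study's):

```python
def ppv(true_pos: int, false_pos: int) -> float:
    """Positive predictive value: fraction of algorithm-flagged
    patients who are confirmed cases on manual record review."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Sensitivity: fraction of confirmed cases the algorithm flags."""
    return true_pos / (true_pos + false_neg)
```

    For instance, an algorithm that flags 100 charts, of which 87 are confirmed on review, has ppv(87, 13) = 0.87; if it missed 21 of 100 true cases, its sensitivity would be sensitivity(79, 21) = 0.79.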

  2. Fine mapping of genetic polymorphisms of pulmonary tuberculosis within chromosome 18q11.2 in the Chinese population: a case-control study.

    PubMed

    Dai, Yaoyao; Zhang, Xia; Pan, Hongqiu; Tang, Shaowen; Shen, Hongbing; Wang, Jianming

    2011-10-22

    Recently, one genome-wide association study identified a susceptibility locus of rs4331426 on chromosome 18q11.2 for tuberculosis in the African population. To validate the significance of this susceptibility locus in other areas, we conducted a case-control study in the Chinese population. The present study consisted of 578 cases and 756 controls. The SNP rs4331426 and six other tag SNPs within 100 kbp upstream and downstream of rs4331426 on chromosome 18q11.2 were genotyped using the TaqMan-based allelic discrimination system. As compared with the findings from the African population, genetic variation of the SNP rs4331426 was rare among the Chinese. No significant differences were observed in genotypes or allele frequencies of the tag SNPs between cases and controls either before or after adjusting for age, sex, education, smoking, and drinking history. However, we observed strong linkage disequilibrium of SNPs. Constructed haplotypes within this block were linked to altered risks of tuberculosis. For example, in comparison with the common haplotype AA(rs8087945-rs12456774), haplotypes AG(rs8087945-rs12456774) and GA(rs8087945-rs12456774) were associated with a decreased risk of tuberculosis, with adjusted odds ratios (95% confidence intervals) of 0.34 (0.27-0.42) and 0.22 (0.16-0.29), respectively. The susceptibility locus of rs4331426 discovered in the African population could not be validated in the Chinese population. None of the genetic polymorphisms we genotyped were related to tuberculosis in the single-point analysis. However, haplotypes on chromosome 18q11.2 might contribute to an individual's susceptibility. More work is necessary to identify the true causative variants of tuberculosis.
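
    The adjusted odds ratios and confidence intervals quoted above come from a regression model we cannot reproduce here, but for an unadjusted 2x2 haplotype-by-status table the same quantities follow from the standard Woolf (log) method, sketched below with illustrative counts:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Woolf-method 95% CI for a 2x2 table:
    a = cases with the haplotype,    b = cases without it,
    c = controls with the haplotype, d = controls without it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper
```

    An odds ratio below 1 whose upper confidence bound is also below 1, as for the two haplotypes above, indicates an association with decreased risk.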

  3. A combined disease management and process modeling approach for assessing and improving care processes: a fall management case-study.

    PubMed

    Askari, Marjan; Westerhof, Richard; Eslami, Saied; Medlock, Stephanie; de Rooij, Sophia E; Abu-Hanna, Ameen

    2013-10-01

    To propose a combined disease management and process modeling approach for evaluating and improving care processes, and demonstrate its usability and usefulness in a real-world fall management case study. We identified essential disease management related concepts and mapped them into explicit questions meant to expose areas for improvement in the respective care processes. We applied the disease management oriented questions to a process model of a comprehensive real-world fall prevention and treatment program covering primary and secondary care. We relied on interviews and observations to complete the process models, which were captured in UML activity diagrams. We conducted a preliminary evaluation of the usability of our approach by gauging the experience of the modeler and an external validator, and evaluated its usefulness by gathering feedback from stakeholders at an invitational conference of 75 attendees. The process model of the fall management program was organized around the clinical tasks of case finding, risk profiling, decision making, coordination and interventions. Applying the disease management questions to the process models exposed weaknesses in the process including: absence of program ownership, under-detection of falls in primary care, and lack of efficient communication among stakeholders due to lack of awareness of other stakeholders' workflows. The modelers experienced the approach as usable and the attendees of the invitational conference found the analysis results to be valid. The proposed disease management view of process modeling was usable and useful for systematically identifying areas of improvement in a fall management program. Although specifically applied to fall management, we believe our case study is characteristic of various disease management settings, suggesting the wider applicability of the approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. [Is there a place for the Glasgow-Blatchford score in the management of upper gastrointestinal bleeding?].

    PubMed

    Jerraya, Hichem; Bousslema, Amine; Frikha, Foued; Dziri, Chadli

    2011-12-01

    Upper gastrointestinal bleeding is a frequent cause of emergency hospital admission. Most severity scores include endoscopic findings in their computation. The Glasgow-Blatchford score is a validated score that is easy to calculate from simple clinical and biological variables and can identify patients at low or high risk of needing a therapeutic intervention (interventional endoscopy, surgery and/or transfusion). To validate the Glasgow-Blatchford score (GBS) retrospectively. The study included all patients admitted to the general surgery and anesthesiology departments of the Regional Hospital of Sidi Bouzid: 50 patients (35 men and 15 women) with a mean age of 58 years. The GBS was calculated for all patients. The series was divided into two groups: 26 cases received only medical treatment and 24 cases required transfusion and/or surgery. Univariate analysis was performed to compare these two groups, and the ROC curve was used to identify the cut-off point of the GBS. Sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) with 95% confidence intervals were calculated. The GBS was significantly different between the two groups (p < 0.0001). Using the ROC curve, it was determined that for the threshold GBS ≥ 7, Se = 96% (88-100%), Sp = 69% (51-87%), PPV = 74% (59-90%) and NPV = 95% (85-100%). This threshold is of interest for its NPV: if GBS < 7, medical treatment can be chosen with a risk of being wrong in only 5% of cases. The Glasgow-Blatchford score is based on simple clinical and laboratory variables. It can identify, in the emergency department, the cases that require only medical treatment and those whose management could require blood transfusion and/or surgical treatment.
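
    The threshold analysis described above can be sketched generically: sweep candidate cut-offs, compute the confusion-matrix metrics at each, and pick the best by some criterion. The sketch below uses Youden's J (Se + Sp - 1) and made-up scores; the authors' exact ROC procedure may differ:

```python
def threshold_metrics(scores_pos, scores_neg, cutoff):
    """Se/Sp/PPV/NPV when score >= cutoff is called 'high risk'.
    scores_pos: scores of patients who needed transfusion/surgery;
    scores_neg: scores of patients managed medically."""
    tp = sum(s >= cutoff for s in scores_pos)
    fn = len(scores_pos) - tp
    fp = sum(s >= cutoff for s in scores_neg)
    tn = len(scores_neg) - fp
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

def best_cutoff(scores_pos, scores_neg, candidates):
    """Pick the candidate cut-off maximising Youden's J = Se + Sp - 1."""
    def youden(c):
        m = threshold_metrics(scores_pos, scores_neg, c)
        return m["sensitivity"] + m["specificity"] - 1
    return max(candidates, key=youden)
```

    A high NPV at the chosen cut-off is what justifies the abstract's conclusion: patients scoring below the threshold can safely be managed with medical treatment alone.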

  5. Development and validation of an administrative case definition for inflammatory bowel diseases

    PubMed Central

    Rezaie, Ali; Quan, Hude; Fedorak, Richard N; Panaccione, Remo; Hilsden, Robert J

    2012-01-01

    BACKGROUND: A population-based database of inflammatory bowel disease (IBD) patients is invaluable to explore and monitor the epidemiology and outcome of the disease. In this context, an accurate and validated population-based case definition for IBD becomes critical for researchers and health care providers. METHODS: IBD and non-IBD individuals were identified through an endoscopy database in a western Canadian health region (Calgary Health Region, Calgary, Alberta). Subsequently, using a novel algorithm, a series of case definitions were developed to capture IBD cases in the administrative databases. In the second stage of the study, the criteria were validated in the Capital Health Region (Edmonton, Alberta). RESULTS: A total of 150 IBD case definitions were developed using 1399 IBD patients and 15,439 controls in the development phase. In the validation phase, 318,382 endoscopic procedures were searched and 5201 IBD patients were identified. After consideration of sensitivity, specificity and temporal stability of each validated case definition, a diagnosis of IBD was assigned to individuals who experienced at least two hospitalizations or had four physician claims, or two medical contacts in the Ambulatory Care Classification System database with an IBD diagnostic code within a two-year period (specificity 99.8%; sensitivity 83.4%; positive predictive value 97.4%; negative predictive value 98.5%). An alternative case definition was developed for regions without access to the Ambulatory Care Classification System database. A novel scoring system was developed that detected Crohn disease and ulcerative colitis patients with a specificity of >99% and a sensitivity of 99.1% and 86.3%, respectively. CONCLUSION: Through a robust methodology, a reproducible set of criteria to capture IBD patients through administrative databases was developed. The methodology may be used to develop similar administrative definitions for chronic diseases. PMID:23061064
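
    The winning case definition reads as a simple disjunction over counts of administrative contacts, so it can be written down directly; a sketch of that decision rule (the function and argument names are ours):

```python
def meets_ibd_case_definition(hospitalizations: int,
                              physician_claims: int,
                              accs_contacts: int) -> bool:
    """Administrative IBD case definition from the abstract: within a
    two-year period, at least two hospitalizations, or four physician
    claims, or two Ambulatory Care Classification System contacts,
    each carrying an IBD diagnostic code."""
    return (hospitalizations >= 2
            or physician_claims >= 4
            or accs_contacts >= 2)
```

    The abstract also notes that an alternative definition was developed for regions without access to the Ambulatory Care Classification System database.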

  6. Use of applied theatre in health research dissemination and data validation: a pilot study from South Africa

    PubMed Central

    Stuttaford, Maria; Bryanston, Claudette; Hundt, Gillian Lewando; Connor, Myles; Thorogood, Margaret; Tollman, Steve

    2010-01-01

    This article reports on a pilot study of the use of applied theatre in the dissemination of health research findings and validation of data. The study took place in South Africa, as part of the Southern Africa Stroke Prevention Initiative (SASPI) and was based at the University/Medical Research Council Rural Public Health and Health Transitions Research Unit (also known as the Agincourt Unit). The aim of SASPI was to investigate the prevalence of stroke and understand the social context of stroke. It was decided to use an applied theatre approach for validating the data and disseminating findings from the anthropological component of the study. The pilot study found that applied theatre worked better in smaller community groups. It allowed data validation and it elicited ideas for future interventions resulting from the health research findings. Evaluation methods of the impact of applied theatre as a vehicle for the dissemination and communication of research findings require further development. PMID:16322042

  7. Cross-Cultural Applicability of the Montreal Cognitive Assessment (MoCA): A Systematic Review.

    PubMed

    O'Driscoll, Ciarán; Shaikh, Madiha

    2017-01-01

    The Montreal Cognitive Assessment (MoCA) is widely used to screen for mild cognitive impairment (MCI). While there are many available versions, the cross-cultural validity of the assessment has not been explored sufficiently. We aimed to interrogate the validity of the MoCA in a cross-cultural context: in differentiating MCI from normal controls (NC); and identifying cut-offs and adjustments for age and education where possible. This review sourced a wide range of studies including case-control studies. In addition, we report findings for differentiating dementias from NC and MCI from dementias; however, these were not considered an appropriate use of the MoCA. Because heterogeneity across studies was assumed, a meta-analysis was not conducted. Quality ratings, forest plots of validated studies (sensitivity and specificity) with covariates (suggested cut-offs, age, education and country), and a summary receiver operating characteristic curve are presented. The results showed a wide range in suggested cut-offs for MCI cross-culturally, with variability in levels of sensitivity and specificity ranging from low to high. Poor methodological rigor appears to have affected reported accuracy and validity of the MoCA. The review highlights the necessity for cross-cultural considerations when using the MoCA, and for recognizing it as a screen and not a diagnostic tool. Appropriate cut-offs and point adjustments for education are suggested.

  8. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    ERIC Educational Resources Information Center

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  9. Further assessment of a method to estimate reliability and validity of qualitative research findings.

    PubMed

    Hinds, P S; Scandrett-Hibden, S; McAulay, L S

    1990-04-01

    The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.

  10. Validation of intermediate end points in cancer research.

    PubMed

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
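
    The attributable proportion used above as the surrogacy criterion can be computed, under the usual case-based (Miettinen) formula, from the prevalence of the intermediate end point among cases and the relative risk; the figures below are hypothetical:

```python
def attributable_proportion(p_cases_with_marker, relative_risk):
    """Case-based attributable proportion: the fraction of cancer cases
    statistically attributable to the intermediate end point, given the
    proportion of cases carrying the marker and the relative risk."""
    return p_cases_with_marker * (relative_risk - 1.0) / relative_risk

# Hypothetical example: 80% of cases carry the marker, RR = 5
ap = attributable_proportion(0.8, 5.0)  # 0.8 * (4/5) = 0.64
```

A value near 1.0 would support valid surrogacy under the strategy described in the abstract; a value near 0 would argue against it.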

  11. On the validity of cosmological Fisher matrix forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolz, Laura; Kilbinger, Martin; Weller, Jochen

    2012-09-01

    We present a comparison of Fisher matrix forecasts for cosmological probes with Monte Carlo Markov Chain (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation of state parameters w_0 and w_a. For purely geometrical probes, and especially when marginalising over w_a, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by between 30% and 70%. For cases including structure formation such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecast errors changing only at the 5% level. We then explore non-linear transformations resulting in physically-motivated parameters and investigate whether these parameterisations exhibit Gaussian behaviour. We conclude that for the purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
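
    The Fisher-matrix forecast being tested against MCMC reduces, in the Gaussian approximation, to reading marginalized 1-sigma errors off the diagonal of the inverse Fisher matrix. A minimal sketch with an invented 2 x 2 matrix for (w_0, w_a); the numbers are illustrative only, not a real survey forecast:

```python
def marginalized_errors_2x2(F):
    """1-sigma marginalized errors from a 2x2 Fisher matrix:
    sigma_i = sqrt((F^-1)_ii). Valid only when the likelihood is
    close to Gaussian in the chosen parameters."""
    (a, b), (c, d) = F
    det = a * d - b * c
    inv_diag = (d / det, a / det)  # diagonal of the matrix inverse
    return [inv_diag[0] ** 0.5, inv_diag[1] ** 0.5]

# Hypothetical Fisher matrix for (w_0, w_a); off-diagonal terms encode
# the parameter degeneracy that makes the contours elongated
F = [[400.0, -120.0], [-120.0, 40.0]]
sigma_w0, sigma_wa = marginalized_errors_2x2(F)
```

When the true likelihood is banana-shaped, as the abstract reports for purely geometrical probes, these Gaussian-approximation errors can differ substantially from the MCMC ones.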

  12. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem, and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand to a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, one minimizing cost and one minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.

  13. The Competition Between a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local source, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small scale laboratory experiments were carried out. Water was used as the working medium with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  14. Arsenic levels in drinking water and mortality of liver cancer in Taiwan.

    PubMed

    Lin, Hung-Jung; Sung, Tzu-I; Chen, Chi-Yi; Guo, How-Ran

    2013-11-15

    The carcinogenic effect of arsenic is well documented, but epidemiologic data on liver cancer were limited. To evaluate the dose-response relationship between arsenic in drinking water and mortality of liver cancer, we conducted a study in 138 villages in the southwest coast area of Taiwan. We assessed arsenic levels in drinking water using data from a survey conducted by the government and reviewed death certificates from 1971 to 1990 to identify liver cancer cases. Using village as the unit, we conducted multi-variate regression analyses and then performed post hoc analyses to validate the findings. During the 20-year period, 802 male and 301 female mortality cases of liver cancer were identified. After adjusting for age, arsenic levels above 0.64 mg/L were associated with an increase in the liver cancer mortality in both genders, but no significant effect was observed for lower exposure categories. Post hoc analyses and a review of literature supported these findings. We concluded that exposures to high arsenic levels in drinking water are associated with the occurrence of liver cancer, but such an effect is not prominent at exposure levels lower than 0.64 mg/L. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr

    2017-09-01

    We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

  16. Validation of the trait anxiety scale for state-trait anxiety inventory in suicide victims and living controls of Chinese rural youths.

    PubMed

    Zhang, Jie; Gao, Qi

    2012-01-01

    This study evaluated the validity of the STAI Trait-Anxiety Scale in suicide cases and living community controls in rural China. The participants were 392 suicides and 416 controls. Cronbach's alpha was computed to evaluate internal consistency. The Spearman correlation coefficient between the Trait-Anxiety Scale and other instruments was calculated to evaluate external validity, and exploratory factor analysis was used to evaluate construct validity. The results showed that Cronbach's alpha was .891 and .787 in the case and control groups, respectively. Most of the correlations between instruments were significant. We found 2 factors in cases and 3 factors in controls. We could cautiously infer that the Trait-Anxiety Scale is an adequate tool to measure trait anxiety through proxy data in suicide victims and living controls in rural China.

  17. Use of Bayesian Networks to Probabilistically Model and Improve the Likelihood of Validation of Microarray Findings by RT-PCR

    PubMed Central

    English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.

    2014-01-01

    Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084

  18. Reducing false positives of microcalcification detection systems by removal of breast arterial calcifications.

    PubMed

    Mordang, Jan-Jurre; Gubern-Mérida, Albert; den Heeten, Gerard; Karssemeijer, Nico

    2016-04-01

    In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists' detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings. Removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. A multistage method is proposed for the removal of BAC findings. The first stage consists of a microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied where cases are selected which contain BACs. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. The CADe system was evaluated with and without BACs removal. Here, both systems were applied to a validation set containing 1088 cases, of which 95 contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristics and receiver operating characteristics analyses were carried out. Performance between the two systems was compared at 0.98 and 0.95 specificity. At a specificity of 0.98, the sensitivity increased from 37% to 52%; at a specificity of 0.95, it increased from 62% to 76%.
Partial areas under the curve in the specificity range of 0.8-1.0 were significantly different between the system without BACs removal and the system with BACs removal, 0.129 ± 0.009 versus 0.144 ± 0.008 (p<0.05), respectively. Additionally, the sensitivity at one false positive per 50 cases and one false positive per 25 cases increased as well, 37% versus 51% (p<0.05) and 58% versus 67% (p<0.05) sensitivity, respectively. Additionally, the CADe system with BACs removal reduces the number of false positives per case by 29% on average. The same sensitivity at one false positive per 50 cases in the CADe system without BACs removal can be achieved at one false positive per 80 cases in the CADe system with BACs removal. By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, in particular, at false positive rates representative for operating points used in screening.
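
    The partial area under the ROC curve over the specificity range 0.8-1.0 reported above is equivalent to integrating the ROC curve for false-positive rates up to 0.2. A trapezoidal-rule sketch over hypothetical operating points (not the study's actual curve):

```python
def partial_auc(fpr, tpr, max_fpr=0.2):
    """Partial area under the ROC curve for FPR in [0, max_fpr]
    (i.e., specificity in [1 - max_fpr, 1]), by the trapezoidal rule.
    Points must be sorted by increasing FPR."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(fpr, tpr), zip(fpr[1:], tpr[1:])):
        if x0 >= max_fpr:
            break
        x1c = min(x1, max_fpr)
        # linearly interpolate TPR at the clipped right edge
        y1c = y0 + (y1 - y0) * (x1c - x0) / (x1 - x0) if x1 > x0 else y1
        area += 0.5 * (y0 + y1c) * (x1c - x0)
    return area

# Hypothetical (FPR, TPR) operating points for illustration only
fpr = [0.0, 0.05, 0.2, 1.0]
tpr = [0.0, 0.62, 0.76, 1.0]
pauc = partial_auc(fpr, tpr, max_fpr=0.2)
```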

  19. Preliminary Development and Validation of the Mindful Student Questionnaire

    ERIC Educational Resources Information Center

    Renshaw, Tyler L.

    2017-01-01

    Research validating mindfulness-based interventions with youths and in schools is growing, yet research validating measures of youths' mindfulness in schools has received far less empirical attention. The present study makes the case for and reports on the preliminary development and validation of a new, 15-item, multidimensional, self-report…

  20. Is there a step-wise migration in Nigeria? A case study of the migrational histories of migrants in Lagos.

    PubMed

    Afolayan, A A

    1985-09-01

    "The paper sets out to test whether or not the movement pattern of people in Nigeria is step-wise. It examines the spatial order in the country and the movement pattern of people. It then analyzes the survey data and tests for the validity of step-wise migration in the country. The findings show that step-wise migration cannot adequately describe all the patterns observed." The presence of large-scale circulatory migration between rural and urban areas is noted. Ways to decrease the pressure on Lagos by developing intermediate urban areas are considered.

  1. Using MicroCT to Assess Periodontal Regeneration Outcomes-Comparison of Image-Based and Histologic Results: A Case Report.

    PubMed

    Rebaudi, Alberto; Trisi, Paolo; Pagni, Giorgio; Wang, Hom-Lay

    The purpose of this study was to compare microcomputed tomography (microCT) and histologic analysis outcomes of a periodontal regeneration of a human defect treated with a polylactic- and polyglycolic-acid copolymer. At 11 months following the grafting procedure, the root with the surrounding periodontal tissues was removed and analyzed using microCT and histologic techniques. The results suggest that microCT three-dimensional analysis may be used in synergy with two-dimensional histologic sections to provide additional information for studying the regeneration outcomes normally reported by histologic biopsies in humans. Additional data is needed to validate these findings.

  2. Insights into horizontal canal benign paroxysmal positional vertigo from a human case report.

    PubMed

    Aron, Margaret; Bance, Manohar

    2013-12-01

    For horizontal canal benign paroxysmal positional vertigo, determination of the pathologic side is difficult and based on many physiological assumptions. This article reports findings on a patient who had one dysfunctional inner ear and who presented with horizontal canal benign paroxysmal positional vertigo, giving us a relatively pure model for observing nystagmus arising in a subject in whom the affected side is known a priori. It is an interesting human model corroborating theories of nystagmus generation in this pathology and also serves to validate Ewald's second law in a living human subject. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Reliability and Validity of Instruments for Assessing Perinatal Depression in African Settings: Systematic Review and Meta-Analysis

    PubMed Central

    Tsai, Alexander C.; Scott, Jennifer A.; Hung, Kristin J.; Zhu, Jennifer Q.; Matthews, Lynn T.; Psaros, Christina; Tomlinson, Mark

    2013-01-01

    Background A major barrier to improving perinatal mental health in Africa is the lack of locally validated tools for identifying probable cases of perinatal depression or for measuring changes in depression symptom severity. We systematically reviewed the evidence on the reliability and validity of instruments to assess perinatal depression in African settings. Methods and Findings Of 1,027 records identified through searching 7 electronic databases, we reviewed 126 full-text reports. We included 25 unique studies, which were disseminated in 26 journal articles and 1 doctoral dissertation. These enrolled 12,544 women living in nine different North and sub-Saharan African countries. Only three studies (12%) used instruments developed specifically for use in a given cultural setting. Most studies provided evidence of criterion-related validity (20 [80%]) or reliability (15 [60%]), while fewer studies provided evidence of construct validity, content validity, or internal structure. The Edinburgh postnatal depression scale (EPDS), assessed in 16 studies (64%), was the most frequently used instrument in our sample. Ten studies estimated the internal consistency of the EPDS (median estimated coefficient alpha, 0.84; interquartile range, 0.71-0.87). For the 14 studies that estimated sensitivity and specificity for the EPDS, we constructed 2 x 2 tables for each cut-off score. Using a bivariate random-effects model, we estimated a pooled sensitivity of 0.94 (95% confidence interval [CI], 0.68-0.99) and a pooled specificity of 0.77 (95% CI, 0.59-0.88) at a cut-off score of ≥9, with higher cut-off scores yielding greater specificity at the cost of lower sensitivity. Conclusions The EPDS can reliably and validly measure perinatal depression symptom severity or screen for probable postnatal depression in African countries, but more validation studies on other instruments are needed. 
In addition, more qualitative research is needed to adequately characterize local understandings of perinatal depression-like syndromes in different African contexts. PMID:24340036

  4. Expression signature as a biomarker for prenatal diagnosis of trisomy 21.

    PubMed

    Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut

    2013-01-01

    A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcome does not exist. Transcriptome analysis is a powerful tool to capture differentially expressed genes (DEG), which can be used as biomarker-diagnostic-predictive tool for various conditions in prenatal setting. In search of biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies from GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification on an independent sample of 16 cases with Ts21 and 32 controls. The classification performance of Ts21 status based on expression profiling was performed using supervised machine learning algorithm and evaluated using a leave-one-out cross validation approach. Global gene expression profiling has revealed significant expression changes between normal and Ts21 samples, which in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21, comprising of 9 gene expression profiles. In addition to biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on a larger sample set 2, validated using RT-PCR experiment (AUC=0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. 
    As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.
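
    The leave-one-out cross-validation used to evaluate the classifier above can be sketched generically; the toy nearest-centroid classifier and two-gene profiles below are invented stand-ins, not the study's nine-gene biomarker or its supervised learning algorithm:

```python
def loocv_accuracy(samples, labels, classify):
    """Leave-one-out cross-validation: train on all samples but one,
    predict the held-out sample, and repeat for every sample."""
    correct = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

def nearest_centroid(train_x, train_y, x):
    """Toy classifier: assign x to the class with the nearest mean profile."""
    best, best_d = None, float("inf")
    for cls in set(train_y):
        rows = [s for s, y in zip(train_x, train_y) if y == cls]
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        d = sum((a - b) ** 2 for a, b in zip(x, centroid))
        if d < best_d:
            best, best_d = cls, d
    return best

# Hypothetical two-gene expression profiles for illustration
samples = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1], [2.0, 2.1], [2.2, 1.9], [1.9, 2.0]]
labels = ["euploid", "euploid", "euploid", "Ts21", "Ts21", "Ts21"]
acc = loocv_accuracy(samples, labels, nearest_centroid)
```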

  5. Validation of the 'United Registries for Clinical Assessment and Research' [UR-CARE], a European Online Registry for Clinical Care and Research in Inflammatory Bowel Disease.

    PubMed

    Burisch, Johan; Gisbert, Javier P; Siegmund, Britta; Bettenworth, Dominik; Thomsen, Sandra Bohn; Cleynen, Isabelle; Cremer, Anneline; Ding, Nik John Sheng; Furfaro, Federica; Galanopoulos, Michail; Grunert, Philip Christian; Hanzel, Jurij; Ivanovski, Tamara Knezevic; Krustins, Eduards; Noor, Nurulamin; O'Morain, Neil; Rodríguez-Lago, Iago; Scharl, Michael; Tua, Julia; Uzzan, Mathieu; Ali Yassin, Nuha; Baert, Filip; Langholz, Ebbe

    2018-04-27

    The 'United Registries for Clinical Assessment and Research' [UR-CARE] database is an initiative of the European Crohn's and Colitis Organisation [ECCO] to facilitate daily patient care and research studies in inflammatory bowel disease [IBD]. Herein, we sought to validate the database by using fictional case histories of patients with IBD that were to be entered by observers of varying experience in IBD. Nineteen observers entered five patient case histories into the database. After 6 weeks, all observers entered the same case histories again. For each case history, 20 key variables were selected to calculate the accuracy for each observer. We assumed that the database was such that ≥ 90% of the entered data would be correct. The overall proportion of correctly entered data was calculated using a beta-binomial regression model to account for inter-observer variation and compared to the expected level of validity. Re-test reliability was assessed using McNemar's test. For all case histories, the overall proportion of correctly entered items and their confidence intervals included the target of 90% (Case 1: 92% [88-94%]; Case 2: 87% [83-91%]; Case 3: 93% [90-95%]; Case 4: 97% [94-99%]; Case 5: 91% [87-93%]). These numbers did not differ significantly from those found 6 weeks later [McNemar's test p > 0.05]. The UR-CARE database appears to be feasible, valid and reliable as a tool and easy to use regardless of prior user experience and level of clinical IBD experience. UR-CARE has the potential to enhance future European collaborations regarding clinical research in IBD.
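
    The re-test reliability check reported above uses McNemar's test on paired correct/incorrect entries. A minimal sketch of the test statistic and p-value; the discordant counts are hypothetical, not the study's data:

```python
import math

def mcnemar_p(b, c):
    """McNemar's chi-square test (without continuity correction) for
    paired binary outcomes; b and c are the two discordant counts
    (e.g., correct at entry but wrong at re-entry, and vice versa).
    Returns the two-sided p-value from the chi-square(1) distribution."""
    if b + c == 0:
        return 1.0
    chi2 = (b - c) ** 2 / (b + c)
    # survival function of chi-square with 1 df: erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(chi2 / 2.0))

# Hypothetical discordant counts between entry and 6-week re-entry
p = mcnemar_p(b=6, c=9)  # p > 0.05: no significant entry/re-entry difference
```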

  6. Plumes and Blooms: Observations, Analysis and Modeling for SIMBIOS

    NASA Technical Reports Server (NTRS)

    Maritorena, S.; Siegel, D. A.; Nelson, N. B.

    2004-01-01

    The goal of the Plumes and Blooms (PnB) project is to develop, validate and apply state-of-the-art ocean color algorithms to imagery for quantifying sediment plumes and phytoplankton blooms in the Case II environment of the Santa Barbara Channel. We conduct monthly to twice-monthly transect observations across the Santa Barbara Channel to develop an algorithm development and product validation data set. A primary goal is to use the PnB field data set to objectively tune semi-analytical models of ocean color for this site and apply them using available satellite imagery (SeaWiFS and MODIS). However, the comparison between PnB field observations and satellite estimates of primary products has been disappointing. We find that field estimates of water-leaving radiance correspond poorly to satellite estimates for both SeaWiFS and MODIS local area coverage imagery. We believe this is due to poor atmospheric correction caused by the complex mixtures of aerosol types found in these near-coastal regions.

  7. Reconstruction of the 1945 Wieringermeer Flood

    NASA Astrophysics Data System (ADS)

    Hoes, O. A. C.; Hut, R. W.; van de Giesen, N. C.; Boomgaard, M.

    2013-03-01

    The present state-of-the-art in flood risk assessment focuses on breach models, flood propagation models, and economic modelling of flood damage. However, models need to be validated with real data to avoid erroneous conclusions. Such reference data can either be historical or obtained from controlled experiments. The inundation of the Wieringermeer polder in the Netherlands in April 1945 is one of the few examples for which sufficient historical information is available. The objective of this article is to compare the flood simulation with flood data from 1945. The context, the breach growth process and the flood propagation are explained. Key findings for current flood risk management address the importance of the drainage canal network during the inundation of a polder, and the uncertainty that follows from not knowing the breach growth parameters. This case study shows that historical floods provide valuable data for the validation of models and reveal lessons that are applicable in current-day flood risk management.

  8. Identification of appropriate reference genes for human mesenchymal stem cell analysis by quantitative real-time PCR.

    PubMed

    Li, Xiuying; Yang, Qiwei; Bai, Jinping; Xuan, Yali; Wang, Yimin

    2015-01-01

    Normalization to a reference gene is the method of choice for quantitative reverse transcription-PCR (RT-qPCR) analysis. The stability of reference genes is critical for accurate experimental results and conclusions. We have evaluated the expression stability of eight commonly used reference genes in four different human mesenchymal stem cell (MSC) types. Using the geNorm, NormFinder and BestKeeper algorithms, we show that beta-2-microglobulin and peptidyl-prolyl isomerase A were the optimal reference genes for normalizing RT-qPCR data obtained from MSC, whereas the TATA box binding protein was not suitable due to its extensive variability in expression. Our findings emphasize the significance of validating reference genes for qPCR analyses. We offer a short list of reference genes to use for normalization and recommend some commercially-available software programs as a rapid approach to validate reference genes. We also demonstrate that two frequently used reference genes, β-actin and glyceraldehyde-3-phosphate dehydrogenase, are unsuitable in many cases.
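
    The geNorm algorithm cited in this record ranks reference genes by a stability measure M: the average standard deviation of a gene's pairwise log2 expression ratios with every other candidate, across samples. A simplified sketch (the full algorithm also iteratively excludes the least stable gene); expression values are invented, with gene symbols reused from the record for readability:

```python
import math
from statistics import stdev

def genorm_m(expr):
    """geNorm-style stability measure: for each candidate reference gene,
    M is the mean standard deviation of its log2 expression ratios with
    every other candidate across samples. Lower M means more stable."""
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m

# Invented expression values across four samples, for illustration only
expr = {
    "B2M":  [10.0, 12.0, 11.0, 10.5],
    "PPIA": [20.0, 24.5, 21.8, 21.2],
    "TBP":  [5.0, 15.0, 4.0, 12.0],  # deliberately made unstable
}
m = genorm_m(expr)
# The unstable gene receives the largest M value
```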

  9. Performance analysis of clustering techniques over microarray data: A case study

    NASA Astrophysics Data System (ADS)

    Dash, Rasmita; Misra, Bijan Bihari

    2018-03-01

    Handling big data is one of the major issues in the field of statistical data analysis. In such investigations, cluster analysis plays a vital role in dealing with large-scale data. There are many clustering techniques, each with a different cluster analysis approach, but which approach suits a particular dataset is difficult to predict. To deal with this problem, a grading approach is introduced over several clustering techniques to identify a stable technique. Because the grading depends on the characteristics of the dataset as well as on the validity indices, a two-stage grading approach is implemented. In this study the grading approach is applied to five clustering techniques: hybrid swarm-based clustering (HSC), k-means, partitioning around medoids (PAM), vector quantization (VQ) and agglomerative nesting (AGNES). The experimentation is conducted over five microarray datasets with seven validity indices. The finding of the grading approach that a clustering technique is significantly better is further confirmed by the Nemenyi post-hoc hypothesis test.

  10. Development of short-form measures to assess four types of elder mistreatment: Findings from an evidence-based study of APS elder abuse substantiation decisions.

    PubMed

    Beach, Scott R; Liu, Pi-Ju; DeLiema, Marguerite; Iris, Madelyn; Howe, Melissa J K; Conrad, Kendon J

    2017-01-01

    Improving the standardization and efficiency of adult protective services (APS) investigations is a top priority in APS practice. Using data from the Elder Abuse Decision Support System (EADSS), we developed short-form measures of four types of elder abuse: financial, emotional/psychological, physical, and neglect. The EADSS data set contains 948 elder abuse cases (age 60+) with yes/no abuse substantiation decisions for each abuse type following a 30-day investigation. Item sensitivity/specificity analyses were conducted on long-form items with the substantiation decision for each abuse type as the criterion. Validity was further tested using receiver operating characteristic (ROC) curve analysis, correlation with the long forms, and internal consistency. The four resulting short-form measures, containing 36 of the 82 original items, have validity similar to the original long forms. These short forms can be used to standardize and increase the efficiency of APS investigations, and may also offer researchers new options for brief elder abuse assessments.
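    The item-level screening described above boils down to computing sensitivity and specificity for each candidate yes/no item against the substantiation decision, then shortlisting the items that score well on both. A minimal sketch of that computation (the function name and data are illustrative, not the actual EADSS items):

```python
def item_sens_spec(item_responses, substantiated):
    """Sensitivity and specificity of one yes/no screening item against
    the case-level substantiation decision.

    item_responses, substantiated: parallel lists of booleans, one
    entry per investigated case.
    """
    tp = sum(1 for r, s in zip(item_responses, substantiated) if r and s)
    fn = sum(1 for r, s in zip(item_responses, substantiated) if not r and s)
    tn = sum(1 for r, s in zip(item_responses, substantiated) if not r and not s)
    fp = sum(1 for r, s in zip(item_responses, substantiated) if r and not s)
    # guard against empty classes so the sketch never divides by zero
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```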

  11. Development of short-form measures to assess four types of elder mistreatment: Findings from an evidence-based study of APS elder abuse substantiation decisions

    PubMed Central

    Beach, Scott R.; Liu, Pi-Ju; DeLiema, Marguerite; Iris, Madelyn; Howe, Melissa J.K.; Conrad, Kendon J.

    2018-01-01

    Improving the standardization and efficiency of adult protective services (APS) investigations is a top priority in APS practice. Using data from the Elder Abuse Decision Support System (EADSS), we developed short-form measures of four types of elder abuse: financial, emotional/psychological, physical, and neglect. The EADSS data set contains 948 elder abuse cases (age 60+) with yes/no abuse substantiation decisions for each abuse type following a 30-day investigation. Item sensitivity/specificity analyses were conducted on long-form items with the substantiation decision for each abuse type as the criterion. Validity was further tested using receiver operating characteristic (ROC) curve analysis, correlation with the long forms, and internal consistency. The four resulting short-form measures, containing 36 of the 82 original items, have validity similar to the original long forms. These short forms can be used to standardize and increase the efficiency of APS investigations, and may also offer researchers new options for brief elder abuse assessments. PMID:28590799

  12. Quality control of colonoscopy procedures: a prospective validated method for the evaluation of professional practices applicable to all endoscopic units.

    PubMed

    Coriat, R; Pommaret, E; Chryssostalis, A; Viennot, S; Gaudric, M; Brezault, C; Lamarque, D; Roche, H; Verdier, D; Parlier, D; Prat, F; Chaussade, S

    2009-02-01

    To produce valid information, an evaluation of professional practices has to assess the quality of all practices before, during and after the procedure under study. Several auditing techniques have been proposed for colonoscopy. The purpose of this work is to describe a straightforward, original, validated method for the prospective evaluation of professional practices in the field of colonoscopy, applicable in all endoscopy units without increasing staff workload. Pertinent quality-control criteria (14 items) were identified by the endoscopists at the Cochin Hospital and were compatible with: findings in the available literature; guidelines proposed by the Superior Health Authority; and application in any endoscopy unit. Prospective routine data were collected and the methodology validated by evaluating 50 colonoscopies every quarter for one year. The relevance of the criteria was assessed using data collected during four separate periods. The standard checklist was complete for 57% of the colonoscopy procedures. The colonoscopy procedure was appropriate according to national guidelines in 94% of cases. These observations were particularly noteworthy: the quality of the colonic preparation was insufficient for 9% of the procedures; complete colonoscopy was achieved for 93% of patients; and 0.38 adenomas and 0.045 carcinomas were identified per colonoscopy. This simple and reproducible method can be used for valid quality-control audits in all endoscopy units. In France, unit-wide application of this method enables endoscopists to validate 100 of the 250 points required for continuing medical education. This is a quality-control tool that can be applied annually, using a random month, to evaluate any changes in routine practices.

  13. Recognising and Validating Outcomes of Non-Accredited Learning: A Practical Approach.

    ERIC Educational Resources Information Center

    Greenwood, Maggie, Ed.; Hayes, Amanda, Ed.; Turner, Cheryl, Ed.; Vorhaus, John, Ed.

    A group of adult educators in England conducted seven case studies to identify strategies for recognizing adult students' learning progress in nonaccredited programs. The case studies identified the following elements of good practice in the process of recording and validating achievement: (1) initial identification of learning objectives; (2)…

  14. A Human Proximity Operations System test case validation approach

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so that the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  15. The need for a paradigm shift in adherence research: The case of ADHD.

    PubMed

    Khan, Muhammad Umair; Kohn, Michael; Aslani, Parisa

    2018-04-30

    Nonadherence to long-term medications undermines optimal health outcomes. There is an abundance of research on measuring and identifying factors affecting medication adherence in a range of chronic medical conditions. However, there is a lack of standardisation in adherence research, namely in the methods and measures used. In the case of attention deficit hyperactivity disorder, this lack of standardisation makes it difficult to compare and combine findings and to draw meaningful conclusions. Standardisation should commence with a universally accepted categorisation or taxonomy of adherence that takes into consideration the dynamic nature of medication-taking. This should then be followed by the use of valid and reliable measures of adherence that can accurately quantify adherence at any of its phases and provide useful information that can be utilised in planning targeted interventions to improve adherence throughout the patient's medication-taking journey. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. An Investigation of Agility Issues in Scrum Teams Using Agility Indicators

    NASA Astrophysics Data System (ADS)

    Pikkarainen, Minna; Wang, Xiaofeng

    Agile software development methods have emerged and become increasingly popular in recent years, yet the issues encountered by software development teams that strive to achieve agility using agile methods have still to be explored systematically. Built upon a previous study that established a set of indicators of agility, this study investigates which issues are manifested in software development teams using agile methods, focusing particularly on Scrum teams. In other words, the goal of the chapter is to evaluate Scrum teams using agility indicators and thereby to further validate the previously presented agility indicators with additional cases. A multiple case study research method is employed. The findings of the study reveal that teams using Scrum do not necessarily achieve agility in terms of team autonomy, sharing, stability and embraced uncertainty. Possible reasons include a previous organizational plan-driven culture, resistance towards the Scrum roles and changing resources.

  17. Research on simulation based material delivery system for an automobile company with multi logistics center

    NASA Astrophysics Data System (ADS)

    Luo, D.; Guan, Z.; Wang, C.; Yue, L.; Peng, L.

    2017-06-01

    Distribution of parts to assembly lines is significant for companies seeking to improve production. This research investigates the optimization of the distribution method of a logistics system at a third-party logistics company that provides professional services to an automobile manufacturing case company in China. It examines the leveling of material distribution and the unloading platforms of the automobile logistics enterprise, and proposes a logistics distribution strategy, a material classification method, and logistics scheduling. Moreover, the simulation tool Simio is applied to the assembly line logistics system, which helps to find and validate an optimized distribution scheme through simulation experiments. Experimental results indicate that, compared with the method originally employed by the case company, the proposed scheme balances and levels the material flow and relieves congestion at the unloading platforms in an efficient way.

  18. Hyperbaric oxygen treatment for Parkinson's disease with severe depression and anxiety: A case report.

    PubMed

    Xu, Jin-Jin; Yang, Si-Tong; Sha, Ying; Ge, Yuan-Yuan; Wang, Jian-Meng

    2018-03-01

    Patients with Parkinson's disease (PD) frequently suffer from psychiatric disorders, and treating these symptoms while managing the motor symptoms associated with PD can be a therapeutic challenge. We report the case of a PD patient with severe depression and anxiety who refused treatment with dopamine agonists or SSRIs, the most common treatments for PD patients suffering from psychiatric symptoms. Parkinson's disease with severe depression and anxiety. This man was treated with hyperbaric oxygen for 30 days. Clinical assessment scores for depression and anxiety, including the Unified Parkinson's Disease Rating Scale I (UPDRS I), UPDRS II, Hamilton Depression Rating Scale, and Hamilton Anxiety Rating Scale, improved following the hyperbaric oxygen treatment. Hyperbaric oxygen treatment may be a potential therapeutic method for PD patients suffering from depression and anxiety. Further research is needed to validate this finding and explore a potential mechanism.

  19. Numerical Study on Sensitivity of Pollutant Dispersion on Turbulent Schmidt Number in a Street Canyon

    NASA Astrophysics Data System (ADS)

    WANG, J.; Kim, J.

    2014-12-01

    In this study, the sensitivity of pollutant dispersion to the turbulent Schmidt number (Sct) was investigated in a street canyon using a computational fluid dynamics (CFD) model. For this, numerical simulations with systematically varied Sct were performed and the CFD model results were validated against wind-tunnel measurement data. The results showed that the root mean square error (RMSE) was quite dependent on Sct, and the dispersion patterns of a non-reactive scalar pollutant differed considerably among the simulations with different Sct. The RMSE was lowest in the case of Sct = 0.35, and the dispersion pattern in that case was most similar to the wind-tunnel data. Numerical simulations using a spatially weighted Sct were additionally performed in order to best reproduce the wind-tunnel data. The detailed method and procedure used to find the best reproduction will be presented.
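    The model selection step described here is simple to sketch: compute the RMSE of each simulated concentration field against the wind-tunnel data and keep the Sct value with the smallest error. A minimal illustration (the Sct values and concentrations below are invented for the example, not the study's data):

```python
import math

def best_schmidt_number(simulated, measured):
    """Pick the turbulent Schmidt number whose simulated concentration
    field best matches the wind-tunnel data by RMSE.

    simulated: dict Sct -> list of concentrations at measurement points.
    measured: list of wind-tunnel concentrations at the same points.
    """
    def rmse(pred):
        return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, measured))
                         / len(measured))
    # minimize RMSE over the candidate Sct values
    return min(simulated, key=lambda sct: rmse(simulated[sct]))
```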

  20. Validation of the GreenLight™ Simulator and development of a training curriculum for photoselective vaporisation of the prostate.

    PubMed

    Aydin, Abdullatif; Muir, Gordon H; Graziano, Manuela E; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-06-01

    To assess face, content and construct validity, and feasibility and acceptability of the GreenLight™ Simulator as a training tool for photoselective vaporisation of the prostate (PVP), and to establish learning curves and develop an evidence-based training curriculum. This prospective, observational and comparative study recruited novice (25 participants), intermediate (14) and expert-level (seven) urologists from the UK and Europe at the 28th European Association of Urological Surgeons Annual Meeting 2013. A group of novices (12 participants) performed 10 sessions of subtask training modules followed by a long operative case, whereas a second group (13) performed five sessions of a given case module. Intermediate and expert groups performed all training modules once, followed by one operative case. The outcome measures for learning curves and construct validity were time to task, coagulation time, vaporisation time, average sweep speed, average laser distance, blood loss, operative errors, and instrument cost. Face and content validity, feasibility and acceptability were addressed through a quantitative survey. Construct validity was demonstrated in two of five training modules (P = 0.038; P = 0.018) and in a considerable number of case metrics (P = 0.034). Learning curves were seen in all five training modules (P < 0.001) and significant reductions in case operative time (P < 0.001) and error (P = 0.017) were seen. An evidence-based training curriculum, to help trainees acquire transferable skills, was produced using the results. This study has shown the GreenLight Simulator to be a valid and useful training tool for PVP. It is hoped that by using the training curriculum for the GreenLight Simulator, novice trainees can acquire skills and knowledge to a predetermined level of proficiency. © 2014 The Authors. BJU International © 2014 BJU International.

  1. Reexamining the Writing Apprehension Measure

    ERIC Educational Resources Information Center

    Autman, Hamlet; Kelly, Stephanie

    2017-01-01

    This article contains two measurement development studies on writing apprehension. Study 1 reexamines the validity of the writing apprehension measure based on the finding from prior research that a second false factor was embedded. The findings from Study 1 support the validity of a reduced measure with 6 items versus the original 20-item…

  2. Cognitive—Motor Interference in an Ecologically Valid Street Crossing Scenario

    PubMed Central

    Janouch, Christin; Drescher, Uwe; Wechsler, Konstantin; Haeger, Mathias; Bock, Otmar; Voelcker-Rehage, Claudia

    2018-01-01

    Laboratory-based research revealed that gait involves higher cognitive processes, leading to performance impairments when executed with a concurrent loading task. Deficits are especially pronounced in older adults. Theoretical approaches like the multiple resource model highlight the role of task similarity and associated attention distribution problems. It has been shown that in cases where these distribution problems are perceived as relevant to participants' risk of falls, older adults prioritize gait and posture over the concurrent loading task. Here we investigate whether findings on task similarity and task prioritization can be transferred to an ecologically valid scenario. Sixty-three younger adults (20–30 years of age) and 61 older adults (65–75 years of age) participated in a virtual street crossing simulation. The participants' task was to identify suitable gaps that would allow them to cross a simulated two-way street safely. To this end, participants walked on a manual treadmill that transferred their forward motion to forward displacements in a virtual city. The task was presented as a single task (crossing only) and as a multitask. In the multitask condition participants were asked, among others, to type in three-digit numbers that were presented either visually or auditorily. We found that for both age groups, street crossing as well as typing performance suffered under multitasking conditions. Impairments were especially pronounced for older adults (e.g., longer crossing initiation phase, more missed opportunities). However, younger and older adults did not differ in the speed and success rate of crossing. Further, deficits were stronger in the visual compared to the auditory task modality for most parameters. Our findings conform to earlier studies that found an age-related decline in multitasking performance in less realistic scenarios. However, task similarity effects were inconsistent and question the validity of the multiple resource model within ecologically valid scenarios. PMID:29774001

  3. Hierarchical Clustering on the Basis of Inter-Job Similarity as a Tool in Validity Generalization

    ERIC Educational Resources Information Center

    Mobley, William H.; Ramsay, Robert S.

    1973-01-01

    The present research was stimulated by three related problems frequently faced in validation research: viable procedures for combining similar jobs in order to assess the validity of various predictors, for assessing groups of jobs represented in previous validity studies, and for assessing the applicability of validity findings between units.…

  4. Measuring stakeholder participation in evaluation: an empirical validation of the Participatory Evaluation Measurement Instrument (PEMI).

    PubMed

    Daigneault, Pierre-Marc; Jacob, Steve; Tremblay, Joël

    2012-08-01

    Stakeholder participation is an important trend in the field of program evaluation. Although a few measurement instruments have been proposed, they either have not been empirically validated or do not cover the full content of the concept. This study is a first empirical validation of a measurement instrument that fully covers the content of participation, namely the Participatory Evaluation Measurement Instrument (PEMI). It specifically examines (1) the intercoder reliability of scores derived by two research assistants on published evaluation cases; (2) the convergence between the scores of coders and those of key respondents (i.e., authors); and (3) the convergence between the authors' scores on the PEMI and the Evaluation Involvement Scale (EIS). A purposive sample of 40 cases drawn from the evaluation literature was used to assess reliability. One author per case in this sample was then invited to participate in a survey; 25 fully usable questionnaires were received. Stakeholder participation was measured on nominal and ordinal scales. Cohen's κ, the intraclass correlation coefficient, and Spearman's ρ were used to assess reliability and convergence. Reliability results ranged from fair to excellent. Convergence between coders' and authors' scores ranged from poor to good. Scores derived from the PEMI and the EIS were moderately associated. Evidence from this study is strong in the case of intercoder reliability and ranges from weak to strong in the case of convergent validation. Globally, this suggests that the PEMI can produce scores that are both reliable and valid.
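    Of the reliability statistics used here, Cohen's κ is the simplest to sketch: the agreement observed between two coders, corrected for the agreement expected by chance from each coder's marginal rating frequencies. A minimal illustration (the category labels and ratings are invented for the example):

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal ratings of the same cases.

    coder_a, coder_b: parallel lists of category labels, one per case.
    Returns 1.0 for perfect agreement and ~0.0 for chance-level agreement.
    """
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    # proportion of cases on which the coders agree
    observed = sum(1 for a, b in zip(coder_a, coder_b) if a == b) / n
    # agreement expected by chance from the coders' marginal frequencies
    expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```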

  5. Results and current status of the NPARC alliance validation effort

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Jones, Ralph R.

    1996-01-01

    The NPARC Alliance is a partnership between the NASA Lewis Research Center (LeRC) and the USAF Arnold Engineering Development Center (AEDC) dedicated to the establishment of a national CFD capability, centered on the NPARC Navier-Stokes computer program. The three main tasks of the Alliance are user support, code development, and validation. The present paper is a status report on the validation effort. It describes the validation approach being taken by the Alliance. Representative results are presented for laminar and turbulent flat plate boundary layers, a supersonic axisymmetric jet, and a glancing shock/turbulent boundary layer interaction. Cases scheduled to be run in the future are also listed. The archive of validation cases is described, including information on how to access it via the Internet.

  6. Statistics of transmission eigenvalues in two-dimensional quantum cavities: Ballistic versus stochastic scattering

    NASA Astrophysics Data System (ADS)

    Rotter, Stefan; Aigner, Florian; Burgdörfer, Joachim

    2007-03-01

    We investigate the statistical distribution of transmission eigenvalues in phase-coherent transport through quantum dots. In two-dimensional ab initio simulations for both clean and disordered two-dimensional cavities, we find markedly different quantum-to-classical crossover scenarios for these two cases. In particular, we observe the emergence of “noiseless scattering states” in clean cavities, irrespective of sharp-edged entrance and exit lead mouths. We find the onset of these “classical” states to be largely independent of the cavity’s classical chaoticity, but very sensitive with respect to bulk disorder. Our results suggest that for weakly disordered cavities, the transmission eigenvalue distribution is determined both by scattering at the disorder potential and the cavity walls. To properly account for this intermediate parameter regime, we introduce a hybrid crossover scheme, which combines previous models that are valid in the ballistic and the stochastic limit, respectively.

  7. Laser Techniques in Conservation of Artworks:. Problems and Breakthroughs

    NASA Astrophysics Data System (ADS)

    Salimbeni, Renzo; Siano, Salvatore

    2010-04-01

    More than thirty years after the first experiment in Venice, laser techniques have, in the last decade, been widely recognised as one of the most important innovations introduced into the conservation of artworks for diagnostic, restoration and monitoring purposes. The use of laser ablation for the delicate cleaning phase in particular was debated for many years because of the problems encountered in finding appropriate laser parameter settings. Many experiments carried out on stone, metals and pigments revealed unacceptable side effects, such as discoloration and yellowing after treatment, or poor cleaning productivity compared with other techniques. Many research projects organised at the European level have contributed to finding breakthroughs in laser techniques that avoid such problems. The choices of specific laser parameters better suited for the cleaning of stone, metals and pigments are described. A series of validation case studies is reported.

  8. Who is an expert? Competency evaluations in mental retardation and borderline intelligence.

    PubMed

    Siegert, Mark; Weiss, Kenneth J

    2007-01-01

    Evaluations of competency to stand trial (CST) in defendants with mental retardation or borderline intellectual functioning can be difficult when deficits are masked by the type of adaptations seen in many people with developmental disabilities. Accordingly, many evaluators have used validated test instruments, such as the CAST*MR (Competence Assessment to Stand Trial for Defendants with Mental Retardation) and tests measuring receptive and expressive language, to augment the clinical interview. The authors present a New Jersey case illustrating the need for clinicians to have adequate experience and training in some of the lesser known psychometric tests before presenting evidence in court. At the CST hearing, the judge disregarded the testimony of several psychologists while accepting that of a less experienced state's expert in order, we believe, to find the defendant competent. The finding was reversed on appeal. We encourage forensic professionals to be aware of the various instruments and minimum standards when employing specialized testing.

  9. Some Findings Concerning Requirements in Agile Methodologies

    NASA Astrophysics Data System (ADS)

    Rodríguez, Pilar; Yagüe, Agustín; Alarcón, Pedro P.; Garbajosa, Juan

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce the time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases within the frame of conventional methods. However, conclusions drawn from conventional methodologies are not necessarily valid for Agile; on some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and a better understanding of them is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project was intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and the management of requirements dependencies.

  10. Ferritin levels predict severe dengue.

    PubMed

    Soundravally, R; Agieshkumar, B; Daisy, M; Sherin, J; Cleetus, C C

    2015-02-01

    Currently, no tests are available to monitor and predict the severity and outcome of dengue. To find potential markers that predict dengue severity, the present study evaluated the serum levels of three acute-phase proteins, α-1 antitrypsin, ceruloplasmin and ferritin, in a pool of severe dengue cases compared with non-severe forms and other febrile illness controls. A total of 96 patients were divided into two equal groups, with group 'A' comprising dengue-infected cases and group 'B' comprising other febrile illness cases negative for dengue. Of the 48 dengue-infected cases, 13 had severe dengue and the remaining 35 were classified as non-severe dengue. Immunoassays were performed to evaluate the serum levels of the acute-phase proteins both on the day of admission and on the day of defervescence. The efficiency of each protein in predicting disease severity was assessed using a receiver operating characteristic curve. The study did not find any significant difference in the levels of α-1 antitrypsin between the clinical groups. A significant increase in the levels of ceruloplasmin around defervescence in severe cases compared with non-severe cases and other febrile controls was observed; this is the first report describing a potential association between ceruloplasmin and dengue severity. Interestingly, a steady increase in the level of serum ferritin was recorded throughout the course of illness. Among the three proteins, the elevated ferritin level predicted disease severity with the highest sensitivity and specificity, 76.9% and 83.3%, respectively, on the day of admission, rising to 90% and 91.6% around defervescence. On the basis of this diagnostic efficiency, we propose that ferritin may serve as a potential biomarker for early prediction of disease severity.
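    Choosing a predictive cutoff from an ROC analysis is commonly done by maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds. A sketch of that step (the ferritin values below are invented for the example, not the study's data):

```python
def best_cutoff(values_severe, values_nonsevere):
    """Choose a biomarker cutoff maximizing Youden's J, the usual
    single-number summary of an ROC analysis.

    values_severe, values_nonsevere: marker levels in the two groups;
    a case is called positive when its value is >= the cutoff.
    """
    best_j, best_c = -1.0, None
    for c in sorted(set(values_severe) | set(values_nonsevere)):
        sens = sum(v >= c for v in values_severe) / len(values_severe)
        spec = sum(v < c for v in values_nonsevere) / len(values_nonsevere)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```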

  11. Macular hole: 10 and 20-MHz ultrasound and spectral-domain optical coherence tomography.

    PubMed

    Bottós, Juliana Mantovani; Torres, Virginia Laura Lucas; Kanecadan, Liliane Andrade Almeida; Martinez, Andrea Alejandra Gonzalez; Moraes, Nilva Simeren Bueno; Maia, Mauricio; Allemann, Norma

    2012-01-01

    Optical coherence tomography (OCT) is valuable for macular evaluation. However, as this technique relies on light energy, it cannot be performed in the presence of opaque media. In such cases, ultrasound (US) may predict some macular features. The aim of this study was to characterize images obtained by ultrasound with 10- and 20-MHz transducers in comparison to OCT, and to analyze the relationship between the vitreous and retina in eyes with macular hole (MH). Twenty-nine eyes of 22 patients with biomicroscopic evidence of MH at different stages were included. All patients were evaluated using ultrasonography with 10- and 20-MHz transducers and OCT. OCT identified signs of MH in 25 of 29 eyes; the remaining 4 cases, not identified by US, were pseudoholes caused by epiretinal membranes. In MH stages I (2 eyes) and II (1 eye), neither transducer was useful for analyzing macular thickening, but suggestive findings such as macular irregularity, an operculum or partial posterior vitreous detachment (PVD) were highlighted. In stages III (14 eyes) and IV (5 eyes), both transducers identified the double-hump irregularity and thickening. US could measure the macular thickness and detect other findings suggestive of MH: operculum, vitreomacular traction and partial or complete PVD. In cases of pseudoholes, US identified irregularities in the macular contour and a discrete depression. The 10-MHz US was useful for an overall assessment of the vitreous body as well as its relationship to the retina, while the 20-MHz transducer provided valuable information on the vitreomacular interface and macular contour. OCT provides superior quality for the fine morphological study of the macular area, except in cases of opaque media. In these cases, and even if OCT is not available, the combined US study can provide a valid evaluation of the macular area and improve the therapeutic approach.

  12. Low back related leg pain: an investigation of construct validity of a new classification system.

    PubMed

    Schäfer, Axel G M; Hall, Toby M; Rolke, Roman; Treede, Rolf-Detlef; Lüdtke, Kerstin; Mallwitz, Joachim; Briffa, Kathryn N

    2014-01-01

    Leg pain is associated with back pain in 25-65% of all cases and classified as somatic referred pain or radicular pain. However, distinction between the two may be difficult as different pathomechanisms may cause similar patterns of pain. Therefore a pathomechanism based classification system was proposed, with four distinct hierarchical and mutually exclusive categories: Neuropathic Sensitization (NS) comprising major features of neuropathic pain with sensory sensitization; Denervation (D) arising from significant axonal compromise; Peripheral Nerve Sensitization (PNS) with marked nerve trunk mechanosensitivity; and Musculoskeletal (M) with pain referred from musculoskeletal structures. To investigate construct validity of the classification system. Construct validity was investigated by determining the relationship of nerve functioning with subgroups of patients and asymptomatic controls. Thus somatosensory profiles of subgroups of patients with low back related leg pain (LBRLP) and healthy controls were determined by a comprehensive quantitative sensory test (QST) protocol. It was hypothesized that subgroups of patients and healthy controls would show differences in QST profiles relating to underlying pathomechanisms. 77 subjects with LBRLP were recruited and classified in one of the four groups. Additionally, 18 age and gender matched asymptomatic controls were measured. QST revealed signs of pain hypersensitivity in group NS and sensory deficits in group D whereas Groups PNS and M showed no significant differences when compared to the asymptomatic group. These findings support construct validity for two of the categories of the new classification system, however further research is warranted to achieve construct validation of the classification system as a whole.

  13. Feasibility and validity of the structured attention module among economically disadvantaged preschool-age children.

    PubMed

    Bush, Hillary H; Eisenhower, Abbey; Briggs-Gowan, Margaret; Carter, Alice S

    2015-01-01

    Rooted in the theory of attention put forth by Mirsky, Anthony, Duncan, Ahearn, and Kellam (1991), the Structured Attention Module (SAM) is a developmentally sensitive, computer-based performance task designed specifically to assess sustained selective attention among 3- to 6-year-old children. The current study addressed the feasibility and validity of the SAM among 64 economically disadvantaged preschool-age children (mean age = 58 months; 55% female), a population known to be at risk for attention problems and adverse math performance outcomes. Feasibility was demonstrated by high completion rates and strong associations between SAM performance and age. Principal Factor Analysis with rotation produced robust support for a three-factor model (Accuracy, Speed, and Endurance) of SAM performance, which largely corresponded with existing theorized models of selective and sustained attention. Construct validity was evidenced by positive correlations between SAM Composite scores and all three SAM factors and IQ, and between SAM Accuracy and sequential memory. Value-added predictive validity was not confirmed through main effects of SAM on math performance above and beyond age and IQ; however, significant interactions by child sex were observed: Accuracy and Endurance both interacted with child sex to predict math performance. In both cases, the SAM factors predicted math performance more strongly for girls than for boys. There were no overall sex differences in SAM performance. In sum, the current findings suggest that interindividual variation in sustained selective attention, and potentially other aspects of attention and executive function, among young, high-risk children can be captured validly with developmentally sensitive measures.
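A factor structure of this kind can be explored with off-the-shelf tools. Below is a minimal sketch using scikit-learn's FactorAnalysis with varimax rotation on synthetic data; the variable count, the three-factor setup, and the data itself are illustrative assumptions, not the SAM or the study data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for per-child task scores: 9 observed variables
# driven by 3 latent factors (loosely, accuracy/speed/endurance).
latent = rng.normal(size=(200, 3))
loading = rng.normal(size=(3, 9))
observed = latent @ loading + 0.3 * rng.normal(size=(200, 9))

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = fa.fit_transform(observed)
print(fa.components_.shape)  # (3, 9): rotated loadings, factors x variables
print(scores.shape)          # (200, 3): per-child factor scores
```

Inspecting which observed variables load heavily on each rotated component is how a three-factor interpretation like Accuracy/Speed/Endurance would be read off in practice.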

  14. 42 CFR 456.655 - Validation of showings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Administrator will not find an agency's showing satisfactory if the information obtained through his validation... 42 Public Health 4 2010-10-01 2010-10-01 false Validation of showings. 456.655 Section 456.655... Showing of an Effective Institutional Utilization Control Program § 456.655 Validation of showings. (a...

  15. A case of misdiagnosis of mild cognitive impairment: The utility of symptom validity testing in an outpatient memory clinic.

    PubMed

    Roor, Jeroen J; Dandachi-FitzGerald, Brechje; Ponds, Rudolf W H M

    2016-01-01

    Noncredible symptom reports hinder the diagnostic process. This is especially true for medical conditions that rely on subjective symptom report rather than objective measures. Mild cognitive impairment (MCI) primarily relies on subjective report, which makes it potentially susceptible to erroneous diagnosis. In this case report, we describe a 59-year-old female patient diagnosed with MCI 10 years previously. The patient was referred to the neurology department for reexamination by her general practitioner because of cognitive complaints and persistent fatigue. This case study used information from the medical file, a new magnetic resonance imaging brain scan, and neuropsychological assessment. The current neuropsychological assessment, including symptom validity tests, clearly indicated noncredible test performance, thereby invalidating the obtained neuropsychological test data. We conclude that a blind spot for noncredible symptom reports existed in the previous diagnostic assessments. This case highlights the usefulness of formal symptom validity testing in the diagnostic assessment of MCI.

  16. Development and validation of the Salzburg COPD-screening questionnaire (SCSQ): a questionnaire development and validation study.

    PubMed

    Weiss, Gertraud; Steinacher, Ina; Lamprecht, Bernd; Kaiser, Bernhard; Mikes, Romana; Sator, Lea; Hartl, Sylvia; Wagner, Helga; Studnicka, M

    2017-01-26

    Chronic obstructive pulmonary disease prevalence rates are still high, yet the majority of affected subjects are not diagnosed. Strategies have to be implemented to overcome this under-diagnosis. Questionnaires could be used to pre-select subjects for spirometry and thereby help reduce under-diagnosis. We report a brief, simple, self-administrable and validated chronic obstructive pulmonary disease questionnaire to increase the pre-test probability of a chronic obstructive pulmonary disease diagnosis in subjects undergoing confirmatory spirometry. In 2005, we completed the Austrian Burden of Obstructive Lung Disease study in 1258 subjects aged >40 years. Post-bronchodilator spirometry was performed, and non-reversible airflow limitation was defined by an FEV1/FVC ratio below the lower limit of normal. Questions for the Salzburg chronic obstructive pulmonary disease screening questionnaire were selected using a logistic regression model, and risk scores were based on the regression coefficients. A training sub-sample (n = 800) was used to create the score, and a test sub-sample (n = 458) to test it. In 2008, an external validation study was conducted using the same protocol in 775 patients from primary care. The Salzburg chronic obstructive pulmonary disease screening questionnaire was composed of items related to "breathing problems", "wheeze", "cough", "limitation of physical activity", and "smoking". At the >=2-point cut-off, sensitivity was 69.1% [95% CI: 56.6%; 79.5%], specificity 60.0% [95% CI: 54.9%; 64.9%], the positive predictive value 23.2% [95% CI: 17.7%; 29.7%] and the negative predictive value 91.8% [95% CI: 87.5%; 95.7%] for detecting post-bronchodilator airflow limitation. The external validation study in primary care confirmed these findings. 
The Salzburg chronic obstructive pulmonary disease screening questionnaire was derived from the highly standardized Burden of Obstructive Lung Disease study. This validated and easy-to-use questionnaire can help increase the efficiency of chronic obstructive pulmonary disease case-finding. QUESTIONNAIRE FOR PRE-SCREENING POTENTIAL SUFFERERS: Scientists in Austria have developed a brief, simple questionnaire to identify patients likely to have early-stage chronic lung disease. Chronic obstructive pulmonary disease (COPD) is notoriously difficult to diagnose, and the condition often causes irreversible lung damage before it is identified. Finding a simple, cost-effective method of pre-screening patients with suspected early-stage COPD could potentially improve treatment responses and limit the burden of extensive lung function ('spirometry') tests on health services. Gertraud Weiss at Paracelsus Medical University, Austria, and co-workers have developed and validated an easy-to-use, self-administered questionnaire that could prove effective for pre-screening patients. The team trialed the five-point Salzburg COPD-screening questionnaire on 1258 patients. Patients scoring 2 points or above on the questionnaire underwent spirometry tests. The questionnaire seems to provide a sensitive and cost-effective way of pre-selecting patients for spirometry referral.
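The cut-off statistics reported for the questionnaire all derive from a single 2x2 table. A small illustrative sketch (toy scores and outcomes, not the study data) of how sensitivity, specificity, PPV and NPV fall out of a point-score questionnaire at a >=2 cut-off:

```python
def screening_metrics(scores, disease, cutoff=2):
    """2x2 screening metrics for a point-score questionnaire at a cutoff."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, disease))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, disease))
    fn = sum(s < cutoff and d for s, d in zip(scores, disease))
    tn = sum(s < cutoff and not d for s, d in zip(scores, disease))
    return {
        "sensitivity": tp / (tp + fn),  # diseased correctly flagged
        "specificity": tn / (tn + fp),  # healthy correctly cleared
        "ppv": tp / (tp + fp),          # flagged who are diseased
        "npv": tn / (tn + fn),          # cleared who are healthy
    }

# Toy data: questionnaire points and spirometry-confirmed airflow limitation.
m = screening_metrics([3, 1, 4, 0, 2, 5, 1, 2], [1, 0, 1, 0, 0, 1, 1, 0])
print(m)
```

With a low disease prevalence, as in the primary-care setting above, even decent sensitivity and specificity produce a modest PPV but a high NPV, which is exactly the pattern the abstract reports.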

  17. Open-access programs for injury categorization using ICD-9 or ICD-10.

    PubMed

    Clark, David E; Black, Adam W; Skavdahl, David H; Hallagan, Lee D

    2018-04-09

    This article introduces the Programs for Injury Categorization, using the International Classification of Diseases (ICD) and R statistical software (ICDPIC-R). Starting with ICD-8, methods have been described to map injury diagnosis codes to severity scores, especially the Abbreviated Injury Scale (AIS) and Injury Severity Score (ISS). ICDPIC was originally developed for this purpose using Stata, and ICDPIC-R is an open-access update that accepts both ICD-9 and ICD-10 codes. Data were obtained from the National Trauma Data Bank (NTDB), Admission Year 2015. ICDPIC-R derives CDC injury mechanism categories and an approximate ISS ("RISS") from either ICD-9 or ICD-10 codes. For ICD-9-coded cases, RISS is derived similarly to the Stata package (with some improvements reflecting user feedback). For ICD-10-coded cases, RISS may be calculated in several ways: the "GEM" methods convert ICD-10 to ICD-9 (using General Equivalence Mapping tables from CMS) and then calculate ISS with options similar to the Stata package; a "ROCmax" method calculates RISS directly from ICD-10 codes, based on diagnosis-specific mortality in the NTDB, maximizing the C-statistic for predicting NTDB mortality while attempting to minimize the difference between RISS and the ISS submitted by NTDB registrars (ISSAIS). Findings were validated using data from the National Inpatient Sample (NIS, 2015). NTDB contained 917,865 cases, of which 86,878 had valid ICD-10 injury codes. For a random 100,000 ICD-9-coded cases in NTDB, RISS using the GEM methods was nearly identical to ISS calculated by the Stata version, which has been previously validated. For ICD-10-coded cases in NTDB, categorized ISS using any version of RISS was similar to ISSAIS; for both NTDB and NIS cases, increasing ISS was associated with increasing mortality. 
Prediction of NTDB mortality was associated with C-statistics of 0.81 for ISSAIS, 0.75 for RISS using the GEM methods, and 0.85 for RISS using the ROCmax method; prediction of NIS mortality was associated with C-statistics of 0.75-0.76 for RISS using the GEM methods, and 0.78 for RISS using the ROCmax method. Instructions are provided for accessing ICDPIC-R at no cost. The ideal methods of injury categorization and injury severity scoring involve trained personnel with access to injured persons or their medical records. ICDPIC-R may be a useful substitute when this ideal cannot be obtained.
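The C-statistics quoted above have a simple rank-based interpretation: the probability that a randomly chosen death carries a higher severity score than a randomly chosen survivor, counting ties as half. A minimal sketch (toy scores, not NTDB data):

```python
def c_statistic(scores, died):
    """C-statistic (ROC AUC) as the probability that a randomly chosen
    death has a higher severity score than a randomly chosen survivor;
    ties count half."""
    pos = [s for s, d in zip(scores, died) if d]
    neg = [s for s, d in zip(scores, died) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy ISS-like scores: one survivor (16) outscores one death (9).
print(round(c_statistic([25, 16, 9, 4, 34, 1], [1, 0, 1, 0, 1, 0]), 3))  # 0.889
```

This is the quantity the ROCmax method maximizes when tuning the ICD-10-to-severity mapping against NTDB mortality.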

  18. Zig-zag tape influence in NREL Phase VI wind turbine

    NASA Astrophysics Data System (ADS)

    Gomez-Iradi, Sugoi; Munduate, Xabier

    2014-06-01

    A two-bladed, 10-metre-diameter wind turbine was tested in the 24.4 m × 36.6 m NASA-Ames wind tunnel (Phase VI). These experiments have been used extensively for validating CFD and other engineering tools. The free-transition case (S) has been, and remains, the one most employed for validation purposes; it consists of a 3° pitch case at a rotational speed of 72 rpm in upwind configuration, with and without yaw misalignment. However, there is another, less visited case (M) in which an identical configuration was tested but with the inclusion of a zig-zag tape; this was called the fixed-transition sequence. This paper shows the differences between the free- and fixed-transition cases, the latter being more appropriate for comparison with fully turbulent simulations. Steady k-ω SST fully turbulent computations performed with the WMB CFD method are compared with the experiments, showing better predictions in the attached-flow region when compared against the fixed-transition experiments. This work aims to demonstrate the utility of the M case (fixed transition) and to show its differences with respect to the S case (free transition) for validation purposes.

  19. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    PubMed

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    - A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. - To establish new benchmark data on IHC laboratory practices. - A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. - The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. - Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  20. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty of fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a); model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed, based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores for how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures lead to the appropriate decision about model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate for evaluating model realizations. 
The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov chain Monte Carlo. The approach shows great potential to aid the validation process and to incorporate prior knowledge with new field data to derive posterior distributions for both model input and output.
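The Markov chain Monte Carlo step can be illustrated with a toy random-walk Metropolis sampler: a single hypothetical head observation constrains a model input parameter theta under a standard-normal prior. The linear forward model, noise level, and values below are assumptions for illustration, not the study's model:

```python
import math
import random

random.seed(0)
a, sigma = 2.0, 0.5        # assumed forward model: head = a*theta + N(0, sigma^2)
y_obs = 1.8                # single hypothetical validation observation

def log_post(theta):
    log_prior = -0.5 * theta ** 2                        # theta ~ N(0, 1)
    log_lik = -0.5 * ((y_obs - a * theta) / sigma) ** 2  # Gaussian likelihood
    return log_prior + log_lik

samples, theta = [], 0.0
for _ in range(20000):
    prop = theta + random.gauss(0, 0.5)                  # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                                     # Metropolis accept
    samples.append(theta)

post_mean = sum(samples[5000:]) / 15000.0                # discard burn-in
print(round(post_mean, 2))
```

For this conjugate toy case the posterior is available analytically (mean 14.4/17, approximately 0.847), so the sampler's estimate can be checked directly; for a real groundwater model the likelihood would come from running the forward simulation.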

  1. Assessing the Use of the Child Attachment Interview in a Sample of Israeli Jewish Children.

    PubMed

    Baumel, Amit; Wolmer, Leo; Laor, Nathaniel; Toren, Paz

    2016-01-01

    This manuscript assesses the use of the Child Attachment Interview (CAI) in a sample of Israeli Jewish children in middle childhood in order to add to the empirical data on this measure. Forty-one children between the ages of 7 and 13 were consecutively recruited to the study. The clinical sample included 29 children diagnosed with anxiety disorder, major depression or ADHD; the Father Focused Referral (FFR) sample included 12 children whose father was unavailable to them. Participants were administered the CAI and coded by certified personnel. 81.4% concordance was found between maternal and paternal secure-insecure attachment classifications in the clinical sample. 100% of the children in the FFR group were classified as insecurely attached to their fathers, suggesting convergent validity for the classification of father attachment; 45.4% of the children in the FFR sample were also classified as insecurely attached to their mothers, pointing to the difference that can exist between the two parental attachment classifications and therefore to sufficient discriminant validity between them. The clinical sample concordance rate, which was lower than in previous studies, indicates that parental concordance rates should be further investigated across different samples and countries. The finding that parental attachment classifications can differ shows the instrument's relevance in cases in which the parental representations may differ; in such cases, an instrument that does not examine attachment toward both parents might not suffice. Study limitations and further implications are discussed.

  2. Assessment of land use factors associated with dengue cases in Malaysia using Boosted Regression Trees.

    PubMed

    Cheong, Yoon Ling; Leitão, Pedro J; Lakes, Tobia

    2014-07-01

    The transmission of dengue disease is influenced by complex interactions among vector, host and virus. Land uses such as water bodies or certain agricultural practices have been identified as likely risk factors for dengue because they provide suitable habitats for the vector. Many studies have focused on land use factors related to dengue vector abundance in small areas but have not yet examined the relationship between land use factors and dengue cases across large regions. This study aims to clarify whether land use factors other than human settlements, e.g. different types of agricultural land use, water bodies and forest, are associated with reported dengue cases from 2008 to 2010 in the state of Selangor, Malaysia, and, from the correlative relationship, to generate a prediction risk map. We used Boosted Regression Trees (BRT) to account for nonlinearities and interactions between the factors with high predictive accuracy. Our model, with a cross-validated performance score (Area Under the Receiver Operating Characteristic Curve, ROC AUC) of 0.81, showed that the most important land use factor is human settlements (model importance of 39.2%), followed by water bodies (16.1%), mixed horticulture (8.7%), open land (7.5%) and neglected grassland (6.7%). A risk map from 100 model runs with a cross-validated ROC AUC mean of 0.81 (±0.001 s.d.) is presented. Our findings may be an important asset for improving surveillance and control interventions for dengue. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
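A boosted-tree workflow of this kind, with cross-validated ROC AUC and per-covariate importances, can be sketched with scikit-learn's gradient boosting. The covariates and outcome below are synthetic stand-ins, not the Selangor data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic stand-in for spatial units: 5 hypothetical land use covariates
# (e.g. settlement density, water-body share, horticulture share, ...).
X = rng.random((400, 5))
# Dengue presence made more likely where the first two covariates are high.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=400) > 0.9).astype(int)

brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, subsample=0.7, random_state=0)
auc = cross_val_score(brt, X, y, cv=5, scoring="roc_auc").mean()
brt.fit(X, y)
print(round(auc, 2))             # cross-validated ROC AUC
print(brt.feature_importances_)  # relative importance per covariate
```

The normalized `feature_importances_` vector plays the role of the "model importance" percentages reported in the abstract, and predicted probabilities over a spatial grid would form the risk map.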

  3. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
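A back-of-the-envelope way to see how a corrected incidence follows from the validation figures: scale the algorithm-based rate by PPV to discard false positives, then divide by sensitivity to recover the cases the algorithm missed. This simple approximation is an assumption for illustration; the paper's own correction is more careful and yields 5.6:

```python
def corrected_incidence(observed_incidence, ppv, sensitivity):
    """Adjust an algorithm-based incidence estimate: keep only true
    positives (x PPV), then inflate for cases the algorithm missed
    (/ sensitivity)."""
    return observed_incidence * ppv / sensitivity

# Figures from the abstract: 6.8/100,000 person-years, PPV 42.2%,
# sensitivity 55.6% -> roughly 5.2/100,000 by this approximation.
print(round(corrected_incidence(6.8, 0.422, 0.556), 1))  # 5.2
```

The gap between the naive 5.2 and the reported 5.6 illustrates why the authors estimate the correction with wide confidence intervals (2.6-10.3) rather than a point adjustment.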

  4. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  5. Contrast enhanced dual energy spectral mammogram, an emerging addendum in breast imaging

    PubMed Central

    Gnanaprakasam, Francis; Anand, Subhapradha; Krishnaswami, Murali; Ramachandran, Madan

    2016-01-01

    Objective: To assess the role of contrast-enhanced dual-energy spectral mammogram (CEDM) as a problem-solving tool in equivocal cases. Methods: 44 consenting females with equivocal findings on full-field digital mammogram underwent CEDM. All images were interpreted by two radiologists independently. Confidence of presence was rated on a three-point Likert scale and probability of cancer was assigned using Breast Imaging Reporting and Data System scoring. Histopathology was taken as the gold standard. Statistical analyses of all variables were performed. Results: 44 breast lesions were included in the study, among which 77.3% were malignant or precancerous and 22.7% were benign or inconclusive. 20% of lesions were identified only on CEDM. The true extent of the lesion was made out in 15.9% of cases, multifocality was established in 9.1% of cases and ductal extension was demonstrated in 6.8% of cases. CEDM findings were statistically significant (p < 0.05); the interobserver kappa value was 0.837. Conclusion: CEDM has a useful role in identifying occult lesions in dense breasts and in triaging lesions. For a mammographically visible lesion, CEDM characterizes the lesion, affirms the finding and better demonstrates response to treatment. Hence, we conclude that CEDM is a useful complementary tool to the standard mammogram. Advances in knowledge: CEDM can detect and demonstrate lesions even in dense breasts, with the advantage that stereotactic biopsy is feasible in the same setting. Hence, it has the potential to become a screening modality, pending further studies and validation. PMID:27610475

  6. [Case finding in early prevention networks - a heuristic for ambulatory care settings].

    PubMed

    Barth, Michael; Belzer, Florian

    2016-06-01

    One goal of early prevention is to support families with small children up to three years of age who are exposed to psychosocial risks. Identifying these cases is often complex and poorly directed, especially in the ambulatory care setting. The aim was to develop a feasible, empirically based model for case finding in ambulatory care. Based on the risk factors of postpartum depression, lack of maternal responsiveness, parental stress with regulation disorders, and poverty, a lexicographic, non-compensatory heuristic model with simple decision rules was constructed and empirically tested. To this end, the original data set from an evaluation of the pediatric documentation form on psychosocial issues of families with small children in well-child visits was reanalyzed. The first diagnostic step in the non-compensatory, hierarchical classification process is the assessment of postpartum depression, followed by maternal responsiveness, parental stress and poverty. The classification model identifies 89.0% of the cases from the original study. Compared to the original study, the decision process becomes clearer and more concise. The evidence-based, data-driven model exemplifies a strategy for assessing psychosocial risk factors in ambulatory care settings. It is based on four evidence-based risk factors and offers a quick and reliable classification. A further advantage of this model is that once a risk factor is identified the diagnostic procedure stops and the counselling process can commence. For further validation of the model, studies in well-suited early prevention networks are needed.

  7. Connectedness among Taiwanese Middle School Students: A Validation Study of the Hemingway Measure of Adolescent Connectedness.

    ERIC Educational Resources Information Center

    Karcher, Michael J.; Lee, Yun

    2002-01-01

    Examines the psychometric properties of the Hemingway Measure of Adolescent Connectedness among 320 Taiwanese junior high school students. Finds that connectedness measure subscales and composite scales demonstrated acceptable reliability and concurrent validity. Also finds, among other things, that girls report more connectedness to school than…

  8. Development and Validation of the Evidence-Based Practice Process Assessment Scale: Preliminary Findings

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2010-01-01

    Objective: This report describes the development and preliminary findings regarding the reliability, validity, and sensitivity of a scale that has been developed to assess practitioners' perceived familiarity with, attitudes about, and implementation of the phases of the evidence-based practice (EBP) process. Method: After a panel of national…

  9. Finding Kids with Special Needs: the Background, Development, Field Test and Validation.

    ERIC Educational Resources Information Center

    Resource Management Systems, Inc., Carmel, CA.

    Described are the development of "Finding Kids with Special Needs" (FKSN), an instrument to identify children's learning problems and gifted students; results of field testing with 24,825 children, kindergarten through grade 8, in 110 schools; and validation procedures. Discussed is test construction, including incorporation of 12…

  10. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound.
Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature. PMID:26735851
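The prioritization idea can be caricatured in a few lines: known compound-target bioactivity "flows" one hop through a similarity layer (orthology, shared domains and shared pathways collapsed into a single weight) to score proteins with no known ligands. All names and weights below are hypothetical, and this one-hop scoring is a simplification of the paper's multilayer model:

```python
# Hypothetical weighted edges in a two-layer graph.
bioactivity = {("cpd1", "protA"): 0.9, ("cpd1", "protB"): 0.4,
               ("cpd2", "protA"): 0.7}               # compound -> known target
similarity = {("protA", "protC"): 0.8, ("protB", "protC"): 0.3,
              ("protA", "protD"): 0.2}               # protein <-> protein

def prioritize(candidates):
    """Score untargeted proteins by bioactivity flowing one hop through
    the (undirected) similarity layer from known drug targets."""
    scores = {}
    for cand in candidates:
        s = 0.0
        for (cpd, tgt), w in bioactivity.items():
            for edge, sim in similarity.items():
                if set(edge) == {tgt, cand}:
                    s += w * sim
        scores[cand] = s
    return sorted(scores, key=scores.get, reverse=True)

print(prioritize(["protC", "protD"]))  # ['protC', 'protD']
```

Here protC ranks first because it is strongly similar to protA, the protein with the most bioactivity evidence; in the real model this score would aggregate many layers and species at once.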

  11. Developing a dengue forecast model using machine learning: A case study in China.

    PubMed

    Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-10-01

    In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics.
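The two evaluation measures named above, RMSE and R-squared, can be sketched directly; the weekly case counts below are invented for illustration and are not the Guangdong surveillance data:

```python
import math

def rmse(observed, predicted):
    # Root-mean-square error: square root of the mean squared residual.
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def r_squared(observed, predicted):
    # R-squared: 1 - (residual sum of squares / total sum of squares).
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical weekly dengue counts vs. one model's forecasts
obs = [12, 18, 25, 40, 33]
pred = [10, 20, 24, 38, 35]
print(round(rmse(obs, pred), 3), round(r_squared(obs, pred), 3))
```

In the study, the candidate models were ranked on exactly these criteria: lower RMSE and higher R-squared favoured the SVR model.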

  12. Navigating a ship with a broken compass: evaluating standard algorithms to measure patient safety.

    PubMed

    Hefner, Jennifer L; Huerta, Timothy R; McAlearney, Ann Scheck; Barash, Barbara; Latimer, Tina; Moffatt-Bruce, Susan D

    2017-03-01

Agency for Healthcare Research and Quality (AHRQ) software applies standardized algorithms to hospital administrative data to identify patient safety indicators (PSIs). The objective of this study was to assess the validity of PSI flags and report reasons for invalid flagging. At a 6-hospital academic medical center, a retrospective analysis was conducted of all PSIs flagged in fiscal year 2014. A multidisciplinary PSI Quality Team reviewed each flagged PSI based on quarterly reports. The positive predictive value (PPV, the percent of clinically validated cases) was calculated for 12 PSI categories. The documentation for each reversed case was reviewed to determine the reasons for PSI reversal. Of 657 PSI flags, 185 were reversed. Seven PSI categories had a PPV below 75%. Four broad categories of reasons for reversal were AHRQ algorithm limitations (38%), coding misinterpretations (45%), present upon admission (10%), and documentation insufficiency (7%). AHRQ algorithm limitations included 2 subcategories: an "incident" was inherent to the procedure or highly likely (eg, vascular tumor bleed), or an "incident" was nonsignificant, easily controlled, and/or no intervention was needed. These findings support previous research highlighting administrative data problems. Additionally, "AHRQ algorithm limitations" was an emergent category not considered in previous research. Herein we present potential solutions to address these issues. If, despite poor validity, US policy continues to rely on PSIs for incentive and penalty programs, improvements are needed in the quality of administrative data and the standardized PSI algorithms. These solutions require national motivation, research attention, and dissemination support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
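The positive predictive value used in this study is a simple ratio of validated flags to total flags. Using the overall counts reported in the abstract (657 PSI flags, 185 reversed on clinical review):

```python
def positive_predictive_value(flagged, reversed_on_review):
    # PPV = clinically validated flags / total flags
    return (flagged - reversed_on_review) / flagged

# Overall figures from the study; the team also computed PPV per PSI category.
ppv = positive_predictive_value(657, 185)
print(f"{ppv:.1%}")  # → 71.8%
```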

  13. Validation of Heart Failure Events in the Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) Participants Assigned to Doxazosin and Chlorthalidone

    PubMed Central

    Piller, Linda B; Davis, Barry R; Cutler, Jeffrey A; Cushman, William C; Wright, Jackson T; Williamson, Jeff D; Leenen, Frans HH; Einhorn, Paula T; Randall, Otelio S; Golden, John S; Haywood, L Julian

    2002-01-01

Background The Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) is a randomized, double-blind, active-controlled trial designed to compare the rate of coronary heart disease events in high-risk hypertensive participants initially randomized to a diuretic (chlorthalidone) versus each of three alternative antihypertensive drugs: alpha-adrenergic blocker (doxazosin), ACE-inhibitor (lisinopril), and calcium-channel blocker (amlodipine). Combined cardiovascular disease risk was significantly increased in the doxazosin arm compared to the chlorthalidone arm (RR 1.25; 95% CI, 1.17–1.33; P < .001), with a doubling of heart failure (fatal, hospitalized, or non-hospitalized but treated) (RR 2.04; 95% CI, 1.79–2.32; P < .001). Questions about heart failure diagnostic criteria led to steps to validate these events further. Methods and Results Baseline characteristics (age, race, sex, blood pressure) did not differ significantly between treatment groups (P > .05) for participants with heart failure events. Post-event pharmacologic management was similar in both groups and generally conformed to accepted heart failure therapy. Central review of a small sample of cases showed high adherence to ALLHAT heart failure criteria. Of 105 participants with quantitative ejection fraction measurements provided (67% by echocardiogram, 31% by catheterization), 29/46 (63%) from the chlorthalidone group and 41/59 (70%) from the doxazosin group were at or below 40%. Two-year heart failure case-fatalities (22% and 19% in the doxazosin and chlorthalidone groups, respectively) were as expected and did not differ significantly (RR 0.96; 95% CI, 0.67–1.38; P = 0.83). Conclusion Results of the validation process supported findings of increased heart failure in the ALLHAT doxazosin treatment arm compared to the chlorthalidone treatment arm. PMID:12459039

  14. An Administrative Claims Model for Profiling Hospital 30-Day Mortality Rates for Pneumonia Patients

    PubMed Central

    Bratzler, Dale W.; Normand, Sharon-Lise T.; Wang, Yun; O'Donnell, Walter J.; Metersky, Mark; Han, Lein F.; Rapp, Michael T.; Krumholz, Harlan M.

    2011-01-01

    Background Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Methodology/Principal Findings Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998–2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998–2001. The final model included 31 variables and had an area under the Receiver Operating Characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (Standard Error = 0.032). Conclusions/Significance An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model. PMID:21532758
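The abstract does not spell out the standardization formula, but a common formulation in CMS-style hospital profiling divides the model-"predicted" deaths (using the hospital-specific random effect from the hierarchical model) by the "expected" deaths (using the average-hospital effect) and scales by the overall rate. The hospital counts below are hypothetical; 17.4% is the median standardized rate reported in the abstract:

```python
def risk_standardized_mortality(predicted_deaths, expected_deaths, overall_rate):
    # "Predicted" includes the hospital-specific effect from the hierarchical
    # model; "expected" replaces it with the average-hospital effect.
    return (predicted_deaths / expected_deaths) * overall_rate

# Hypothetical hospital: 30 predicted vs. 25 expected deaths, scaled by the
# 17.4% median rate from the abstract.
rsmr = risk_standardized_mortality(30.0, 25.0, 0.174)
print(f"{rsmr:.1%}")  # → 20.9%
```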

  15. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10⁵ compounds and several functional relations among 1.67 × 10⁵ proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. 
Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature.
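A stripped-down sketch of the guilt-by-association idea behind such a network model: score candidate pathogen proteins by their weighted links to proteins that already have known bioactive compounds. All node names and edge weights below are invented for illustration:

```python
from collections import defaultdict

# Toy weighted protein-protein layer; in the paper, edge weights derive from
# orthology, shared domains, and shared pathway membership.
edges = {
    ("pathogen_P1", "human_H1"): 0.9,   # e.g., orthology
    ("pathogen_P1", "human_H2"): 0.4,   # e.g., shared domain
    ("pathogen_P2", "human_H2"): 0.7,
    ("pathogen_P2", "yeast_Y1"): 0.3,
}
known_targets = {"human_H1", "human_H2"}  # proteins with known compound associations

def prioritize(edges, known_targets):
    # Rank candidate proteins by their total edge weight to known targets.
    scores = defaultdict(float)
    for (a, b), w in edges.items():
        if b in known_targets:
            scores[a] += w
        if a in known_targets:
            scores[b] += w
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(prioritize(edges, known_targets))
```

Here pathogen_P1 outranks pathogen_P2 because it accumulates more weighted evidence from proteins with known chemistry.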

  16. Stromal mast cells in invasive breast cancer are a marker of favourable prognosis: a study of 4,444 cases.

    PubMed

    Rajput, Ashish B; Turbin, Dmitry A; Cheang, Maggie Cu; Voduc, David K; Leung, Sam; Gelmon, Karen A; Gilks, C Blake; Huntsman, David G

    2008-01-01

    We have previously demonstrated in a pilot study of 348 invasive breast cancers that mast cell (MC) infiltrates within primary breast cancers are associated with a good prognosis. Our aim was to verify this finding in a larger cohort of invasive breast cancer patients and examine the relationship between the presence of MCs and other clinical and pathological features. Clinically annotated tissue microarrays (TMAs) containing 4,444 cases were constructed and stained with c-Kit (CD-117) using standard immunoperoxidase techniques to identify and quantify MCs. For statistical analysis, we applied a split-sample validation technique. Breast cancer specific survival was analyzed by Kaplan-Meier [KM] method and log rank test was used to compare survival curves. Survival analysis by KM method showed that the presence of stromal MCs was a favourable prognostic factor in the training set (P = 0.001), and the validation set group (P = 0.006). X-tile plot generated to define the optimal number of MCs showed that the presence of any number of stromal MCs predicted good prognosis. Multivariate analysis showed that the MC effect in the training set (Hazard ratio [HR] = 0.804, 95% Confidence interval [CI], 0.653-0.991, P = 0.041) and validation set analysis (HR = 0.846, 95% CI, 0.683-1.049, P = 0.128) was independent of age, tumor grade, tumor size, lymph node, ER and Her2 status. This study concludes that stromal MC infiltration in invasive breast cancer is an independent good prognostic marker and reiterates the critical role of local inflammatory responses in breast cancer progression.
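The Kaplan-Meier method used for the survival analysis can be computed from first principles; the small cohort below is invented, not the TMA data:

```python
def kaplan_meier(times, events):
    # times: follow-up time per subject; events: 1 = event (death), 0 = censored.
    data = sorted(zip(times, events))
    curve, s, i = [], 1.0, 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1 - deaths / at_risk   # survival drops only at event times
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)  # skip ties at time t
    return curve

# Toy cohort (months of follow-up; 1 = breast-cancer death, 0 = censored)
times, events = [6, 7, 10, 15, 19, 25], [1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))
```

Censored subjects leave the risk set without dropping the curve, which is why the estimator handles incomplete follow-up; the log-rank test then compares two such curves (e.g., MC-positive vs. MC-negative).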

  17. Stromal mast cells in invasive breast cancer are a marker of favourable prognosis: a study of 4,444 cases

    PubMed Central

    Rajput, Ashish B.; Turbin, Dmitry A.; Cheang, Maggie CU; Voduc, David K.; Leung, Sam; Gelmon, Karen A.; Gilks, C. Blake

    2007-01-01

    Purpose We have previously demonstrated in a pilot study of 348 invasive breast cancers that mast cell (MC) infiltrates within primary breast cancers are associated with a good prognosis. Our aim was to verify this finding in a larger cohort of invasive breast cancer patients and examine the relationship between the presence of MCs and other clinical and pathological features. Experimental design Clinically annotated tissue microarrays (TMAs) containing 4,444 cases were constructed and stained with c-Kit (CD-117) using standard immunoperoxidase techniques to identify and quantify MCs. For statistical analysis, we applied a split-sample validation technique. Breast cancer specific survival was analyzed by Kaplan–Meier [KM] method and log rank test was used to compare survival curves. Results Survival analysis by KM method showed that the presence of stromal MCs was a favourable prognostic factor in the training set (P = 0.001), and the validation set group (P = 0.006). X-tile plot generated to define the optimal number of MCs showed that the presence of any number of stromal MCs predicted good prognosis. Multivariate analysis showed that the MC effect in the training set (Hazard ratio [HR] = 0.804, 95% Confidence interval [CI], 0.653–0.991, P = 0.041) and validation set analysis (HR = 0.846, 95% CI, 0.683–1.049, P = 0.128) was independent of age, tumor grade, tumor size, lymph node, ER and Her2 status. Conclusions This study concludes that stromal MC infiltration in invasive breast cancer is an independent good prognostic marker and reiterates the critical role of local inflammatory responses in breast cancer progression. PMID:17431762

  18. A naive Bayes algorithm for tissue origin diagnosis (TOD-Bayes) of synchronous multifocal tumors in the hepatobiliary and pancreatic system.

    PubMed

    Jiang, Weiqin; Shen, Yifei; Ding, Yongfeng; Ye, Chuyu; Zheng, Yi; Zhao, Peng; Liu, Lulu; Tong, Zhou; Zhou, Linfu; Sun, Shuo; Zhang, Xingchen; Teng, Lisong; Timko, Michael P; Fan, Longjiang; Fang, Weijia

    2018-01-15

    Synchronous multifocal tumors are common in the hepatobiliary and pancreatic system but because of similarities in their histological features, oncologists have difficulty in identifying their precise tissue clonal origin through routine histopathological methods. To address this problem and assist in more precise diagnosis, we developed a computational approach for tissue origin diagnosis based on naive Bayes algorithm (TOD-Bayes) using ubiquitous RNA-Seq data. Massive tissue-specific RNA-Seq data sets were first obtained from The Cancer Genome Atlas (TCGA) and ∼1,000 feature genes were used to train and validate the TOD-Bayes algorithm. The accuracy of the model was >95% based on tenfold cross validation by the data from TCGA. A total of 18 clinical cancer samples (including six negative controls) with definitive tissue origin were subsequently used for external validation and 17 of the 18 samples were classified correctly in our study (94.4%). Furthermore, we included as cases studies seven tumor samples, taken from two individuals who suffered from synchronous multifocal tumors across tissues, where the efforts to make a definitive primary cancer diagnosis by traditional diagnostic methods had failed. Using our TOD-Bayes analysis, the two clinical test cases were successfully diagnosed as pancreatic cancer (PC) and cholangiocarcinoma (CC), respectively, in agreement with their clinical outcomes. Based on our findings, we believe that the TOD-Bayes algorithm is a powerful novel methodology to accurately identify the tissue origin of synchronous multifocal tumors of unknown primary cancers using RNA-Seq data and an important step toward more precision-based medicine in cancer diagnosis and treatment. © 2017 UICC.
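A minimal Gaussian naive Bayes classifier illustrates the idea behind TOD-Bayes. The two-feature "expression" profiles and class labels below are invented; the actual model was trained on ~1,000 TCGA feature genes:

```python
import math

def train_gaussian_nb(X, y):
    # Estimate per-class priors and per-feature Gaussian parameters.
    stats = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        params = []
        for j in range(len(X[0])):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9  # smoothed
            params.append((mu, var))
        stats[c] = (len(rows) / len(X), params)
    return stats

def predict(stats, x):
    # Choose the class maximizing log prior + Gaussian log-likelihood,
    # i.e., naive Bayes with conditionally independent features.
    def log_posterior(c):
        prior, params = stats[c]
        ll = math.log(prior)
        for xi, (mu, var) in zip(x, params):
            ll += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        return ll
    return max(stats, key=log_posterior)

# Invented two-gene expression profiles for two candidate tissue origins.
X = [[5.1, 0.2], [4.9, 0.1], [1.0, 3.4], [1.2, 3.1]]
y = ["pancreas", "pancreas", "bile_duct", "bile_duct"]
model = train_gaussian_nb(X, y)
print(predict(model, [5.0, 0.3]))  # → pancreas
```

In the study, the same train/predict cycle was repeated in tenfold cross validation to obtain the >95% accuracy estimate.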

  19. Clinical endpoint adjudication in a contemporary all-comers coronary stent investigation: methodology and external validation.

    PubMed

    Vranckx, Pascal; McFadden, Eugene; Cutlip, Donald E; Mehran, Roxana; Swart, Michael; Kint, P P; Zijlstra, Felix; Silber, Sigmund; Windecker, Stephan; Serruys, Patrick W C J

    2013-01-01

Globalisation in coronary stent research calls for harmonization of clinical endpoint definitions and event adjudication. Little has been published about the various processes used for event adjudication or their impact on outcome reporting. We performed a validation of the clinical event committee (CEC) adjudication process on 100 suspected events in the RESOLUTE All-comers trial (Resolute-AC). Two experienced Clinical Research Organisations (CROs) that already had extensive internal validation processes in place participated in the study. After initial adjudication by the primary CEC, events were cross-adjudicated by an external CEC using the same definitions. Major discrepancies affecting the primary end point of target-lesion failure (TLF), a composite of cardiac death, target vessel myocardial infarction (TV-MI), or clinically-indicated target-lesion revascularization (CI-TLR), were analysed by an independent oversight committee, which provided recommendations for harmonization. Discordant adjudications were reconsidered by the primary CEC. Subsequently, the Resolute-AC database was interrogated for cases that, based on these recommendations, merited re-adjudication, and these cases were also re-adjudicated by the primary CEC. Final discrepancies in adjudication of individual components of TLF occurred in 7 out of 100 events in 5 patients. Discrepancies for the (hierarchical) primary endpoint occurred in 5 events (2 cardiac deaths and 3 TV-MI). After application of harmonization recommendations to the overall Resolute-AC population (n=2292), the primary CEC adjudicated 3 additional clinical TLRs and considered 1 TV-MI as no event. A harmonization process provided a high level of concordance for event adjudication and improved accuracy for final event reporting. These findings suggest it is feasible to pool clinical event outcome data across clinical trials even when different CECs are responsible for event adjudication. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. A Scotland-wide pilot programme of smoking cessation services for young people: process and outcome evaluation.

    PubMed

    Gnich, Wendy; Sheehy, Christine; Amos, Amanda; Bitel, Mark; Platt, Stephen

    2008-11-01

    To conduct an independent, external evaluation of a Scotland-wide youth cessation pilot programme, focusing upon service uptake and effectiveness. National Health Service (NHS) Health Scotland and Action on Smoking and Health (ASH) Scotland funded a 3-year (2002-2005) national pilot programme comprising eight projects which aimed to engage with and support young smokers (aged 12-25 years) to quit. Process evaluation was undertaken via detailed case studies comprising qualitative interviews, observation and documentary analysis. Outcomes were assessed by following project participants (n=470 at baseline) at 3 and 12 months and measuring changes in smoking behaviour, including carbon monoxide (CO)-validated quit status. Recruitment proved difficult. Considerable time and effort were needed to attract young smokers. Advertising and recruitment had to be tailored to project settings and educational activities proved essential to raise the profile of smoking as an issue. Thirty-nine participants [8.6%, 95% confidence interval (CI) 5.0-11.2%] were CO-validated quitters at 3 months and 11 of these (2.4%, 95% CI 1.90-3.8%) were also validated quitters at 12 months. Older participants were more likely to be abstinent at 3 months. The overall quit rate was disappointing. As a result of low participant numbers, it was impossible to draw conclusions about the relative effectiveness of different project approaches. These findings give little support to the case for developing dedicated youth cessation services in Scotland. They also highlight the difficulties of undertaking 'real-world' evaluations of pilot youth cessation projects. More action is needed to develop environments which enhance young smokers' motivation to quit and their ability to sustain quit attempts.

  1. Risk assessment for juvenile justice: a meta-analysis.

    PubMed

    Schwalbe, Craig S

    2007-10-01

    Risk assessment instruments are increasingly employed by juvenile justice settings to estimate the likelihood of recidivism among delinquent juveniles. In concert with their increased use, validation studies documenting their predictive validity have increased in number. The purpose of this study was to assess the average predictive validity of juvenile justice risk assessment instruments and to identify risk assessment characteristics that are associated with higher predictive validity. A search of the published and grey literature yielded 28 studies that estimated the predictive validity of 28 risk assessment instruments. Findings of the meta-analysis were consistent with effect sizes obtained in larger meta-analyses of criminal justice risk assessment instruments and showed that brief risk assessment instruments had smaller effect sizes than other types of instruments. However, this finding is tentative owing to limitations of the literature.

  2. Proposed epidemiological case definition for serious skin infection in children.

    PubMed

    O'Sullivan, Cathryn E; Baker, Michael G

    2010-04-01

    Researching the rising incidence of serious skin infections in children is limited by the lack of a consistent and valid case definition. We aimed to develop and evaluate a good quality case definition, for use in future research and surveillance of these infections. We tested the validity of the existing case definition, and then of 11 proposed alternative definitions, by assessing their screening performance when applied to a population of paediatric skin infection cases identified by a chart review of 4 years of admissions to a New Zealand hospital. Previous studies have largely used definitions based on the International Classification of Diseases skin infection subchapter. This definition is highly specific (100%) but poorly sensitive (61%); it fails to capture skin infections of atypical anatomical sites, those secondary to primary skin disease and trauma, and those recorded as additional diagnoses. Including these groups produced a new case definition with 98.9% sensitivity and 98.8% specificity. Previous analyses of serious skin infection in children have underestimated the true burden of disease. Using this proposed broader case definition should allow future researchers to produce more valid and comparable estimates of the true burden of these important and increasing infections.
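Screening performance here reduces to two ratios. The counts below are hypothetical, chosen only to reproduce the 61% sensitivity / 100% specificity reported for the ICD subchapter definition:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical chart-review counts: 610 of 1,000 true skin-infection cases
# captured, no false positives among 1,000 non-cases.
sens, spec = sensitivity_specificity(tp=610, fn=390, tn=1000, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # → sensitivity 61%, specificity 100%
```

Broadening the definition trades a small loss of specificity (98.8%) for a large gain in sensitivity (98.9%), which is why the authors favour it for burden-of-disease estimates.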

  3. The 2018 Definition of Periprosthetic Hip and Knee Infection: An Evidence-Based and Validated Criteria.

    PubMed

    Parvizi, Javad; Tan, Timothy L; Goswami, Karan; Higuera, Carlos; Della Valle, Craig; Chen, Antonia F; Shohat, Noam

    2018-05-01

    The introduction of the Musculoskeletal Infection Society (MSIS) criteria for periprosthetic joint infection (PJI) in 2011 resulted in improvements in diagnostic confidence and research collaboration. The emergence of new diagnostic tests and the lessons we have learned from the past 7 years using the MSIS definition, prompted us to develop an evidence-based and validated updated version of the criteria. This multi-institutional study of patients undergoing revision total joint arthroplasty was conducted at 3 academic centers. For the development of the new diagnostic criteria, PJI and aseptic patient cohorts were stringently defined: PJI cases were defined using only major criteria from the MSIS definition (n = 684) and aseptic cases underwent one-stage revision for a noninfective indication and did not fail within 2 years (n = 820). Serum C-reactive protein (CRP), D-dimer, erythrocyte sedimentation rate were investigated, as well as synovial white blood cell count, polymorphonuclear percentage, leukocyte esterase, alpha-defensin, and synovial CRP. Intraoperative findings included frozen section, presence of purulence, and isolation of a pathogen by culture. A stepwise approach using random forest analysis and multivariate regression was used to generate relative weights for each diagnostic marker. Preoperative and intraoperative definitions were created based on beta coefficients. The new definition was then validated on an external cohort of 222 patients with PJI who subsequently failed with reinfection and 200 aseptic patients. The performance of the new criteria was compared to the established MSIS and the prior International Consensus Meeting definitions. Two positive cultures or the presence of a sinus tract were considered as major criteria and diagnostic of PJI. The calculated weights of an elevated serum CRP (>1 mg/dL), D-dimer (>860 ng/mL), and erythrocyte sedimentation rate (>30 mm/h) were 2, 2, and 1 points, respectively. 
Furthermore, elevated synovial fluid white blood cell count (>3000 cells/μL), alpha-defensin (signal-to-cutoff ratio >1), leukocyte esterase (++), polymorphonuclear percentage (>80%), and synovial CRP (>6.9 mg/L) received 3, 3, 3, 2, and 1 points, respectively. Patients with an aggregate score of greater than or equal to 6 were considered infected, while a score between 2 and 5 required the inclusion of intraoperative findings for confirming or refuting the diagnosis. Intraoperative findings of positive histology, purulence, and single positive culture were assigned 3, 3, and 2 points, respectively. Combined with the preoperative score, a total of greater than or equal to 6 was considered infected, a score between 4 and 5 was inconclusive, and a score of 3 or less was not infected. The new criteria demonstrated a higher sensitivity of 97.7% compared to the MSIS (79.3%) and International Consensus Meeting definition (86.9%), with a similar specificity of 99.5%. This study offers an evidence-based definition for diagnosing hip and knee PJI, which has shown excellent performance on formal external validation. Copyright © 2018 Elsevier Inc. All rights reserved.
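The weights in this abstract define a two-stage score that can be written out directly. The sketch below follows the published point system; the handling of preoperative scores below 2 (not spelled out in the abstract) is assumed to mean "not infected":

```python
def preoperative_score(serum_crp, d_dimer, esr, syn_wbc,
                       alpha_defensin_ratio, leukocyte_esterase_pp,
                       pmn_pct, syn_crp):
    score = 0
    score += 2 if serum_crp > 1.0 else 0           # mg/dL
    score += 2 if d_dimer > 860 else 0             # ng/mL
    score += 1 if esr > 30 else 0                  # mm/h
    score += 3 if syn_wbc > 3000 else 0            # cells/uL
    score += 3 if alpha_defensin_ratio > 1 else 0  # signal-to-cutoff ratio
    score += 3 if leukocyte_esterase_pp else 0     # "++" strip result
    score += 2 if pmn_pct > 80 else 0              # percent
    score += 1 if syn_crp > 6.9 else 0             # mg/L
    return score

def classify(preop, histology=False, purulence=False, single_culture=False,
             sinus_tract=False, two_positive_cultures=False):
    if sinus_tract or two_positive_cultures:       # major criteria
        return "infected"
    if preop >= 6:
        return "infected"
    if preop < 2:                                  # assumed: below the 2-5 band
        return "not infected"
    # Scores of 2-5 require intraoperative findings (3/3/2 points).
    combined = preop + 3 * histology + 3 * purulence + 2 * single_culture
    if combined >= 6:
        return "infected"
    return "inconclusive" if combined >= 4 else "not infected"

# Example: elevated serum CRP (2) + synovial WBC (3) + PMN% (2) = 7 points
print(classify(preoperative_score(2.0, 500, 20, 4000, 0.5, False, 85, 2.0)))  # → infected
```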

  4. Combining Advanced Turbulent Mixing and Combustion Models with Advanced Multi-Phase CFD Code to Simulate Detonation and Post-Detonation Bio-Agent Mixing and Destruction

    DTIC Science & Technology

    2017-10-01

    perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

Rearden, Bradley T; Marshall, William BJ

    In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.

  6. Genomic analysis identifies masqueraders of full-term cerebral palsy.

    PubMed

    Takezawa, Yusuke; Kikuchi, Atsuo; Haginoya, Kazuhiro; Niihori, Tetsuya; Numata-Uematsu, Yurika; Inui, Takehiko; Yamamura-Suzuki, Saeko; Miyabayashi, Takuya; Anzai, Mai; Suzuki-Muromoto, Sato; Okubo, Yukimune; Endo, Wakaba; Togashi, Noriko; Kobayashi, Yasuko; Onuma, Akira; Funayama, Ryo; Shirota, Matsuyuki; Nakayama, Keiko; Aoki, Yoko; Kure, Shigeo

    2018-05-01

Cerebral palsy is a common, heterogeneous neurodevelopmental disorder that causes movement and postural disabilities. Recent studies have suggested genetic diseases can be misdiagnosed as cerebral palsy. We hypothesized that two simple criteria, that is, full-term births and nonspecific brain MRI findings, are keys to extracting masqueraders among cerebral palsy cases due to the following: (1) preterm infants are susceptible to multiple environmental factors and therefore demonstrate an increased risk of cerebral palsy and (2) brain MRI assessment is essential for excluding environmental causes and other particular disorders. A total of 107 patients-all full-term births-without specific findings on brain MRI were identified among 897 patients diagnosed with cerebral palsy who were followed at our center. DNA samples were available for 17 of the 107 cases for trio whole-exome sequencing and array comparative genomic hybridization. We prioritized variants in genes known to be relevant in neurodevelopmental diseases and evaluated their pathogenicity according to the American College of Medical Genetics guidelines. Pathogenic/likely pathogenic candidate variants were identified in 9 of 17 cases (52.9%) within eight genes: CTNNB1, CYP2U1, SPAST, GNAO1, CACNA1A, AMPD2, STXBP1, and SCN2A. Five identified variants had previously been reported. No pathogenic copy number variations were identified. The AMPD2 missense variant and the splice-site variants in CTNNB1 and AMPD2 were validated by in vitro functional experiments. The high rate of detecting causative genetic variants (52.9%) suggests that patients diagnosed with cerebral palsy in full-term births without specific MRI findings may include genetic diseases masquerading as cerebral palsy.

  7. The Applicability of Emerging Quantum Computing Capabilities to Exo-Planet Research

    NASA Astrophysics Data System (ADS)

    Correll, Randall; Worden, S.

    2014-01-01

In conjunction with the Universities Space Research Association and Google, Inc., NASA Ames has acquired a quantum computing device built by DWAVE Systems with approximately 512 “qubits.” Quantum computers have the feature that their capabilities to find solutions to problems with large numbers of variables scale linearly with the number of variables rather than exponentially with that number. These devices may have significant applicability to detection of exoplanet signals in noisy data. We have therefore explored the application of quantum computing to analyse stellar transiting exoplanet data from NASA’s Kepler Mission. The analysis of the case studies was done using the DWAVE Systems BlackBox compiler software emulator, although one dataset was run successfully on the DWAVE Systems 512-qubit Vesuvius machine. The approach first extracts a list of candidate transits from the photometric lightcurve of a given Kepler target, and then applies a quantum annealing algorithm to find periodicity matches between subsets of the candidate transit list. We examined twelve case studies and were successful in reproducing the results of the Kepler science pipeline in finding validated exoplanets, and matched the results for a pair of candidate exoplanets. We conclude that the current implementation of the algorithm is not sufficiently challenging to require a quantum computer as opposed to a conventional computer. We are developing more robust algorithms better tailored to the quantum computer and do believe that our approach has the potential to extract exoplanet transits from Kepler data in some cases where a conventional approach would not. Additionally, we believe the new quantum capabilities may have even greater relevance for new exoplanet data sets such as that contemplated for NASA’s Transiting Exoplanet Survey Satellite (TESS) and other astrophysics data sets.

  8. Empirical validation of an agent-based model of wood markets in Switzerland

    PubMed Central

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Own surveys closed gaps where data was not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  9. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    PubMed

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  10. Oncocytic change in pleomorphic adenoma: molecular evidence in support of an origin in neoplastic cells

    PubMed Central

    Palma, Silvana Di; Lambros, Maryou B K; Savage, Kay; Jones, Chris; Mackay, Alan; Dexter, Tim; Iravani, Marjan; Fenwick, Kerry; Ashworth, Alan; Reis‐Filho, Jorge S

    2007-01-01

    Background Cells with oncocytic change (OC) are a common finding in salivary glands (SGs) and in SG tumours. When found within pleomorphic adenomas (PAs), cells with OC may be perceived as evidence of malignancy, and lead to a misdiagnosis of carcinoma ex pleomorphic adenoma (CaExPa). Aim To describe a case of PA with atypical OC, resembling a CaExPa. A genomewide molecular analysis was carried out to compare the molecular genetic features of the two components and to determine whether the oncocytic cells originated from PA cells, entrapped normal cells, or whether these cells constitute an independent tumour. Materials and methods Representative blocks were immunohistochemically analysed with antibodies raised against cytokeratin (Ck) 5/6, Ck8/18, Ck14, vimentin, p63, α‐smooth muscle actin (ASMA), S100 protein, anti‐mitochondria antibody, β‐catenin, HER2, Ki67, p53 and epidermal growth factor receptor. Typical areas of PA and OC were microdissected and subjected to microarray‐based comparative genomic hybridisation (aCGH). Chromogenic in situ hybridisation (CISH) was performed with in‐house generated probes to validate the aCGH findings. Results PA cells showed the typical immunohistochemical profile, including positivity for Ck5/6, Ck8/18, Ck14, vimentin, ASMA, S100 protein, p63, epidermal growth factor receptor and β‐catenin, whereas oncocytic cells showed a luminal phenotype, expression of anti‐mitochondria antibody and reduced β‐catenin staining. Both components showed low proliferation rates and lacked p53 reactivity. aCGH revealed a similar amplification in both components, mapping to 12q13.3–q21.1, which was further validated by CISH. No HER2 gene amplification or overexpression was observed. The foci of oncocytic metaplasia showed an additional low‐level gain of 6p25.2–p21.31. Conclusion The present data demonstrate that the bizarre atypical cells of the present case show evidence of clonality but no features of malignancy. 
In addition, owing to the presence of a similar genome amplification pattern in both components, it is proposed that at least in some cases, OC may originate from PA cells. PMID:16467165

  11. Toward a 3D dynamic model of a faulty duplex ball bearing

    NASA Astrophysics Data System (ADS)

    Kogan, Gideon; Klein, Renata; Kushnirsky, Alex; Bortman, Jacob

    2015-03-01

    Bearings are vital components for the safe and proper operation of machinery. Increasing the efficiency of bearing diagnostics usually requires training of health and usage monitoring systems via expensive and time-consuming ground calibration tests. The main goal of this research, therefore, is to improve bearing dynamics modeling tools in order to reduce the time and budget needed to implement the health and usage monitoring approach. The proposed three-dimensional ball bearing dynamic model is based on the classic dynamic and kinematic equations. Interactions between the bodies are simulated using non-linear springs combined with dampers, described by a Hertz-type contact relation. The friction force is simulated using the hyperbolic-tangent function. The model allows simulation of a wide range of mechanical faults. It is validated by comparison to known bearing behavior and to experimental results. The model results are verified by demonstrating numerical convergence. The model results for the two cases of single and duplex angular ball bearings with axial deformation in the outer ring are presented. The qualitative investigation provides insight into bearing dynamics, the sensitivity study generalizes the qualitative findings to similar cases, and the comparison to the test results validates model reliability. The article demonstrates the variety of cases that the 3D bearing model can simulate and the findings to which it may lead. The research allowed the identification of new patterns generated by single and duplex bearings with an axially deformed outer race. It also clarified the differences between the signatures of single and duplex bearings. In the current research, the dynamic model enabled a better understanding of the physical behavior of faulted bearings. It is therefore expected that the modeling approach has the potential to simplify and improve the development process of diagnostic algorithms.
• A deformed outer race of a single axially loaded bearing is simulated.
• The model results are subjected to a sensitivity study.
• A duplex bearing with a deformed outer race is simulated as well as tested.
• The simulation results are in good agreement with the experimental results.
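
The contact and friction laws named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the stiffness `k`, damping `c`, and regularization velocity `v_ref` values are placeholder assumptions:

```python
import math

def contact_force(delta, delta_dot, k=1.0e9, c=500.0):
    """Hertz-type non-linear spring (F ~ k * delta**1.5) in parallel with a
    damper; returns zero force when the bodies separate (delta <= 0)."""
    if delta <= 0.0:
        return 0.0
    return k * delta ** 1.5 + c * delta_dot

def friction_force(v_rel, mu, normal_force, v_ref=1.0e-3):
    """Smoothed Coulomb friction: tanh regularizes the sign function,
    saturating at mu * N for |v_rel| much larger than v_ref."""
    return mu * normal_force * math.tanh(v_rel / v_ref)

print(contact_force(0.0, 0.0))          # no penetration, no force
print(friction_force(1.0, 0.3, 100.0))  # ≈ 30 N (fully saturated)
```

The tanh regularization avoids the discontinuity of the sign function in dry Coulomb friction, which would otherwise destabilize numerical integration of the bearing dynamics.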

  12. Concurrent Validity of Holland's Theory for College-Degreed Black Women.

    ERIC Educational Resources Information Center

    Bingham, Rosie P.; Walsh, W. Bruce

    1978-01-01

    This study, using the Vocational Preference Inventory and the Self-Directed Search, explored the concurrent validity of Holland's theory for employed college-degreed Black women. The findings support the validity of Holland's theory for this population. (Author)

  13. Do College Student Surveys Have Any Validity?

    ERIC Educational Resources Information Center

    Porter, Stephen R.

    2011-01-01

    Using standards established for validation research, I review the theory and evidence underlying the validity argument of the National Survey of Student Engagement (NSSE). I use the NSSE because it is the preeminent survey of college students, arguing that if it lacks validity, then so do almost all other college student surveys. I find that it…

  14. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  15. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels, for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case where the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.

  16. International validation study for interim PET in ABVD-treated, advanced-stage Hodgkin lymphoma: interpretation criteria and concordance rate among reviewers.

    PubMed

    Biggi, Alberto; Gallamini, Andrea; Chauvie, Stephane; Hutchings, Martin; Kostakoglu, Lale; Gregianin, Michele; Meignan, Michel; Malkowski, Bogdan; Hofman, Michael S; Barrington, Sally F

    2013-05-01

    At present, there are no standard criteria that have been validated for interim PET reporting in lymphoma. In 2009, an international workshop attended by hematologists and nuclear medicine experts in Deauville, France, proposed to develop simple and reproducible rules for interim PET reporting in lymphoma. Accordingly, an international validation study was undertaken with the primary aim of validating the prognostic role of interim PET using the Deauville 5-point score to evaluate images and with the secondary aim of measuring concordance rates among reviewers using the same 5-point score. This paper focuses on the criteria for interpretation of interim PET and on concordance rates. A cohort of advanced-stage Hodgkin lymphoma patients treated with doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD) was enrolled retrospectively from centers worldwide. Baseline and interim scans were reviewed by an international panel of 6 nuclear medicine experts using the 5-point score. Complete scan datasets of acceptable diagnostic quality were available for 260 of 440 (59%) enrolled patients. Independent agreement among reviewers was reached on 252 of 260 patients (97%), for whom at least 4 reviewers agreed the findings were negative (score of 1-3) or positive (score of 4-5). After discussion, consensus was reached in all cases. There were 45 of 260 patients (17%) with positive interim PET findings and 215 of 260 patients (83%) with negative interim PET findings. Thirty-three interim PET-positive scans were true-positive, and 12 were false-positive. Two hundred three interim PET-negative scans were true-negative, and 12 were false-negative. Sensitivity, specificity, and accuracy were 0.73, 0.94, and 0.91, respectively. Negative predictive value and positive predictive value were 0.94 and 0.73, respectively. The 3-y failure-free survival was 83%, 28%, and 95% for the entire population and for interim PET-positive and -negative patients, respectively (P < 0.0001).
The agreement between pairs of reviewers was good or very good, ranging from 0.69 to 0.84 as measured with the Cohen kappa. Overall agreement was good at 0.76 as measured with the Krippendorff alpha. The 5-point score proposed at Deauville for reviewing interim PET scans in advanced Hodgkin lymphoma is accurate and reproducible enough to be accepted as a standard reporting criterion in clinical practice and for clinical trials.
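
The reported performance figures follow directly from the confusion-matrix counts given in the abstract (33 true positives, 12 false positives, 203 true negatives, 12 false negatives). A short sketch, with a function name of our own choosing, reproduces them:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test-performance metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts reported for the 260 evaluable interim PET scans in this study.
m = diagnostic_metrics(tp=33, fp=12, tn=203, fn=12)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.73, 'specificity': 0.94, 'accuracy': 0.91, 'ppv': 0.73, 'npv': 0.94}
```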

  17. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach considers the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Validation of the Italian Version of the Caregiver Abuse Screen among Family Caregivers of Older People with Alzheimer's Disease.

    PubMed

    Melchiorre, Maria Gabriella; Di Rosa, Mirko; Barbabella, Francesco; Barbini, Norma; Lattanzio, Fabrizia; Chiatti, Carlos

    2017-01-01

    Introduction. Elder abuse is often a hidden phenomenon and, in many cases, screening practices are difficult to implement among older people with dementia. The Caregiver Abuse Screen (CASE) is a useful tool which is administered to family caregivers for detecting their potential abusive behavior. Objectives. To validate the Italian version of the CASE tool in the context of family caregiving of older people with Alzheimer's disease (AD) and to identify risk factors for elder abuse in Italy. Methods. The CASE test was administered to 438 caregivers, recruited in the Up-Tech study. Validity and reliability were evaluated using Spearman's correlation coefficients, principal-component analysis, and Cronbach's alphas. The association between the CASE and other variables potentially associated with elder abuse was also analyzed. Results. The factor analysis suggested the presence of a single factor, with a strong internal consistency (Cronbach's alpha = 0.86). CASE score was strongly correlated with well-known risk factors of abuse. At multivariate level, main factors associated with CASE total score were caregiver burden and AD-related behavioral disturbances. Conclusions. The Italian version of the CASE is a reliable and consistent screening tool for tackling the risk of being or becoming perpetrators of abuse by family caregivers of people with AD.
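
The internal-consistency figure reported here (Cronbach's alpha = 0.86) comes from the standard item-variance formula, alpha = k/(k−1) · (1 − Σ var(item) / var(total)). The sketch below applies it to made-up caregiver responses, so the data and the resulting value are illustrative only:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per questionnaire item."""
    k = len(item_scores[0])
    item_vars = [variance(col) for col in zip(*item_scores)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from 6 caregivers to 4 items (not the study's data).
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [4, 4, 4, 3],
    [2, 3, 2, 2],
]
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

The high value here simply reflects that the invented items track one another closely; real questionnaire data would typically yield lower alphas, such as the 0.86 reported above.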

  19. Validation of the Italian Version of the Caregiver Abuse Screen among Family Caregivers of Older People with Alzheimer's Disease

    PubMed Central

    Di Rosa, Mirko; Barbabella, Francesco; Barbini, Norma; Chiatti, Carlos

    2017-01-01

    Introduction. Elder abuse is often a hidden phenomenon and, in many cases, screening practices are difficult to implement among older people with dementia. The Caregiver Abuse Screen (CASE) is a useful tool which is administered to family caregivers for detecting their potential abusive behavior. Objectives. To validate the Italian version of the CASE tool in the context of family caregiving of older people with Alzheimer's disease (AD) and to identify risk factors for elder abuse in Italy. Methods. The CASE test was administered to 438 caregivers, recruited in the Up-Tech study. Validity and reliability were evaluated using Spearman's correlation coefficients, principal-component analysis, and Cronbach's alphas. The association between the CASE and other variables potentially associated with elder abuse was also analyzed. Results. The factor analysis suggested the presence of a single factor, with a strong internal consistency (Cronbach's alpha = 0.86). CASE score was strongly correlated with well-known risk factors of abuse. At multivariate level, main factors associated with CASE total score were caregiver burden and AD-related behavioral disturbances. Conclusions. The Italian version of the CASE is a reliable and consistent screening tool for tackling the risk of being or becoming perpetrators of abuse by family caregivers of people with AD. PMID:28265571

  20. Comprehensive genomic analysis identifies pathogenic variants in maturity-onset diabetes of the young (MODY) patients in South India.

    PubMed

    Mohan, Viswanathan; Radha, Venkatesan; Nguyen, Thong T; Stawiski, Eric W; Pahuja, Kanika Bajaj; Goldstein, Leonard D; Tom, Jennifer; Anjana, Ranjit Mohan; Kong-Beltran, Monica; Bhangale, Tushar; Jahnavi, Suresh; Chandni, Radhakrishnan; Gayathri, Vijay; George, Paul; Zhang, Na; Murugan, Sakthivel; Phalke, Sameer; Chaudhuri, Subhra; Gupta, Ravi; Zhang, Jingli; Santhosh, Sam; Stinson, Jeremy; Modrusan, Zora; Ramprasad, V L; Seshagiri, Somasekar; Peterson, Andrew S

    2018-02-13

    Maturity-onset diabetes of the young (MODY) is an early-onset, autosomal dominant form of non-insulin-dependent diabetes. Genetic diagnosis of MODY can transform patient management. Earlier data on the genetic predisposition to MODY have come primarily from familial studies in populations of European origin. In this study, we carried out a comprehensive genomic analysis of 289 individuals from India that included 152 clinically diagnosed MODY cases to identify variants in known MODY genes. Further, we analyzed exome data to identify putative MODY-relevant variants in genes not previously implicated in MODY. Functional validation of MODY-relevant variants was also performed. We found MODY 3 (HNF1A; 7.2%) to be the most frequently mutated, followed by MODY 12 (ABCC8; 3.3%); together they account for ~11% of the cases. In addition to known MODY genes, we report the identification of variants in RFX6, WFS1, AKT2, and NKX6-1 that may contribute to the development of MODY. Functional assessment of the NKX6-1 variants showed that they are functionally impaired. Our findings showed HNF1A and ABCC8 to be the most frequently mutated MODY genes in south India. Further, we provide evidence for additional MODY-relevant genes, such as NKX6-1, and these require further validation.

  1. Prevalence of rheumatoid arthritis in Dublin, Ireland: a population based survey.

    PubMed

    Power, D; Codd, M; Ivers, L; Sant, S; Barry, M

    1999-01-01

    The prevalence of rheumatoid arthritis (RA) in Ireland has never been established. Studies from different countries show varying rates, those in the highlands of Scotland (10/1,000) being markedly higher than in rural Lesotho (6/1,000). A recent study also suggests a fall in the prevalence of RA among women in the London urban area. Given these variations, the validity of extrapolating prevalence rates established for other countries to Ireland is questionable. This study aimed to establish a prevalence rate for RA in a defined Dublin population. A self-administered questionnaire was sent to 2,500 people chosen at random from the electoral register. The questionnaire was designed to identify both undiagnosed patients and those with definite arthritis. Respondents whose replies indicated an arthritic process, but in whom no diagnosis had been made, were asked to attend for further assessment and investigations as appropriate. Those who responded that they had been diagnosed with arthritis were asked for consent to inspect their hospital or general practitioner records. A diagnosis of RA was based on American Rheumatism Association (ARA) criteria. Valid responses were received from 1,227 of the people surveyed (response rate = 49 per cent). Six cases of RA were identified, including 2 previously undiagnosed cases. A prevalence rate of 5/1,000 has been estimated based on these findings.
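
The reported rate follows from the counts in the abstract (6 cases among 1,227 valid responses). The sketch below also adds a normal-approximation confidence interval as our own illustration; the paper reports only the point estimate:

```python
from math import sqrt

def prevalence_per_1000(cases, n, z=1.96):
    """Point prevalence per 1,000 with an approximate 95% CI.

    The normal-approximation CI is illustrative only; an exact binomial
    interval would be preferable for counts this small."""
    p = cases / n
    se = sqrt(p * (1 - p) / n)
    return 1000 * p, 1000 * max(p - z * se, 0.0), 1000 * (p + z * se)

rate, lo, hi = prevalence_per_1000(6, 1227)
print(round(rate, 1))  # → 4.9, i.e. roughly 5 per 1,000 as reported
```

The wide interval such a calculation produces is the usual caveat for prevalence estimates built on only six identified cases.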

  2. Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population.

    PubMed

    Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian

    2005-03-17

    The Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate the SAS against the DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and the SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's alpha, convergence with actometry, and the capacity for NIP case detection were assessed. Cronbach's alpha for the scale was 0.79. The SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with the SAS. ROC analysis yielded good case-detection power for the SAS mean score. The optimal threshold value of the SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. We conclude that the SAS seems a reliable and valid instrument. The previously common cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity.
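
A common way to pick an ROC-optimal cut-off of the kind described here is Youden's J (sensitivity + specificity − 1). The sketch below applies it to invented SAS-style scores deliberately chosen to mirror the 0.65 finding, so both the data and the result are illustrative, not the study's:

```python
def youden_threshold(scores, labels):
    """Return the cut-off maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 for diagnosed NIP, 0 for non-NIP; a score >= threshold is
    read as a positive screen."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical SAS mean scores: first six patients non-NIP, last five NIP.
scores = [0.1, 0.3, 0.4, 0.6, 0.2, 0.5, 0.65, 0.7, 0.9, 1.1, 1.3]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(youden_threshold(scores, labels))  # → (0.65, 1.0)
```

With real, overlapping score distributions J stays below 1 and the chosen cut-off trades sensitivity against specificity, which is exactly the trade-off behind raising the cut-off from 0.3 to 0.65.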

  3. Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population

    PubMed Central

    Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian

    2005-01-01

    Background The Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate the SAS against the DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Methods Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and the SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's α, convergence with actometry, and the capacity for NIP case detection were assessed. Results Cronbach's α for the scale was 0.79. The SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with the SAS. ROC analysis yielded good case-detection power for the SAS mean score. The optimal threshold value of the SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. Conclusion We conclude that the SAS seems a reliable and valid instrument. The previously common cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity. PMID:15774006

  4. Childhood risk factors in Korean women with anorexia nervosa: two sets of case-control studies with retrospective comparisons.

    PubMed

    Kim, Youl-Ri; Heo, Si Young; Kang, Heechan; Song, Ki Jun; Treasure, Janet

    2010-11-01

    The aim of this study was to investigate the characteristics of the risk factors for anorexia nervosa (AN) in Korean women. Two sets of case-control comparisons were conducted, in which 52 women with lifetime AN from Seoul, S. Korea, were compared with 108 Korean healthy controls and also with 42 women with lifetime AN from the UK in terms of their childhood risk factors. A questionnaire designed to conduct a retrospective assessment of the childhood risk factors was administered to all participants. The Korean AN women were more likely to report premorbid anxiety, perfectionism, and emotional undereating and were less likely to report having supportive figures in their childhood than the Korean healthy controls. There were no overall differences in the childhood risk factors between the Korean and British women with AN. Premorbid anxiety, perfectionism, less social support, and emotional undereating merit attention as risk factors in Korean AN. The current results are informative, but an epidemiologically robust prospective case-control study would be needed to validate these findings. © 2009 by Wiley Periodicals, Inc.

  5. A temporal interestingness measure for drug interaction signal detection in post-marketing surveillance.

    PubMed

    Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Mansour, Ayman; Massanari, R Michael

    2014-01-01

    Drug-drug interactions (DDIs) can result in serious consequences, including death. Existing methods for identifying potential DDIs in post-marketing surveillance rely primarily on the Food and Drug Administration's (FDA) spontaneous reporting system. However, this system suffers from severe underreporting, which makes it difficult to collect enough valid cases for statistical analysis in a timely fashion. In this paper, we study how to signal potential DDIs using patient electronic health data. Specifically, we focus on the discovery of potential DDIs by analyzing the temporal relationships between the concurrent use of two drugs of interest and the occurrences of various symptoms, using novel temporal association mining techniques we developed. A new interestingness measure called functional temporal interest was proposed to assess the degree of temporal association between two drugs of interest and each symptom. The measure was employed to screen potential DDIs from 21,405 electronic patient cases retrieved from the Veterans Affairs Medical Center in Detroit, Michigan. The preliminary results indicate the usefulness of our method in finding potential DDIs for further analysis (e.g., epidemiology study) and investigation (e.g., case review) by drug safety professionals.

  6. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. 
© 2013 BJU International.

  7. Validity of administrative data claim-based methods for identifying individuals with diabetes at a population level.

    PubMed

    Southern, Danielle A; Roberts, Barbara; Edwards, Alun; Dean, Stafford; Norton, Peter; Svenson, Lawrence W; Larsen, Erik; Sargious, Peter; Lau, David C W; Ghali, William A

    2010-01-01

    This study assessed the validity of a widely-accepted administrative data surveillance methodology for identifying individuals with diabetes relative to three laboratory data reference standard definitions for diabetes. We used a combination of linked regional data (hospital discharge abstracts and physician data) and laboratory data to test the validity of administrative data surveillance definitions for diabetes relative to a laboratory data reference standard. The administrative discharge data methodology includes two definitions for diabetes: a strict administrative data definition of one hospitalization code or two physician claims indicating diabetes; and a more liberal definition of one hospitalization code or a single physician claim. The laboratory data, meanwhile, produced three reference standard definitions based on glucose levels +/- HbA1c levels. Sensitivities ranged from 68.4% to 86.9% for the administrative data definitions tested relative to the three laboratory data reference standards. Sensitivities were higher for the more liberal administrative data definition. Positive predictive values (PPV), meanwhile, ranged from 53.0% to 88.3%, with the liberal administrative data definition producing lower PPVs. These findings demonstrate the trade-offs of sensitivity and PPV for selecting diabetes surveillance definitions. Centralized laboratory data may be of value to future surveillance initiatives that use combined data sources to optimize case detection.

  8. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur

    PubMed Central

    Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E. J.; Shefelbine, S. J.; Hutchinson, J. R.

    2012-01-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810

  9. Development and validation of a new kind of coupling element for wheel-hub motors

    NASA Astrophysics Data System (ADS)

    Perekopskiy, Sergey; Kasper, Roland

    2018-05-01

    For the automotive industry, electric vehicles are an increasingly important factor in the effort against climate change. Wheel-hub motors are one application that can support this effort. The patented slotless air-gap winding invented at the Chair of Mechatronics of the Otto von Guericke University Magdeburg has great potential in the constantly growing e-mobility field, especially for wheel-hub motors based on this technology, owing to advantages such as high gravimetric power density and high efficiency. However, these advantages are diminished by the technology's sensitivity to loads arising from driving maneuvers, which cause dimensional variations in the air gap. This article describes the development and validation of a coupling element for the designed wheel-hub motor. To find a suitable coupling concept, the assembly structure of the motor was first analyzed and the existing design of the coupling element was reviewed. Based on the geometry of the motor and wheel, a detailed design of the coupling element was generated. An analytical approach characterizes the possible loads on the coupling element, and FEM simulation of critical load cases validated the results of the analytical approach.

  10. A National Study of the Validity and Utility of the Comprehensive Assessment of School Environment (CASE) Survey

    ERIC Educational Resources Information Center

    McGuffey, Amy R.

    2016-01-01

    A healthy school climate is necessary for school improvement. The purpose of this study was to evaluate the construct validity and usability of the Comprehensive Assessment of School Environment (CASE) as it was purportedly realigned to the three dimensions of the Breaking Ranks Framework developed by the National Association of Secondary School…

  11. A Case for Transforming the Criterion of a Predictive Validity Study

    ERIC Educational Resources Information Center

    Patterson, Brian F.; Kobrin, Jennifer L.

    2011-01-01

    This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…

  12. Repetitive deliberate fires: Development and validation of a methodology to detect series.

    PubMed

    Bruenisholz, Eva; Delémont, Olivier; Ribaux, Olivier; Wilson-Wilde, Linzi

    2017-08-01

    The detection of repetitive deliberate fire events is challenging and still often ineffective due to a case-by-case approach. A previous study provided a critical review of the situation and an analysis of the main challenges, and suggested that the intelligence process, integrating forensic data, could be a valid framework for systematic follow-up and analysis, provided it is adapted to the specificities of repetitive deliberate fires. In this manuscript, a specific methodology to detect deliberate fire series, i.e. fires set by the same perpetrators, is presented and validated. It is based on case profiles relying on specific elements previously identified. The method was validated using a dataset of approximately 8000 deliberate fire events collected over 12 years in a Swiss state. Twenty possible series were detected, including 6 of 9 known series. These results are very promising and lead the way to a systematic implementation of this methodology in an intelligence framework, while demonstrating the need for, and benefit of, collecting more forensic-specific information to strengthen the value of links between cases. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  13. Validation of a Syndromic Case Definition for Detecting Emergency Department Visits Potentially Related to Marijuana.

    PubMed

    DeYoung, Kathryn; Chen, Yushiuan; Beum, Robert; Askenazi, Michele; Zimmerman, Cali; Davidson, Arthur J

    Reliable methods are needed to monitor the public health impact of changing laws and perceptions about marijuana. Structured and free-text emergency department (ED) visit data offer an opportunity to monitor the impact of these changes in near-real time. Our objectives were to (1) generate and validate a syndromic case definition for ED visits potentially related to marijuana and (2) describe a method for doing so that was less resource intensive than traditional methods. We developed a syndromic case definition for ED visits potentially related to marijuana, applied it to BioSense 2.0 data from 15 hospitals in the Denver, Colorado, metropolitan area for the period September through October 2015, and manually reviewed each case to determine true positives and false positives. We used the number of visits identified by and the positive predictive value (PPV) for each search term and field to refine the definition for the second round of validation on data from February through March 2016. Of 126 646 ED visits during the first period, terms in 524 ED visit records matched ≥1 search term in the initial case definition (PPV, 92.7%). Of 140 932 ED visits during the second period, terms in 698 ED visit records matched ≥1 search term in the revised case definition (PPV, 95.7%). After another revision, the final case definition contained 6 keywords for marijuana or derivatives and 5 diagnosis codes for cannabis use, abuse, dependence, poisoning, and lung disease. Our syndromic case definition and validation method for ED visits potentially related to marijuana could be used by other public health jurisdictions to monitor local trends and for other emerging concerns.
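    The matching step of a syndromic case definition of this kind can be sketched as follows. This is a hypothetical illustration, not the authors' actual definition: the keywords, diagnosis codes, and visit records below are invented, and a visit is flagged if its free text contains any keyword or its coded diagnoses contain any target code.

```python
# Minimal sketch of applying a keyword-plus-code syndromic case definition to
# ED visit records. Terms, codes, and records are hypothetical examples.
import re

KEYWORDS = ["marijuana", "cannabis", "thc", "edible"]  # hypothetical terms
DX_CODES = ["F12.10", "F12.20", "T40.7"]               # hypothetical codes

def matches_definition(chief_complaint: str, dx_codes: list[str]) -> bool:
    """Flag a visit if free text matches any keyword or any target code is present."""
    text = chief_complaint.lower()
    if any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in KEYWORDS):
        return True
    return any(code in DX_CODES for code in dx_codes)

visits = [
    {"cc": "ate a cannabis edible, now anxious", "dx": []},
    {"cc": "ankle sprain playing soccer", "dx": []},
    {"cc": "vomiting", "dx": ["F12.20"]},
]
flagged = [v for v in visits if matches_definition(v["cc"], v["dx"])]
print(len(flagged))  # → 2
```

    In a validation round, each flagged visit would then be manually reviewed as a true or false positive, and the per-term PPVs used to prune or revise the term list.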

  14. Validation of a Case Definition for Pediatric Brain Injury Using Administrative Data.

    PubMed

    McChesney-Corbeil, Jane; Barlow, Karen; Quan, Hude; Chen, Guanmin; Wiebe, Samuel; Jette, Nathalie

    2017-03-01

    Health administrative data are a common population-based data source for traumatic brain injury (TBI) surveillance and research; however, before using these data for surveillance, it is important to develop a validated case definition. The objective of this study was to identify the optimal International Classification of Diseases, 10th revision (ICD-10), case definition to ascertain children with TBI in emergency room (ER) or hospital administrative data. We tested multiple case definitions. Children who visited the ER were identified from the Regional Emergency Department Information System at Alberta Children's Hospital. Secondary data were collected for children with trauma, musculoskeletal, or central nervous system complaints who visited the ER between October 5, 2005, and June 6, 2007. TBI status was determined based on chart review. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for each case definition. Of 6639 patients, 1343 had a TBI. The best case definition was "1 hospital or 1 ER encounter coded with an ICD-10 code for TBI in 1 year" (sensitivity 69.8% [95% confidence interval (CI), 67.3-72.2], specificity 96.7% [95% CI, 96.2-97.2], PPV 84.2% [95% CI, 82.0-86.3], NPV 92.7% [95% CI, 92.0-93.3]). The nonspecific code S09.9 identified >80% of TBI cases in our study. The optimal ICD-10-based case definition for pediatric TBI in this study is valid and should be considered for future pediatric TBI surveillance studies. However, external validation is recommended before use in other jurisdictions, particularly because it is plausible that a larger proportion of patients in our cohort had milder injuries.
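    The four validity metrics reported for each candidate case definition follow directly from a 2x2 table of definition result versus chart review. A hedged sketch with hypothetical counts, using a simple normal-approximation (Wald) 95% CI; the study's exact interval method is not stated here, so this is illustrative only:

```python
# Sensitivity, specificity, PPV, and NPV with Wald 95% CIs from a 2x2 table.
# The counts passed in below are hypothetical, not the study's data.
import math

def metric_with_ci(numerator: int, denominator: int):
    """Proportion with a normal-approximation (Wald) 95% CI, clipped to [0, 1]."""
    p = numerator / denominator
    half = 1.96 * math.sqrt(p * (1 - p) / denominator)
    return p, max(0.0, p - half), min(1.0, p + half)

def validity_metrics(tp: int, fp: int, fn: int, tn: int):
    return {
        "sensitivity": metric_with_ci(tp, tp + fn),
        "specificity": metric_with_ci(tn, tn + fp),
        "ppv":         metric_with_ci(tp, tp + fp),
        "npv":         metric_with_ci(tn, tn + fn),
    }

# Hypothetical 2x2 table: case definition result vs. chart-review TBI status.
for name, (p, lo, hi) in validity_metrics(tp=900, fp=180, fn=400, tn=5100).items():
    print(f"{name}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```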

  15. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model (SciGRID International Conference on Power Grid Modelling)

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

    Energy system models serve as a basis for long-term system planning. Joint optimization of electricity-generating technologies, storage systems and the electricity grid leads to lower total system cost than an approach in which grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good trade-off between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load-flow model to evaluate whether using a simple but faster transport model within the system optimization model is valid in terms of system reliability. The main findings are that a higher regional resolution leads to better results than an approach in which regions are clustered, because more overloads can be detected. Aggregating the lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load-flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need grid reinforcement are identified within the system optimizer. Finally, the paper recommends using a load-flow model to test the validity of the model results.

  16. Addressing the Lack of Measurement Invariance for the Measure of Acceptance of the Theory of Evolution

    NASA Astrophysics Data System (ADS)

    Wagler, Amy; Wagler, Ron

    2013-09-01

    The Measure of Acceptance of the Theory of Evolution (MATE) was constructed to be a single-factor instrument that assesses an individual's overall acceptance of evolutionary theory. The MATE was validated and the scores resulting from the MATE were found to be reliable for the population of inservice high school biology teachers. However, many studies have utilized the MATE for different populations, such as university students enrolled in a biology or genetics course, high school students, and preservice teachers. This is problematic because the dimensionality and reliability of the MATE may not be consistent across populations. It is not uncommon in science education research to find examples where scales are applied to novel populations without proper assessment of the validity and reliability. In order to illustrate this issue, a case study is presented where the dimensionality of the MATE is evaluated for a population of non-science major preservice elementary teachers. With this objective in mind, factor analytic and item response models are fit to the observed data to provide evidence for or against a one-dimensional latent structure and to detect which items do not conform to the theoretical construct for this population. The results of this study call into question any findings and conclusions made using the MATE for a Hispanic population of preservice teachers and point out the error of assuming invariance across substantively different populations.

  17. Validation of automatic wheeze detection in patients with obstructed airways and in healthy subjects.

    PubMed

    Guntupalli, Kalpalatha K; Alapat, Philip M; Bandi, Venkata D; Kushnir, Igal

    2008-12-01

    Computerized lung-sound analysis is a sensitive and quantitative method to identify wheezing by its typical pattern on spectral analysis. We evaluated the accuracy of the VRI, a multi-sensor, computer-based device with an automated technique of wheeze detection. The method was validated in 100 sound files from seven subjects with asthma or chronic obstructive pulmonary disease and seven healthy subjects by comparison of auscultation findings, examination of audio files, and computer detection of wheezes. Three blinded physicians identified 40 sound files with wheezes and 60 sound files without wheezes. Sensitivity and specificity were 83% and 85%, respectively. Negative predictive value and positive predictive value were 89% and 79%, respectively. Overall inter-rater agreement was 84%. False positive cases were found to contain sounds that simulate wheezes, such as background noises with high frequencies or strong noises from the throat that could be heard and identified without a stethoscope. The present findings demonstrate that the wheeze detection algorithm has good accuracy, sensitivity, specificity, negative predictive value and positive predictive value for wheeze detection in regional analyses with a single sensor and multiple sensors. Results are similar to those reported in the literature. The device is user-friendly, requires minimal patient effort, and, distinct from other devices, it provides a dynamic image of breath sound distribution with wheeze detection output in less than 1 minute.

  18. [Validation of SHI Claims Data Exemplified by Gender-specific Diagnoses].

    PubMed

    Hartmann, J; Weidmann, C; Biehle, R

    2016-10-01

    Aim: The use of statutory health insurance (SHI) data in health services research is increasing steadily, and questions of validity are gaining importance. Using gender-specific diagnoses as an example, the aim of this study was to estimate the prevalence of implausible diagnoses and demonstrate an internal validation strategy. Method: The analysis is based on SHI data from Baden-Württemberg for 2012. The subject of validation is gender-specific outpatient diagnoses that mismatch the recorded gender of the insured. To resolve this implausibility, it must be clarified whether the diagnosis or the gender is wrong. The validation criteria used were the presence of further gender-specific diagnoses, the presence of gender-specific settlement items, the specialization of the physician in charge, and the gender assignment of the insured person's first name. To review the quality of the validation, it was verified whether the recorded gender was changed during the following year. Results: Around 5.1% of all diagnoses were gender-specific, and there was a mismatch between diagnosis and gender in 0.04% of these cases. All validation criteria were useful for resolving implausibility, with the last being the most effective; only 14% of cases remained unresolved. Of the 1 145 insured with implausible gender-specific diagnoses, 128 had a new gender in the data one year later; 119 of these had rightly been classified as insured with a wrong gender, and 9 were in the unresolved group. This confirms that the validation works well. Conclusion: Implausibility in SHI data is relatively rare and can be resolved with appropriate validation criteria. When validating SHI data, it is advisable to question all data used critically, to use multiple validation criteria rather than just one, and to abandon the idea that reality and the associated data conform to standardized norms. With these aspects in mind, analysis of SHI data is a good starting point for health services research. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Identification of depression in women during pregnancy and the early postnatal period using the Whooley questions and the Edinburgh Postnatal Depression Scale: protocol for the Born and Bred in Yorkshire: PeriNatal Depression Diagnostic Accuracy (BaBY PaNDA) study.

    PubMed

    Littlewood, Elizabeth; Ali, Shehzad; Ansell, Pat; Dyson, Lisa; Gascoyne, Samantha; Hewitt, Catherine; Keding, Ada; Mann, Rachel; McMillan, Dean; Morgan, Deborah; Swan, Kelly; Waterhouse, Bev; Gilbody, Simon

    2016-06-13

    Perinatal depression is well recognised as a mental health condition but <50% of cases are identified by healthcare professionals in routine clinical practice. The Edinburgh Postnatal Depression Scale (EPDS) is often used to detect symptoms of postnatal depression in maternity and child services. The National Institute for Health and Care Excellence (NICE) recommends 2 'ultra-brief' case-finding questions (the Whooley questions) to aid identification of depression during the perinatal period, but this recommendation was made in the absence of any validation studies in a perinatal population. Limited research exists on the acceptability of these depression case-finding instruments and the cost-effectiveness of routine screening for perinatal depression. The diagnostic accuracy of the Whooley questions and the EPDS will be determined against a reference standard (the Clinical Interview Schedule-Revised) during pregnancy (around 20 weeks) and the early postnatal period (around 3-4 months post partum) in a sample of 379 women. Further outcome measures will assess a range of psychological comorbidities, health-related quality of life and resource utilisation. Women will be followed up 12 months postnatally. The sensitivity, specificity and predictive values of the Whooley questions and the EPDS will be calculated against the reference standard at 20 weeks of pregnancy and 3-4 months post partum. Acceptability of the depression case-finding instruments to women and healthcare professionals will be explored through in-depth qualitative interviews. An existing decision analytic model will be adapted to determine the cost-effectiveness of routine screening for perinatal depression. This study is considered low risk for participants. Robust protocols will deal with cases where risk of depression, self-harm or suicide is identified. The protocol received a favourable ethical opinion from the North East-York Research Ethics Committee (reference: 11/NE/0022).
The study findings will be published in peer-reviewed journals and presented at relevant conferences. Published by the BMJ Publishing Group Limited.

  20. Myalgic encephalomyelitis, chronic fatigue syndrome: An infectious disease.

    PubMed

    Underhill, R A

    2015-12-01

    The etiology of myalgic encephalomyelitis also known as chronic fatigue syndrome or ME/CFS has not been established. Controversies exist over whether it is an organic disease or a psychological disorder and even the existence of ME/CFS as a disease entity is sometimes denied. Suggested causal hypotheses have included psychosomatic disorders, infectious agents, immune dysfunctions, autoimmunity, metabolic disturbances, toxins and inherited genetic factors. Clinical, immunological and epidemiological evidence supports the hypothesis that: ME/CFS is an infectious disease; the causal pathogen persists in patients; the pathogen can be transmitted by casual contact; host factors determine susceptibility to the illness; and there is a population of healthy carriers, who may be able to shed the pathogen. ME/CFS is endemic globally as sporadic cases and occasional cluster outbreaks (epidemics). Cluster outbreaks imply an infectious agent. An abrupt flu-like onset resembling an infectious illness occurs in outbreak patients and many sporadic patients. Immune responses in sporadic patients resemble immune responses in other infectious diseases. Contagion is shown by finding secondary cases in outbreaks, and suggested by a higher prevalence of ME/CFS in sporadic patients' genetically unrelated close contacts (spouses/partners) than the community. Abortive cases, sub-clinical cases, and carrier state individuals were found in outbreaks. The chronic phase of ME/CFS does not appear to be particularly infective. Some healthy patient-contacts show immune responses similar to patients' immune responses, suggesting exposure to the same antigen (a pathogen). The chronicity of symptoms and of immune system changes and the occurrence of secondary cases suggest persistence of a causal pathogen. 
Risk factors which predispose to developing ME/CFS are: a close family member with ME/CFS; inherited genetic factors; female gender; age; rest/activity; previous exposure to stress or toxins; various infectious diseases preceding the onset of ME/CFS; and occupational exposure of health care professionals. The hypothesis implies that ME/CFS patients should not donate blood or tissue and usual precautions should be taken when handling patients' blood and tissue. No known pathogen has been shown to cause ME/CFS. Confirmation of the hypothesis requires identification of a causal pathogen. Research should focus on a search for unknown and known pathogens. Finding a causal pathogen could assist with diagnosis; help find a biomarker; enable the development of anti-microbial treatments; suggest preventive measures; explain pathophysiological findings; and reassure patients about the validity of their symptoms.

  1. A need for an augmented review when reviewing rehabilitation research.

    PubMed

    Gerber, Lynn H; Nava, Andrew; Garfinkel, Steven; Goel, Divya; Weinstein, Ali A; Cai, Cindy

    2016-10-01

    There is a need for additional strategies for performing systematic reviews (SRs) to improve the translation of findings into practice and to influence health policy. SRs critically appraise research methodology and determine the level of evidence of research findings. The standard type of SR identifies randomized controlled trials (RCTs) as providing the most valid data and the highest level of evidence. However, RCTs are not among the most frequently used research designs in disability and health research, and they usually measure impairments as the primary research outcome rather than improved function, participation or societal integration. This forces a choice between "validity" and "utility/relevance." Other approaches have effectively been used to assess the validity of alternative research designs whose outcomes focus on function and patient-reported outcomes. We propose that utilizing existing evaluation tools that measure knowledge, dissemination and utility of findings may help improve the translation of findings into practice and health policy. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Translation and validation of the Greek version of the hypertension knowledge-level scale.

    PubMed

    Chatziefstratiou, Anastasia A; Giakoumidakis, Konstantinos; Fotos, Nikolaos V; Baltopoulos, George; Brokalaki-Pananoudaki, Hero

    2015-12-01

    To translate and validate a Greek version of the Hypertension Knowledge-Level Scale. The major barrier to the management of hypertension is lack of adherence to medications and lifestyle adjustments. Patients' knowledge of the nature of hypertension and of cardiovascular risk factors is a significant factor affecting adherence. However, few instruments have been developed to assess patients' knowledge level, and none had been translated into Greek. This study used a case-control design. Data collection occurred between February 7, 2013 and March 10, 2013. The sample included both hypertensives and non-hypertensives, who completed the Greek version of the Hypertension Knowledge-Level Scale. A total of 68 individuals completed the questionnaire. Coefficient alpha was 0.66 for hypertensives and 0.79 for non-hypertensives. The difference in mean scores on the entire scale between the two samples was statistically significant. Significant differences were also observed in many sub-dimensions, and no correlation was found between knowledge level and age, gender or education level. The findings support the validity of the Greek version of the Hypertension Knowledge-Level Scale. The translation and validation of an instrument evaluating knowledge of hypertension contribute to assessing the educational interventions provided. A low knowledge level should lead to the development of new methods of education; nurses will thus have the opportunity to amplify their role in patient education and develop relationships based on honesty and respect. © 2015 John Wiley & Sons Ltd.
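    The coefficient alpha values reported above are Cronbach's alpha, an internal-consistency statistic. A small self-contained sketch of the computation; the item responses below are invented, not the study's data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The responses below are invented true/false knowledge items.

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one inner list per questionnaire item, aligned across respondents."""
    k = len(items)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items, five respondents each:
responses = [
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
]
print(round(cronbach_alpha(responses), 2))  # → 0.79
```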

  3. Laser scanning cytometry as a tool for biomarker validation

    NASA Astrophysics Data System (ADS)

    Mittag, Anja; Füldner, Christiane; Lehmann, Jörg; Tarnok, Attila

    2013-03-01

    Biomarkers are essential for diagnosis, prognosis, and therapy. As diverse as the range of diseases is the range of biomarkers and of the material used for analysis. Whereas body fluids can be obtained and analyzed relatively easily, the investigation of tissue is in most cases more complicated. The same applies to the screening and evaluation of new biomarkers, and to estimating whether the binding of biomarkers found in animal models can be transferred to applications in humans; the latter in particular is difficult if the biomarker recognizes proteins or cells in tissue. A better way to find suitable cellular biomarkers for immunoscintigraphy or PET analyses may therefore be the in situ analysis of the cells in the respective tissue. In this study we present a method for biomarker validation using laser scanning cytometry which allows the emulation of a future in vivo analysis. The biomarker validation is shown exemplarily for rheumatoid arthritis (RA) on synovial membrane. Cryosections were scanned and analyzed by phantom contouring. Adequate statistical methods allowed the identification of suitable markers and marker combinations. The fluorescence analysis of the phantoms allowed discrimination between synovial membrane sections of RA patients and non-RA control sections by using median fluorescence intensity and the "affected area". Because intensity and area are relevant parameters of in vivo imaging (e.g. PET scans) too, the presented method allows emulation of a probable outcome of in vivo imaging, i.e. the binding of the target protein, and hence validation of the potential of the respective biomarker.

  4. Digital pathology for the primary diagnosis of breast histopathological specimens: an innovative validation and concordance study on digital pathology validation and training.

    PubMed

    Williams, Bethany Jill; Hanby, Andrew; Millican-Slater, Rebecca; Nijhawan, Anju; Verghese, Eldo; Treanor, Darren

    2018-03-01

    To train and individually validate a group of breast pathologists in specialty-specific digital primary diagnosis by using a novel protocol endorsed by the Royal College of Pathologists' new guideline for digital pathology. The protocol allows early exposure to live digital reporting, in a risk-mitigated environment, and focuses on patient safety and professional development. Three specialty breast pathologists completed training in the use of a digital microscopy system, and were exposed to a training set of 20 challenging cases, designed to help them identify personal digital diagnostic pitfalls. Following this, the three pathologists viewed a total of 694 live, entire breast cases. All primary diagnoses were made on digital slides, with immediate glass slide review and reconciliation before final case sign-out. There was complete clinical concordance between the glass and digital impression of the case in 98.8% of cases. Only 1.2% of cases had a clinically significant difference in diagnosis/prognosis on glass and digital slide reads. All pathologists elected to continue using the digital microscope as the standard for breast histopathology specimens, with deferral to glass for a limited number of clinical/histological scenarios as a safety net. Individual training and validation for digital primary diagnosis allows pathologists to develop competence and confidence in their digital diagnostic skills, and aids safe and responsible transition from the light microscope to the digital microscope. © 2017 John Wiley & Sons Ltd.

  5. Nursing diagnosis of grieving: content validity in perinatal loss situations.

    PubMed

    Paloma-Castro, Olga; Romero-Sánchez, José Manuel; Paramio-Cuevas, Juan Carlos; Pastor-Montero, Sonia María; Castro-Yuste, Cristina; Frandsen, Anna J; Albar-Marín, María Jesús; Bas-Sarmiento, Pilar; Moreno-Corral, Luis Javier

    2014-06-01

    To validate the content of the NANDA-I nursing diagnosis of grieving in situations of perinatal loss. Using Fehring's model, 208 Spanish experts were asked to assess the adequacy of the defining characteristics and other manifestations identified in the literature for cases of perinatal loss. The content validity index was 0.867. Twelve of the 18 defining characteristics were validated, seven as major and five as minor. From the manifestations proposed, "empty inside" was considered major. According to the experts, the nursing diagnosis of grieving fits in content to cases of perinatal loss. The results provide evidence to support the use of the diagnosis in care plans for this clinical situation. © 2013 NANDA International.

  6. Reliability and validity: Part II.

    PubMed

    Davis, Debora Winders

    2004-01-01

    Determining measurement reliability and validity involves complex processes. There is usually room for argument about most instruments. It is important that the researcher clearly describes the processes upon which she based the decision to use a particular instrument, and presents the available evidence showing that the instrument is reliable and valid for the current purposes. In some cases, the researcher may need to conduct pilot studies to obtain evidence upon which to decide whether the instrument is valid for a new population or a different setting. In all cases, the researcher must present a clear and complete explanation for the choices she has made regarding reliability and validity. The consumer must then judge the degree to which the researcher has provided adequate and theoretically sound rationale. Although I have tried to touch on most of the important concepts related to measurement reliability and validity, it is beyond the scope of this column to be exhaustive. There are textbooks devoted entirely to specific measurement issues if readers require more in-depth knowledge.

  7. Assessing reliability and validity measures in managed care studies.

    PubMed

    Montoya, Isaac D

    2003-01-01

    To review the reliability and validity literature and develop an understanding of these concepts as applied to managed care studies. Reliability is a test of how well an instrument measures the same input at varying times and under varying conditions. Validity is a test of how accurately an instrument measures what one believes is being measured. A review of reliability and validity instructional material was conducted. Studies of managed care practices and programs abound. However, many of these studies utilize measurement instruments that were developed for other purposes or for a population other than the one being sampled. In other cases, instruments have been developed without any testing of the instrument's performance. The lack of reliability and validity information may limit the value of these studies. This is particularly true when data are collected for one purpose and used for another. The usefulness of certain studies without reliability and validity measures is questionable, especially in cases where the literature contradicts itself.

  8. Humidifier Disinfectants Are a Cause of Lung Injury among Adults in South Korea: A Community-Based Case-Control Study

    PubMed Central

    Kwon, Geun-Yong; Gwack, Jin; Park, Young-Joon; Youn, Seung-Ki; Kwon, Jun-Wook; Yang, Byung-Guk; Lee, Moo-Song; Jung, Miran; Lee, Hanyi; Jun, Byung-Yool; Lim, Hyun-Sul

    2016-01-01

    Background An outbreak of lung injury among South Korean adults was examined in a hospital-based case-control study, and the suspected cause was exposure to humidifier disinfectant (HD). However, a case-control study with community-dwelling controls was needed to validate the previous study's findings and to confirm the exposure-response relationship between HD and lung injury. Methods Each case of lung injury was matched with four community-dwelling controls, according to age (±3 years), sex, residence, and history of childbirth since 2006 (for women). Environmental risk factors, which included type and use of humidifier and HD, were investigated using a structured questionnaire during August 2011. The exposure to HD was calculated for both cases and controls, and the corresponding risks of lung injury were compared. Results Among 28 eligible cases, 16 patients agreed to participate, and 60 matched controls were considered eligible for this study. The cases were more likely to have been exposed to HD (odds ratio: 116.1, 95% confidence interval: 6.5–2,063.7). All cases were exposed to HDs containing polyhexamethyleneguanidine phosphate, and the risk of lung injury increased with the cumulative exposure, duration of exposure, and exposure per day. Conclusions This study revealed a statistically significant exposure-response relationship between HD and lung injury. Therefore, continuous monitoring and stricter evaluation of environmental chemicals' safety should be conducted. PMID:26990641
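    An odds ratio of this kind is computed from a 2x2 exposure table of cases versus controls. A hedged sketch with invented counts, using a Woolf (log-scale) confidence interval; note that the study's matched design calls for a conditional analysis, so this unmatched calculation is illustrative only:

```python
# Odds ratio with a Woolf (log-scale) 95% CI from an unmatched 2x2 exposure
# table. The counts below are invented, not the study's data.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int):
    """a=exposed cases, b=unexposed cases, c=exposed controls, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=14, b=2, c=10, d=50)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```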

  9. Can Findings from Randomized Controlled Trials of Social Skills Training in Autism Spectrum Disorder Be Generalized? The Neglected Dimension of External Validity

    ERIC Educational Resources Information Center

    Jonsson, Ulf; Olsson, Nora Choque; Bölte, Sven

    2016-01-01

    Systematic reviews have traditionally focused on internal validity, while external validity often has been overlooked. In this study, we systematically reviewed determinants of external validity in the accumulated randomized controlled trials of social skills group interventions for children and adolescents with autism spectrum disorder. We…

  10. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    PubMed

    Juneja, Prabhjot; Evans, Philip M; Harris, Emma J

    2013-08-01

    Validation is required to ensure that automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of a true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate way to combine the information from multiple expert outlines into a single validation metric is unclear, and none considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric that uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required in different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pair-wise Dice similarity coefficient (DSC) and found to be more sensitive than the pair-wise DSC to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
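
    The abstract does not give the VI formula itself, but the pair-wise DSC baseline it is compared against is standard: the overlap between the algorithmic segmentation and each expert outline, averaged over experts. A minimal sketch with hypothetical voxel-index sets:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel indices."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical expert outlines and an algorithmic segmentation:
expert1 = set(range(0, 100))
expert2 = set(range(5, 105))
algo = set(range(2, 102))

# Pair-wise DSC validation averages the agreement with each expert.
pairwise = (dice(algo, expert1) + dice(algo, expert2)) / 2
```

    A single averaged number like this hides how much the experts disagree with each other, which is the gap the VI is designed to address.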

  11. Experimental study of isolas in nonlinear systems featuring modal interactions

    PubMed Central

    Noël, Jean-Philippe; Virgin, Lawrence N.; Kerschen, Gaëtan

    2018-01-01

    The objective of the present paper is to provide experimental evidence of isolated resonances in the frequency response of nonlinear mechanical systems. More specifically, this work explores the presence of isolas, which are periodic solutions detached from the main frequency response, in the case of a nonlinear set-up consisting of two masses sliding on a horizontal guide. A careful experimental investigation of isolas is carried out using responses to swept-sine and stepped-sine excitations. The experimental findings are validated with advanced numerical simulations combining nonlinear modal analysis and bifurcation monitoring. In particular, the interactions between two nonlinear normal modes are shown to be responsible for the creation of the isolas. PMID:29584758

  12. Diagnosis Related Groups as a Casemix/Management Tool for Hospice Patients

    PubMed Central

    Johnson-Hürzeler, R.; Leary, Robert J.; Hill, Claire L.

    1983-01-01

    To control the costs of care, and to remain prepared for changes in reimbursement methodologies, health care organizations are beginning to analyze their casemix and their costs per case of providing care. Increasing importance is thus assigned to the search for valid casemix measures and to the construction of information systems which will support casemix investigations. After two years of information systems development, The Connecticut Hospice has begun its search for casemix measures that are applicable to the care of the dying. In this paper, we present our findings on the application of one casemix measure - the DRG - in the specialized area of nonsurgical care of the terminally ill.

  13. Possible Experiment for the Demonstration of Neutron Waves Interaction with Spatially Oscillating Potential

    NASA Astrophysics Data System (ADS)

    Miloi, Mădălina Mihaela; Goryunov, Semyon; Kulin, German

    2018-04-01

    A wide range of problems in neutron optics is well described by a theory based on the effective potential model. It has been assumed that the concept of the effective potential in neutron optics has a limited region of validity and ceases to be correct in the case of giant acceleration of matter. To test this hypothesis, a new ultracold neutron experiment for the observation of neutron interaction with a potential structure oscillating in space was proposed. The report focuses on model calculations of the topography of a sample surface that oscillates in space. These calculations are necessary to find the optimal parameters and geometry of the planned experiment.

  14. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  15. Author Response to Sabour (2018), "Comment on Hall et al. (2017), 'How to Choose Between Measures of Tinnitus Loudness for Clinical Research? A Report on the Reliability and Validity of an Investigator-Administered Test and a Patient-Reported Measure Using Baseline Data Collected in a Phase IIa Drug Trial'".

    PubMed

    Hall, Deborah A; Mehta, Rajnikant L; Fackrell, Kathryn

    2018-03-08

    The authors respond to a letter to the editor (Sabour, 2018) concerning the interpretation of validity in the context of evaluating treatment-related change in tinnitus loudness over time. The authors refer to several landmark methodological publications and an international standard concerning the validity of patient-reported outcome measurement instruments. The tinnitus loudness rating performed better against our reported acceptability criteria for (face and convergent) validity than did the tinnitus loudness matching test. It is important to distinguish between tests that evaluate the validity of measuring treatment-related change over time and tests that quantify the accuracy of diagnosing tinnitus as a case and non-case.

  16. The cost of cancer registry operations: Impact of volume on cost per case for core and enhanced registry activities

    PubMed Central

    Subramanian, Sujha; Tangka, Florence K.L.; Beebe, Maggie Cole; Trebino, Diana; Weir, Hannah K.; Babcock, Frances

    2016-01-01

    Background: Cancer registration data are vital for creating evidence-based policies and interventions. Quantifying the resources needed for cancer registration activities and identifying potential efficiencies are critically important to ensure sustainability of cancer registry operations. Methods: Using a previously validated web-based cost assessment tool, we collected activity-based cost data and report findings using 3 years of data from 40 National Program of Cancer Registries grantees. We stratified registries by volume: low-volume included fewer than 10,000 cases, medium-volume included 10,000–50,000 cases, and high-volume included more than 50,000 cases. Results: Low-volume cancer registries incurred an average of $93.11 to report a case (without in-kind contributions) compared with $27.70 incurred by high-volume registries. Across all registries, the highest cost per case was incurred for data collection and abstraction ($8.33), management ($6.86), and administration ($4.99). Low- and medium-volume registries have higher costs than high-volume registries for all key activities. Conclusions: Some cost differences by volume can be explained by the large fixed costs required for administering and performing registration activities, but other reasons may include the quality of the data initially submitted to the registries from reporting sources such as hospitals and pathology laboratories. Automation or efficiency improvements in data collection can potentially reduce overall costs. PMID:26702880

  17. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information, which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR or for screening (exclusion) of features, events, and processes (FEPs), and on the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and for additional model development activities or data collection, that will remedy model validation review findings in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting the technical qualifications discussed in Section 5.

  18. Applying the food technology neophobia scale in a developing country context. A case-study on processed matooke (cooking banana) flour in Central Uganda.

    PubMed

    De Steur, Hans; Odongo, Walter; Gellynck, Xavier

    2016-01-01

    The success of new food technologies largely depends on consumers' behavioral responses to the innovation. In Eastern Africa, and Uganda in particular, a technology to process matooke into flour has been introduced with limited success. We measure and apply the Food Technology Neophobia Scale (FTNS) to this specific case. This technique has been increasingly used in consumer research to determine consumers' fear of foods produced by novel technologies. Although it has been successful in developed countries, the low number and limited scope of past studies underline the need for testing its applicability in a developing country context. Data were collected from 209 matooke consumers from Central Uganda. In general, respondents are relatively neophobic towards the new technology, with an average FTNS score of 58.7%, which hampers the success of processed matooke flour. Besides socio-demographic indicators, 'risk perception', 'healthiness' and the 'necessity of technologies' were key factors that influenced consumers' preference for processed matooke flour. Benchmarking the findings against previous FTNS surveys allows evaluation of factor solutions and comparison of standardized FTNS scores, and further lends support for the multidimensionality of the FTNS. Being the first application in a developing country context, this study provides a case for examining food technology neophobia for processed staple crops in various regions and cultures. Nevertheless, research is needed to replicate this method and evaluate the external validity of our findings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A case-control study of hormonal exposures as etiologic factors for ALS in women: Euro-MOTOR.

    PubMed

    Rooney, James P K; Visser, Anne E; D'Ovidio, Fabrizio; Vermeulen, Roel; Beghi, Ettore; Chio, Adriano; Veldink, Jan H; Logroscino, Giancarlo; van den Berg, Leonard H; Hardiman, Orla

    2017-09-19

    To investigate the role of hormonal risk factors for amyotrophic lateral sclerosis (ALS) among women from 3 European countries. ALS cases and matched controls were recruited over 4 years in Ireland, Italy, and the Netherlands. Hormonal exposures, including reproductive history, breastfeeding, contraceptive use, hormonal replacement therapy, and gynecologic surgical history, were recorded with a validated questionnaire. Logistic regression models adjusted for age, education, study site, smoking, alcohol, and physical activity were used to determine the association between female hormones and ALS risk. We included 653 patients and 1,217 controls. Oral contraceptive use was higher among controls (odds ratio [OR] 0.65, 95% confidence interval [CI] 0.51-0.84), and a dose-response effect was apparent. Hormone replacement therapy (HRT) was associated with a reduced risk of ALS only in the Netherlands (OR = 0.57, 95% CI 0.37-0.85). These findings were robust to sensitivity analysis, but there was some heterogeneity across study sites. This large case-control study across 3 different countries has demonstrated an association between exogenous estrogens and progestogens and reduced odds of ALS in women. These results are at variance with previous findings, which may be partly explained by differential regulatory, social, and cultural attitudes toward pregnancy, birth control, and HRT across the countries included. Our results indicate that hormonal factors may be important etiologic factors in ALS; however, a full understanding requires further investigation. © 2017 American Academy of Neurology.

  20. CHEK2 1100delC, IVS2+1G>A and I157T mutations are not present in colorectal cancer cases from Turkish population.

    PubMed

    Bayram, Süleyman; Topaktaş, Mehmet; Akkız, Hikmet; Bekar, Aynur; Akgöllü, Ersin

    2012-10-01

    The cell cycle checkpoint kinase 2 (CHEK2) protein participates in the DNA damage response in many cell types. Germline mutations in CHEK2 (1100delC, IVS2+1G>A and I157T) impair serine/threonine kinase activity and have been associated with a range of cancer types. This hospital-based case-control study aimed to investigate whether the CHEK2 1100delC, IVS2+1G>A and I157T mutations play an important role in the development of colorectal cancer (CRC) in the Turkish population. A total of 210 CRC cases and 446 cancer-free controls were genotyped for CHEK2 mutations by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and allele-specific polymerase chain reaction (AS-PCR) methods. We did not find the CHEK2 1100delC, IVS2+1G>A or I157T mutations in any of the Turkish subjects. Our results demonstrate for the first time that the CHEK2 1100delC, IVS2+1G>A and I157T mutations are not a genetic susceptibility factor for CRC in the Turkish population. Overall, our data suggest that genotyping of CHEK2 mutations in clinical settings in the Turkish population should not be recommended. However, independent studies are needed to validate our findings in larger series, as well as in patients of different ethnic origins. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Validation of an improved abnormality insertion method for medical image perception investigations

    NASA Astrophysics Data System (ADS)

    Madsen, Mark T.; Durst, Gregory R.; Caldwell, Robert T.; Schartz, Kevin M.; Thompson, Brad H.; Berbaum, Kevin S.

    2009-02-01

    The ability to insert abnormalities into clinical tomographic images makes image perception studies with medical images practical. We describe a new insertion technique, and its experimental validation, that uses complementary image masks to select an abnormality from a library and place it at a desired location. The method was validated using a 4-alternative forced-choice experiment. For each case, four quadrants were displayed simultaneously, each consisting of 5 consecutive frames of a chest CT with a pulmonary nodule. One quadrant was unaltered, while the other 3 had the nodule from the unaltered quadrant artificially inserted. Twenty-six different sets were generated and repeated with order scrambling, for a total of 52 cases. The cases were viewed by radiology staff and residents, who ranked each quadrant by realistic appearance. On average, the observers were able to correctly identify the unaltered quadrant in 42% of cases, and to identify the unaltered quadrant both times it appeared in 25% of cases. Consensus, defined by a majority of readers, correctly identified the unaltered quadrant in only 29% of the 52 cases. For repeats, the consensus observer successfully identified the unaltered quadrant only once. We conclude that the insertion method can be used to reliably place abnormalities in perception experiments.
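
    As a quick sanity check on figures like the 42% above: chance performance in a 4-alternative forced-choice task is 25%, so the probability of reaching, say, 22 or more correct out of 52 by guessing alone can be computed with an exact binomial tail. A sketch for illustration only (the paper's own statistical analysis may differ):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    correct identifications out of n cases when guessing with rate p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 4-AFC: probability of picking the unaltered quadrant by chance is 1/4.
# With 52 cases, 42% correct corresponds to about 22 hits.
p_chance = binom_tail(52, 22, 0.25)
```

    A small tail probability here would indicate that observers performed above chance, i.e., that the inserted nodules were not perfectly indistinguishable from real ones.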

  2. Reducing false positives of microcalcification detection systems by removal of breast arterial calcifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mordang, Jan-Jurre, E-mail: Jan-Jurre.Mordang@radboudumc.nl; Gubern-Mérida, Albert; Karssemeijer, Nico

    Purpose: In the past decades, computer-aided detection (CADe) systems have been developed to aid screening radiologists in the detection of malignant microcalcifications. These systems are useful to avoid perceptual oversights and can increase the radiologists’ detection rate. However, due to the high number of false positives marked by these CADe systems, they are not yet suitable as an independent reader. Breast arterial calcifications (BACs) are one of the most frequent false positives marked by CADe systems. In this study, a method is proposed for the elimination of BACs as positive findings. Removal of these false positives will increase the performance of the CADe system in finding malignant microcalcifications. Methods: A multistage method is proposed for the removal of BAC findings. The first stage consists of microcalcification candidate selection, segmentation and grouping of the microcalcifications, and classification to remove obvious false positives. In the second stage, a case-based selection is applied in which cases containing BACs are selected. In the final stage, BACs are removed from the selected cases. The BACs removal stage consists of a GentleBoost classifier trained on microcalcification features describing their shape, topology, and texture. Additionally, novel features are introduced to discriminate BACs from other positive findings. Results: The CADe system was evaluated with and without BACs removal. Both systems were applied to a validation set containing 1088 cases, of which 95 contained malignant microcalcifications. After bootstrapping, free-response receiver operating characteristic and receiver operating characteristic analyses were carried out. Performance of the two systems was compared at 0.98 and 0.95 specificity: the sensitivity increased from 37% to 52% at a specificity of 0.98, and from 62% to 76% at a specificity of 0.95. Partial areas under the curve in the specificity range of 0.8–1.0 were significantly different between the system without BACs removal and the system with BACs removal, 0.129 ± 0.009 versus 0.144 ± 0.008 (p<0.05), respectively. The sensitivity at one false positive per 50 cases and at one false positive per 25 cases increased as well: 37% versus 51% (p<0.05) and 58% versus 67% (p<0.05), respectively. The CADe system with BACs removal also reduces the number of false positives per case by 29% on average. The sensitivity achieved at one false positive per 50 cases without BACs removal can be achieved at one false positive per 80 cases with BACs removal. Conclusions: By using dedicated algorithms to detect and remove breast arterial calcifications, the performance of CADe systems can be improved, in particular at false positive rates representative of operating points used in screening.

  3. Epidemiology and burden of systemic lupus erythematosus in a Southern European population: data from the community-based lupus registry of Crete, Greece.

    PubMed

    Gergianaki, Irini; Fanouriakis, Antonis; Repa, Argyro; Tzanakakis, Michalis; Adamichou, Christina; Pompieri, Alexandra; Spirou, Giorgis; Bertsias, Antonios; Kabouraki, Eleni; Tzanakis, Ioannis; Chatzi, Leda; Sidiropoulos, Prodromos; Boumpas, Dimitrios T; Bertsias, George K

    2017-12-01

    Several population-based studies on systemic lupus erythematosus (SLE) have been reported, yet community-based, individual-case validated, comprehensive reports are missing. We studied the SLE epidemiology and burden on the island of Crete during 1999-2013. Multisource case-finding included patients ≥15 years old. Cases were ascertained by the ACR 1997 criteria, the SLICC 2012 criteria and rheumatologist diagnosis, and validated through synthesis of medical charts, administrative and patient-generated data. Overall age-adjusted/sex-adjusted incidence was 7.4 (95% CI 6.8 to 7.9) per 100 000 persons/year, with stabilising trends in women but increasing in men, and average (±SD) age of diagnosis at 43 (±15) years. Adjusted and crude prevalence (December 2013) was 123.4 (113.9 to 132.9) and 143 (133 to 154) per 100 000 (165 per 100 000 in urban vs 123 per 100 000 in rural regions, p<0.001), respectively. Age-adjusted/sex-adjusted nephritis incidence was 0.6 (0.4 to 0.8) with stable trends, whereas that of neuropsychiatric SLE was 0.5 (0.4 to 0.7) per 100 000 persons/year and increasing. Although half of prevalent cases had mild manifestations, 30.5% developed organ damage after 7.2 (±6.6) years of disease duration, with the neuropsychiatric domain most frequently afflicted, and 4.4% of patients with nephritis developed end-stage renal disease. The ACR 1997 and SLICC 2012 classification criteria showed high concordance (87%), yet physician-based diagnosis occurred earlier than criteria-based in about 20% of cases. By the use of a comprehensive methodology, we describe the full spectrum of SLE from the community to tertiary care, with almost half of the cases having mild disease, yet with significant damage accrual. SLE is not rare, affects predominantly middle-aged women and is increasingly recognised in men. Neuropsychiatric disease is an emerging frontier in lupus prevention and care. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. The aim was to develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. The risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), the Leeds Melanoma Case-Control Study (960 cases and 513 controls), the Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and the Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
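
    The AUC used for discrimination above has a simple rank interpretation: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. A minimal sketch with hypothetical risk scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen case scores higher
    than a randomly chosen control (Mann-Whitney U divided by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted risks for 4 melanoma cases and 5 controls:
cases = [0.9, 0.7, 0.6, 0.4]
controls = [0.8, 0.5, 0.3, 0.2, 0.1]
```

    On this scale, 0.5 is chance-level discrimination, so externally validated AUCs of 0.63-0.67, as reported above, reflect modest but real separation of cases from controls.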

  5. Non-Alcoholic Steatohepatitis (NASH): Risk Factors in Morbidly Obese Patients

    PubMed Central

    Losekann, Alexandre; Weston, Antonio C.; de Mattos, Angelo A.; Tovo, Cristiane V.; de Carli, Luis A.; Espindola, Marilia B.; Pioner, Sergio R.; Coral, Gabriela P.

    2015-01-01

    The aim was to investigate the prevalence of non-alcoholic steatohepatitis (NASH) and risk factors for hepatic fibrosis in morbidly obese patients submitted to bariatric surgery. This retrospective study recruited all patients submitted to bariatric surgery from January 2007 to December 2012 at a reference attendance center of Southern Brazil. Clinical and biochemical data were studied as a function of the histological findings of liver biopsies done during the surgery. Steatosis was present in 226 (90.4%) and NASH in 176 (70.4%) cases. The diagnosis of cirrhosis was established in four cases (1.6%) and fibrosis in 108 (43.2%). Risk factors associated with NASH at multivariate analysis were alanine aminotransferase (ALT) >1.5 times the upper limit of normal (ULN); glucose ≥ 126 mg/dL and triglycerides ≥ 150 mg/dL. All patients with ALT ≥ 1.5 times the ULN had NASH. When the presence of fibrosis was analyzed, ALT > 1.5 times the ULN and triglycerides ≥ 150 mg/dL were risk factors; furthermore, there was an increase of 1% in the prevalence of fibrosis for each year of age increase. Not only steatosis but also NASH is a frequent finding in morbidly obese patients. In the present study, ALT ≥ 1.5 times the ULN identified all patients with NASH; this finding needs to be further validated in other studies. Moreover, the presence of fibrosis was associated with ALT, triglycerides and age, identifying a subset of patients with more severe disease. PMID:26512661

  6. Job Embeddedness Demonstrates Incremental Validity When Predicting Turnover Intentions for Australian University Employees

    PubMed Central

    Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.

    2016-01-01

    Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817

  7. Farmer responses to multiple stresses in the face of global change: Assessing five case studies to enhance adaptation

    NASA Astrophysics Data System (ADS)

    Nicholas, K. A.; Feola, G.; Lerner, A. M.; Jain, M.; Montefrio, M.

    2013-12-01

    The global challenge of sustaining agricultural livelihoods and yields in the face of growing populations and increasing climate change is the topic of intense research. The role of on-the-ground decision-making by individual farmers actually producing food, fuel, and fiber is often studied in individual cases to determine its environmental, economic, and social effects. However, there are few efforts to link across studies in a way that provides opportunities to better understand empirical farmer behavior, design effective policies, and be able to aggregate from case studies to a broader scale. Here we synthesize existing literature to identify four general factors affecting farmer decision-making: local technical and socio-cultural contexts; actors and institutions involved in decision-making; multiple stressors at broader scales; and the temporal gradient of decision-making. We use these factors to compare five cases that illustrate agricultural decision-making and its impacts: cotton and castor farming in Gujarat, India; swidden cultivation of upland rice in the Philippines; potato cultivation in Andean Colombia; winegrowing in Northern California; and maize production in peri-urban central Mexico. These cases span a geographic and economic range of production systems, but we find that we are able to make valid comparisons and draw lessons common across all cases by using the four factors as an organizing principle. We also find that our understanding of why farmers make the decisions they do changes if we neglect to examine even one of the four general factors guiding decision-making. This suggests that these four factors are important to understanding farmer decision-making, and can be used to guide the design and interpretation of future studies, as well as be the subject of further research in and of themselves to promote an agricultural system that is resilient to climate and other global environmental changes.

  8. Cross-cultural adaptation, reliability and validity of the Spanish version of the Quality of Life in Adult Cancer Survivors (QLACS) questionnaire: application in a sample of short-term survivors.

    PubMed

    Escobar, Antonio; Trujillo-Martín, Maria del Mar; Rueda, Antonio; Pérez-Ruiz, Elisabeth; Avis, Nancy E; Bilbao, Amaia

    2015-11-16

The aim of this study was to validate the Quality of Life in Adult Cancer Survivors (QLACS) questionnaire in short-term Spanish cancer survivors. Patients with breast, colorectal or prostate cancer who had finished their initial cancer treatment 3 years before the beginning of this study completed the QLACS, WHOQOL, Short Form-36, Hospital Anxiety and Depression Scale, EORTC-QLQ-BR23 and EQ-5D. Cultural adaptation was made based on established guidelines. Reliability was evaluated using internal consistency and test-retest. Convergent validity was studied by means of Pearson's correlation coefficient. Structural validity was determined by a second-order confirmatory factor analysis (CFA), and Rasch analysis was used to assess the unidimensionality of the Generic and Cancer-specific scales. Cronbach's alpha values were above 0.7 in all domains and summary scales. Test-retest coefficients were 0.88 for the Generic and 0.82 for the Cancer-specific summary scales. The QLACS Generic summary scale was correlated with other generic criterion measures, SF-36 MCS (r = - 0.74) and EQ-VAS (r = - 0.63). The QLACS Cancer-specific scale had lower correlations with the same constructs. CFA provided satisfactory fit indices in all cases: the RMSEA value was 0.061, and the CFI and TLI values were 0.929 and 0.925, respectively. All factor loadings were higher than 0.40 and statistically significant (P < 0.001). The Generic summary scale had eight misfitting items; unidimensionality was supported for the remaining 20 items. The Cancer-specific summary scale showed four misfitting items; the remainder showed unidimensionality. The findings support the validity and reliability of the QLACS questionnaire for use in short-term cancer survivors.

  9. Validity of Principal Diagnoses in Discharge Summaries and ICD-10 Coding Assessments Based on National Health Data of Thailand.

    PubMed

    Sukanya, Chongthawonsatid

    2017-10-01

This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the database used for reimbursement purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codes often contained mistakes, particularly in the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and coding.
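The four metrics reported in audits like this one come from a standard 2x2 comparison of the recorded diagnosis against the gold-standard medical record review. A minimal sketch of the calculation, with invented counts (not the NHSO figures):

```python
# Hypothetical illustration: sensitivity, specificity, PPV, and NPV from a
# 2x2 confusion matrix. The counts passed below are invented for demonstration.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) as percentages."""
    sensitivity = 100 * tp / (tp + fn)   # true positives among actual cases
    specificity = 100 * tn / (tn + fp)   # true negatives among non-cases
    ppv = 100 * tp / (tp + fp)           # correct among positive calls
    npv = 100 * tn / (tn + fn)           # correct among negative calls
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=30, fp=20, fn=70, tn=9880)
print(f"sensitivity={sens:.1f}% specificity={spec:.1f}% PPV={ppv:.1f}% NPV={npv:.1f}%")
```

Note how rare conditions (large `tn`) yield high specificity and NPV almost automatically, which is why the low sensitivities are the telling numbers in this study.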

  10. A systematic review of validated methods for identifying transfusion-related ABO incompatibility reactions using administrative and claims data.

    PubMed

    Carnahan, Ryan M; Kee, Vicki R

    2012-01-01

This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, the Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data on 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Practical mental health assessment in primary care. Validity and utility of the Quick PsychoDiagnostics Panel.

    PubMed

    Shedler, J; Beck, A; Bensen, S

    2000-07-01

    Many case-finding instruments are available to help primary care physicians (PCPs) diagnose depression, but they are not widely used. Physicians often consider these instruments too time consuming or feel they do not provide sufficient diagnostic information. Our study examined the validity and utility of the Quick PsychoDiagnostics (QPD) Panel, an automated mental health test designed to meet the special needs of PCPs. The test screens for 9 common psychiatric disorders and requires no physician time to administer or score. We evaluated criterion validity relative to the Structured Clinical Interview for DSM-IV (SCID), and evaluated convergent validity by correlating QPD Panel scores with established mental health measures. Sensitivity to change was examined by readministering the test to patients pretreatment and posttreatment. Utility was evaluated through physician and patient satisfaction surveys. For major depression, sensitivity and specificity were 81% and 96%, respectively. For other disorders, sensitivities ranged from 69% to 98%, and specificities ranged from 90% to 97%. The depression severity score correlated highly with the Beck, Hamilton, Zung, and CES-D depression scales, and the anxiety score correlated highly with the Spielberger State-Trait Anxiety Inventory and the anxiety subscale of the Symptom Checklist 90 (Ps <.001). The test was sensitive to change. All PCPs agreed or strongly agreed that the QPD Panel "is convenient and easy to use," "can be used immediately by any physician," and "helps provide better patient care." Patients also rated the test favorably. The QPD Panel is a valid mental health assessment tool that can diagnose a range of common psychiatric disorders and is practical for routine use in primary care.

  12. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of distributions of radar reflectivity factors (and, by extension, rain rates). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, either one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
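Elderton's selection procedure amounts to computing Pearson's criterion kappa from the second, third, and fourth sample moments; kappa < 0 selects a Type 1 (beta-family) curve, 0 < kappa < 1 Type IV, and kappa > 1 Type VI. A minimal sketch of that computation (the samples below are toys, not radar-derived rain rates):

```python
# Pearson's criterion kappa from raw sample moments; a sketch of the
# curve-selection step described in the abstract, not the authors' code.

def pearson_kappa(xs):
    """kappa < 0 indicates a Pearson Type 1 (beta) curve."""
    n = len(xs)
    m = sum(xs) / n
    mu2 = sum((x - m) ** 2 for x in xs) / n   # central moments
    mu3 = sum((x - m) ** 3 for x in xs) / n
    mu4 = sum((x - m) ** 4 for x in xs) / n
    b1 = mu3 ** 2 / mu2 ** 3                  # beta_1: squared skewness
    b2 = mu4 / mu2 ** 2                       # beta_2: kurtosis (not excess)
    return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

print(pearson_kappa([1, 2, 3, 4, 5]))   # symmetric sample: kappa = 0
print(pearson_kappa([0, 0, 0, 1]))      # skewed sample: kappa < 0, Type 1 region
```

The sign and magnitude of kappa, not a goodness-of-fit test, drive the selection, which is consistent with the abstract's finding that Type 1 matches the shape without fitting the distributions closely.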

  13. Multimethod assessment of psychopathy in relation to factors of internalizing and externalizing from the Personality Assessment Inventory: the impact of method variance and suppressor effects.

    PubMed

    Blonigen, Daniel M; Patrick, Christopher J; Douglas, Kevin S; Poythress, Norman G; Skeem, Jennifer L; Lilienfeld, Scott O; Edens, John F; Krueger, Robert F

    2010-03-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multimethod approach to psychopathy assessment (self-report, interview and file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  14. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  15. Finding cannabinoids in hair does not prove cannabis consumption.

    PubMed

    Moosmann, Bjoern; Roth, Nadine; Auwärter, Volker

    2015-10-07

Hair analysis for cannabinoids is extensively applied in workplace drug testing and in child protection cases, although valid data on the incorporation of the main analytical targets, ∆9-tetrahydrocannabinol (THC) and 11-nor-9-carboxy-THC (THC-COOH), into human hair are widely missing. Furthermore, ∆9-tetrahydrocannabinolic acid A (THCA-A), the biogenetic precursor of THC, is found in the hair of persons who have solely handled cannabis material. In light of the serious consequences of positive test results, the mechanisms of drug incorporation into hair urgently need scientific evaluation. Here we show that neither THC nor THCA-A is incorporated into human hair in relevant amounts after systemic uptake. THC-COOH, which is considered incontestable proof of THC uptake according to the current scientific doctrine, was found in hair, but was also present in older hair segments that had already grown before the oral THC intake, and in sebum/sweat samples. Our studies show that all three cannabinoids can be present in the hair of non-consuming individuals because of transfer from cannabis consumers via their hands, their sebum/sweat, or cannabis smoke. This is of concern for e.g. child-custody cases, as cannabinoid findings in a child's hair may be caused by close contact with cannabis consumers rather than by inhalation of side-stream smoke.

  16. MATES in Construction: Impact of a Multimodal, Community-Based Program for Suicide Prevention in the Construction Industry

    PubMed Central

    Gullestrup, Jorgen; Lequertier, Belinda; Martin, Graham

    2011-01-01

A large-scale workplace-based suicide prevention and early intervention program was delivered to over 9,000 construction workers on building sites across Queensland. Intervention components included universal General Awareness Training (GAT; general mental health with a focus on suicide prevention); gatekeeper training provided to construction worker volunteer ‘Connectors’; Suicide First Aid (ASIST) training offered to key workers; outreach support provided by trained and supervised MIC staff; a state-wide suicide prevention hotline; a case management service; and postvention support provided in the event of a suicide. Findings from over 7,000 workers (April 2008 to November 2010) are reported, indicating strong construction industry support, with 67% of building sites and employers approached agreeing to participate in MIC. GAT participants demonstrated significantly increased suicide prevention awareness compared with a comparison group. Connector training participants rated MIC as helpful and effective, felt prepared to intervene with a suicidal person, and knew where to seek help for a suicidal individual following the training. Workers engaged positively with the after-hours crisis support phone line and case management. MIC provided postvention support to 10 non-MIC sites and to sites engaged with MIC but not yet MIC-compliant. Current findings support the potential effectiveness and social validity of MIC for preventing suicide in construction workers. PMID:22163201

  17. Making the Case for Practice-Based Research and the Imperative Role of Design Practitioners.

    PubMed

    Freihoefer, Kara; Zborowsky, Terri

    2017-04-01

    The purpose of this article is to justify the need for evidence-based design (EBD) in a research-based architecture and design practice. This article examines the current state of practice-based research (PBR), supports the need for EBD, illustrates PBR methods that can be applied to design work, and explores how findings can be used as a decision-making tool during design and as a validation tool during postoccupancy. As a result, design professions' body of knowledge will advance and practitioners will be better informed to protect the health, safety, and welfare of the society. Furthermore, characteristics of Friedman's progressive research program are used as a framework to examine the current state of PBR in design practice. A modified EBD approach is proposed and showcased with a case study of a renovated inpatient unit. The modified approach demonstrates how a highly integrated project team, especially the role of design practitioners, contributed to the success of utilizing baseline findings and evidence in decision-making throughout the design process. Lastly, recommendations and resources for learning research concepts are provided for practitioners. It is the role of practitioners to pave the way for the next generation of design professionals, as the request and expectation for research become more prevalent in design practice.

  18. (Small) Resonant non-Gaussianities: Signatures of a Discrete Shift Symmetry in the Effective Field Theory of Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behbahani, Siavosh R.; /SLAC /Stanford U., Phys. Dept. /Boston U.; Dymarsky, Anatoly

    2012-06-06

We apply the Effective Field Theory of Inflation to study the case where the continuous shift symmetry of the Goldstone boson π is softly broken to a discrete subgroup. This case includes and generalizes recently proposed String Theory inspired models of Inflation based on Axion Monodromy. The models we study have the property that the 2-point function oscillates as a function of the wavenumber, leading to oscillations in the CMB power spectrum. The non-linear realization of time diffeomorphisms induces some self-interactions for the Goldstone boson that lead to a peculiar non-Gaussianity whose shape oscillates as a function of the wavenumber. We find that in the regime of validity of the effective theory, the oscillatory signal contained in the n-point correlation functions, with n > 2, is smaller than the one contained in the 2-point function, implying that the signature of oscillations, if ever detected, will be easier to find first in the 2-point function, and only then in the higher order correlation functions. Still the signal contained in higher-order correlation functions, that we study here in generality, could be detected at a subleading level, providing a very compelling consistency check for an approximate discrete shift symmetry being realized during inflation.

  19. A Case Study: Follow-Up Assessment of Facilitated Communication.

    ERIC Educational Resources Information Center

    Simon, Elliott W.; And Others

    1996-01-01

    This study of an adolescent with multiple disabilities, including moderate mental retardation, who was reported to engage in validated facilitated communication (FC) found he did not engage in validated FC; performance was equivalent whether food or nonfood reinforcers were used; and the Picture Exchange Communication System was a valid and…

  20. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    ERIC Educational Resources Information Center

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  1. Developing the Persian version of the homophone meaning generation test

    PubMed Central

    Ebrahimipour, Mona; Motamed, Mohammad Reza; Ashayeri, Hassan; Modarresi, Yahya; Kamali, Mohammad

    2016-01-01

Background: Finding the right word is a necessity in communication, and its evaluation has always been a challenging clinical issue, suggesting the need for valid and reliable measurements. The Homophone Meaning Generation Test (HMGT) can measure the ability to switch between verbal concepts, which is required in word retrieval. The purpose of this study was to adapt and validate the Persian version of the HMGT. Methods: The first phase involved the adaptation of the HMGT to the Persian language. The second phase concerned the psychometric testing. The word-finding performance was assessed in 90 Persian-speaking healthy individuals (20-50 years old; 45 males and 45 females) through three naming tasks: Semantic Fluency, Phonemic Fluency, and Homophone Meaning Generation Test. The participants had no history of neurological or psychiatric diseases, alcohol abuse, severe depression, or history of speech, language, or learning problems. Results: The internal consistency coefficient was larger than 0.8 for all the items with a total Cronbach’s alpha of 0.80. Interrater and intrarater reliability were also excellent. The validity of all items was above 0.77, and the content validity index (0.99) was appropriate. The Persian HMGT had strong convergent validity with semantic and phonemic switching and adequate divergent validity with semantic and phonemic clustering. Conclusion: The Persian version of the Homophone Meaning Generation Test is an appropriate, valid, and reliable test to evaluate the ability to switch between verbal concepts in the assessment of word-finding performance. PMID:27390705

  2. Developing the Persian version of the homophone meaning generation test.

    PubMed

    Ebrahimipour, Mona; Motamed, Mohammad Reza; Ashayeri, Hassan; Modarresi, Yahya; Kamali, Mohammad

    2016-01-01

Finding the right word is a necessity in communication, and its evaluation has always been a challenging clinical issue, suggesting the need for valid and reliable measurements. The Homophone Meaning Generation Test (HMGT) can measure the ability to switch between verbal concepts, which is required in word retrieval. The purpose of this study was to adapt and validate the Persian version of the HMGT. The first phase involved the adaptation of the HMGT to the Persian language. The second phase concerned the psychometric testing. The word-finding performance was assessed in 90 Persian-speaking healthy individuals (20-50 years old; 45 males and 45 females) through three naming tasks: Semantic Fluency, Phonemic Fluency, and Homophone Meaning Generation Test. The participants had no history of neurological or psychiatric diseases, alcohol abuse, severe depression, or history of speech, language, or learning problems. The internal consistency coefficient was larger than 0.8 for all the items with a total Cronbach's alpha of 0.80. Interrater and intrarater reliability were also excellent. The validity of all items was above 0.77, and the content validity index (0.99) was appropriate. The Persian HMGT had strong convergent validity with semantic and phonemic switching and adequate divergent validity with semantic and phonemic clustering. The Persian version of the Homophone Meaning Generation Test is an appropriate, valid, and reliable test to evaluate the ability to switch between verbal concepts in the assessment of word-finding performance.
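For reference, the internal-consistency statistic reported in records like this one, Cronbach's alpha, is computed from the per-item variances and the variance of the total score. A minimal sketch with toy scores (not the HMGT data):

```python
# Cronbach's alpha from a respondents-by-items score matrix; an illustrative
# formula sketch, with made-up scores rather than study data.

def cronbach_alpha(rows):
    """rows: list of respondents, each a list of item scores."""
    k = len(rows[0])                          # number of items

    def var(xs):                              # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])   # variance of summed scale scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly covarying items give alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Values above 0.7 to 0.8, as in this study, are the conventional threshold for acceptable internal consistency.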

  3. Primary central nervous system lymphoma and glioblastoma differentiation based on conventional magnetic resonance imaging by high-throughput SIFT features.

    PubMed

    Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping

    2018-07-01

Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment tumors automatically. A modified scale invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from the segmented tumors. Fisher vectors were used to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to select SIFT features with PCNSL-GBM discrimination ability. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation, based on 20 cases of PCNSL and 44 cases of GBM, was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA. The proposed method produced PCNSL vs. GBM differentiation with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort). Owing to the local voxel arrangement characterization provided by the SIFT features, the proposed method achieved more competitive PCNSL-GBM differentiation performance using conventional MRI than methods based on advanced MRI.
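The leave-one-out validation scheme described above can be sketched as follows; a nearest-class-mean rule stands in for the support vector machine, and the one-dimensional toy "features" are invented (the study itself used 496 GA-selected SIFT features):

```python
# Leave-one-out cross-validation loop: each case is held out in turn, the
# classifier is refit on the remaining n-1 cases, and the held-out case is
# scored. Nearest-class-mean substitutes here for the paper's SVM.

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a nearest-class-mean classifier on 1-D features."""
    correct = 0
    for i in range(len(features)):
        train = [(f, y) for j, (f, y) in enumerate(zip(features, labels)) if j != i]
        # class means estimated from the n-1 training cases only
        means = {}
        for cls in set(y for _, y in train):
            vals = [f for f, y in train if y == cls]
            means[cls] = sum(vals) / len(vals)
        pred = min(means, key=lambda cls: abs(features[i] - means[cls]))
        correct += pred == labels[i]
    return correct / len(features)

features = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # toy, well-separated data
labels = ["PCNSL"] * 3 + ["GBM"] * 3
print(loo_accuracy(features, labels))
```

The key property, refitting without the held-out case, is what makes the reported cross-validation accuracy an honest estimate for small cohorts like 20 + 44 cases.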

  4. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

To validate the Advanced Collapsed cone Engine (ACE), the dose calculation engine of the Oncentra Brachy (OcB) treatment planning system, for an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of the American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against the TG-43 methodology. Level 2 uses real patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. The ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both the accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell position to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed to overlap between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated against MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  5. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue).

    PubMed

    Larrabee, Glenn J

    2014-01-01

    Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
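Under the simplifying assumption that PVT failures in a valid responder are independent events with a common per-test false positive rate (an assumption real test batteries only approximate, which is part of the debate above), the aggregate false positive rate of a "fail ≥2 of 7" criterion is a binomial tail probability. A toy sketch with a hypothetical 5% per-test rate:

```python
# Binomial tail: probability that a validly performing examinee fails at
# least k of n PVTs, assuming independence and a common per-test false
# positive rate p. Both p and the independence assumption are illustrative.
from math import comb

def p_fail_at_least(k, n, p):
    """P(at least k of n independent tests failed), each failed with prob p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# With p = 0.05 across 7 PVTs, requiring >= 2 failures keeps the aggregate
# false positive rate under 5%; a single failure would trigger far more often.
print(round(p_fail_at_least(2, 7, 0.05), 4))
print(round(p_fail_at_least(1, 7, 0.05), 4))
```

This is why the criterion of two or more failures, rather than any single failure, is central to keeping false positive rates near or below 10% in clinical groups.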

  6. The convergent and discriminant validity of burnout measures in sport: a multi-trait/multi-method analysis.

    PubMed

    Cresswell, Scott L; Eklund, Robert C

    2006-02-01

    Athlete burnout research has been hampered by the lack of an adequate measurement tool. The Athlete Burnout Questionnaire (ABQ) and the Maslach Burnout Inventory General Survey (MBI-GS) are two recently developed self-report instruments designed to assess burnout. The convergent and discriminant validity of the ABQ and MBI-GS were assessed through multi-trait/multi-method analysis with a sporting population. Overall, the ABQ and the MBI-GS displayed acceptable convergent validity with matching subscales highly correlated, and satisfactory internal discriminant validity with lower correlations between non-matching subscales. Both scales also indicated an adequate discrimination between the concepts of burnout and depression. These findings add support to previous findings in non-sporting populations that depression and burnout are separate constructs. Based on the psychometric results, construct validity analysis and practical considerations, the results support the use of the ABQ to assess athlete burnout.

  7. Maternal sensitivity and attachment security in Thailand: cross-cultural validation of Western measures.

    PubMed

    Chaimongkol, Nujjaree N; Flick, Louise H

    2006-01-01

The purpose of this study was to examine the psychometric properties of Thai versions of the Maternal Behavior Q-Sort (MBQS), Caldwell's HOME, and the Attachment Q-set (AQS). A sample of 110 Thai mother-infant dyads was studied. The Content Validity Indices (CVIs) of the Thai MBQS, HOME, and AQS were between 91% and 99%. Internal consistency of the HOME was .71. Interobserver reliability coefficients of the MBQS, HOME, and AQS were .95, .87, and .87, respectively. Convergent validity was supported by a positive correlation between the MBQS and the HOME (r = .29, p < .001). A positive correlation of .45 (p < .001) between the scores of the MBQS and the AQS indicated concurrent validity of these scales. Study findings indicate the Thai MBQS, HOME, and AQS are reliable and valid in this Thai sample and suggest that the Thai versions reflect concepts similar to those in the original English versions.

  8. Instruments Measuring Integrated Care: A Systematic Review of Measurement Properties.

    PubMed

    Bautista, Mary Ann C; Nurjono, Milawaty; Lim, Yee Wei; Dessers, Ezra; Vrijhoef, Hubertus Jm

    2016-12-01

    Policy Points: Investigations on systematic methodologies for measuring integrated care should coincide with the growing interest in this field of research. A systematic review of instruments provides insights into integrated care measurement, including setting the research agenda for validating available instruments and informing the decision to develop new ones. This study is the first systematic review of instruments measuring integrated care with an evidence synthesis of the measurement properties. We found 209 index instruments measuring different constructs related to integrated care; the strength of evidence on the adequacy of the majority of their measurement properties remained largely unassessed. Integrated care is an important strategy for increasing health system performance. Despite its growing significance, detailed evidence on the measurement properties of integrated care instruments remains vague and limited. Our systematic review aims to provide evidence on the state of the art in measuring integrated care. Our comprehensive systematic review framework builds on the Rainbow Model for Integrated Care (RMIC). We searched MEDLINE/PubMed for published articles on the measurement properties of instruments measuring integrated care and identified eligible articles using a standard set of selection criteria. We assessed the methodological quality of every validation study reported using the COSMIN checklist and extracted data on study and instrument characteristics. We also evaluated the measurement properties of each examined instrument per validation study and provided a best evidence synthesis on the adequacy of measurement properties of the index instruments. From the 300 eligible articles, we assessed the methodological quality of 379 validation studies from which we identified 209 index instruments measuring integrated care constructs. 
The majority of studies reported on instruments measuring constructs related to care integration (33%) and patient-centered care (49%); fewer studies measured care continuity/comprehensive care (15%) and care coordination/case management (3%). We mapped 84% of the measured constructs to the clinical integration domain of the RMIC, with fewer constructs related to the domains of professional (3.7%), organizational (3.4%), and functional (0.5%) integration. Only 8% of the instruments were mapped to a combination of domains; none were mapped exclusively to the system or normative integration domains. The majority of instruments were administered to either patients (60%) or health care providers (20%). Of the measurement properties, responsiveness (4%), measurement error (7%), and criterion (12%) and cross-cultural validity (14%) were less commonly reported. We found <50% of the validation studies to be of good or excellent quality for any of the measurement properties. Only a minority of index instruments showed strong evidence of positive findings for internal consistency (15%), content validity (19%), and structural validity (7%); with moderate evidence of positive findings for internal consistency (14%) and construct validity (14%). Our results suggest that the quality of measurement properties of instruments measuring integrated care is in need of improvement with the less-studied constructs and domains to become part of newly developed instruments. © 2016 Milbank Memorial Fund.

  9. MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer.

    PubMed

    Rohan, Thomas; Ye, Kenny; Wang, Yihong; Glass, Andrew G; Ginsberg, Mindy; Loudig, Olivier

    2018-01-01

    MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC.
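    The paired t-test used in the validation stage above operates on per-pair differences between each case and its matched control. A minimal sketch with hypothetical expression values (not the study's data):

```python
import math

def paired_t(case_values, control_values):
    """Paired t statistic: mean of the matched-pair differences divided
    by its standard error (one matched case-control pair per entry)."""
    diffs = [c - k for c, k in zip(case_values, control_values)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical microRNA expression for 5 matched case-control pairs
t = paired_t([2.1, 1.9, 2.4, 2.0, 2.2], [2.0, 2.1, 2.3, 1.9, 2.1])
```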

  10. MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer

    PubMed Central

    Ye, Kenny; Wang, Yihong; Ginsberg, Mindy; Loudig, Olivier

    2018-01-01

    MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC. PMID:29432432

  11. Variable Case Detection and Many Unreported Cases of Surgical-Site Infection Following Colon Surgery and Abdominal Hysterectomy in a Statewide Validation.

    PubMed

    Calderwood, Michael S; Huang, Susan S; Keller, Vicki; Bruce, Christina B; Kazerouni, N Neely; Janssen, Lynn

    2017-09-01

    OBJECTIVE: To assess hospital surgical-site infection (SSI) identification and reporting following colon surgery and abdominal hysterectomy via a statewide external validation. METHODS: Infection preventionists (IPs) from the California Department of Public Health (CDPH) performed on-site SSI validation for surgical procedures performed in hospitals that voluntarily participated. Validation involved chart review of SSI cases previously reported by hospitals plus review of patient records flagged for review by claims codes suggestive of SSI. We assessed the sensitivity of traditional surveillance and the added benefit of claims-based surveillance. We also evaluated the positive predictive value of claims-based surveillance (ie, workload efficiency). RESULTS: Upon validation review, CDPH IPs identified 239 SSIs following colon surgery at 42 hospitals and 76 SSIs following abdominal hysterectomy at 34 hospitals. For colon surgery, traditional surveillance had a sensitivity of 50% (47% for deep incisional or organ/space [DI/OS] SSI), compared to 84% (88% for DI/OS SSI) for claims-based surveillance. For abdominal hysterectomy, traditional surveillance had a sensitivity of 68% (67% for DI/OS SSI) compared to 74% (78% for DI/OS SSI) for claims-based surveillance. Claims-based surveillance was also efficient, with 1 SSI identified for every 2 patients flagged for review who had undergone abdominal hysterectomy and for every 2.6 patients flagged for review who had undergone colon surgery. Overall, CDPH identified previously unreported SSIs in 74% of validation hospitals performing colon surgery and 35% of validation hospitals performing abdominal hysterectomy. CONCLUSIONS: Claims-based surveillance is a standardized approach that hospitals can use to augment traditional surveillance methods and health departments can use for external validation. Infect Control Hosp Epidemiol 2017;38:1091-1097.
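    The sensitivity and workload-efficiency figures in this record follow from simple count ratios. The counts in this sketch are hypothetical, chosen only to roughly reproduce the colon-surgery numbers in the abstract (84% sensitivity, about 1 SSI per 2.6 charts reviewed):

```python
def surveillance_metrics(validated_ssis, detected_by_method, charts_flagged):
    """Sensitivity = SSIs a method detected / all validated SSIs;
    workload efficiency = charts reviewed per SSI found."""
    sensitivity = detected_by_method / validated_ssis
    charts_per_ssi = charts_flagged / detected_by_method
    return sensitivity, charts_per_ssi

# Hypothetical counts: 239 validated colon-surgery SSIs, 201 caught by
# claims-based flags, 520 charts flagged for review in total
sens, charts = surveillance_metrics(239, 201, 520)
print(f"sensitivity={sens:.0%}, 1 SSI per {charts:.1f} charts reviewed")
```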

  12. Genome-wide methylation profiling identifies an essential role of reactive oxygen species in pediatric glioblastoma multiforme and validates a methylome specific for H3 histone family 3A with absence of G-CIMP/isocitrate dehydrogenase 1 mutation.

    PubMed

    Jha, Prerana; Pia Patric, Irene Rosita; Shukla, Sudhanshu; Pathak, Pankaj; Pal, Jagriti; Sharma, Vikas; Thinagararanjan, Sivaarumugam; Santosh, Vani; Suri, Vaishali; Sharma, Mehar Chand; Arivazhagan, Arimappamagan; Suri, Ashish; Gupta, Deepak; Somasundaram, Kumaravel; Sarkar, Chitra

    2014-12-01

    Pediatric glioblastoma multiforme (GBM) is rare, and only a single seminal study has shown an association of histone H3.3 and isocitrate dehydrogenase (IDH)1 mutations with a DNA methylation signature. The present study aims to validate these findings in an independent cohort of pediatric GBM, compare it with adult GBM, and evaluate the involvement of important functionally altered pathways. Genome-wide methylation profiling of 21 pediatric GBM cases was done and compared with adult GBM data (GSE22867). We performed gene mutation analysis of IDH1 and H3 histone family 3A (H3F3A), status evaluation of glioma cytosine-phosphate-guanine island methylator phenotype (G-CIMP), and Gene Ontology analysis. Experimental evaluation of reactive oxygen species (ROS) association was also done. Distinct differences were noted between the methylomes of pediatric and adult GBM. Pediatric GBM was characterized by 94 hypermethylated and 1206 hypomethylated cytosine-phosphate-guanine (CpG) islands, with 3 distinct clusters showing a trend toward prognostic correlation. Interestingly, none of the pediatric GBM cases showed G-CIMP/IDH1 mutation. Gene Ontology analysis identified ROS association in pediatric GBM, which was experimentally validated. H3F3A mutants (36.4%; all K27M) harbored distinct methylomes and showed enrichment of processes related to neuronal development, differentiation, and cell-fate commitment. Our study confirms that pediatric GBM has a distinct methylome compared with that of adults. Presence of distinct clusters and an H3F3A mutation-specific methylome indicate existence of epigenetic subgroups within pediatric GBM. Absence of IDH1/G-CIMP status further indicates that findings in adult GBM cannot be simply extrapolated to pediatric GBM and that there is a strong need for identification of separate prognostic markers. A possible role of ROS in pediatric GBM pathogenesis is demonstrated for the first time and needs further evaluation. © The Author(s) 2014.
Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Identification of patients at high risk for Clostridium difficile infection: development and validation of a risk prediction model in hospitalized patients treated with antibiotics.

    PubMed

    van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J

    2015-08-01

    To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients with increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
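    Evaluating a simplified risk score at a cutoff, as with this record's 7-point threshold, reduces to a 2x2 table. The scores and labels below are hypothetical; only the cutoff of 7 points comes from the abstract:

```python
def score_performance(scores, labels, cutoff):
    """Sensitivity, specificity, and positive predictive value of a
    point-based risk score at a given cutoff (label 1 = CDI case)."""
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, labels))
    tn = sum(s < cutoff and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Hypothetical point totals and case status for ten patients
scores = [9, 8, 3, 6, 10, 2, 7, 4, 5, 12]
labels = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
sens, spec, ppv = score_performance(scores, labels, cutoff=7)
```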

  14. Validity of the Microcomputer Evaluation Screening and Assessment Aptitude Scores.

    ERIC Educational Resources Information Center

    Janikowski, Timothy P.; And Others

    1991-01-01

    Examined validity of Microcomputer Evaluation Screening and Assessment (MESA) aptitude scores relative to General Aptitude Test Battery (GATB) using multitrait-multimethod correlational analyses. Findings from 54 rehabilitation clients and 29 displaced workers revealed no evidence to support the construct validity of the MESA. (Author/NB)

  15. Assessing the Validity of Discourse Analysis: Transdisciplinary Convergence

    ERIC Educational Resources Information Center

    Jaipal-Jamani, Kamini

    2014-01-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to…

  16. Use of standardized patients to assess quality of tuberculosis care: a pilot, cross-sectional study

    PubMed Central

    Das, Jishnu; Kwan, Ada; Daniels, Ben; Satyanarayana, Srinath; Subbaraman, Ramnath; Bergkvist, Sofi; Das, Ranendra K.; Das, Veena; Pai, Madhukar

    2015-01-01

    Background: Existing studies on quality of tuberculosis care mostly reflect knowledge, not actual practice. Methods: We conducted a validation study on the use of standardized patients (SPs) for assessing quality of TB care. Four cases, two for presumed TB and one each for confirmed TB and suspected MDR-TB, were presented by 17 SPs, with 250 SP interactions among 100 consenting providers in Delhi, including qualified (29%), alternative medicine (40%), and informal providers (31%). Validation criteria were: (1) negligible risk and ability to avoid adverse events for providers and SPs; (2) low detection rates of SPs by providers; and (3) data accuracy across SPs and audio verification of SP recall. We used medical vignettes to assess provider knowledge for presumed TB. Correct case management was benchmarked using Standards for TB Care in India (STCI). Findings: SPs were deployed with low detection rates (4.7% of 232 interactions), high correlation of recall with audio recordings (r=0.63; 95% CI: 0.53-0.79), and no safety concerns. Average consultation length was 6 minutes, with 6.2 questions/exams completed, representing 35% (95% confidence interval [CI]: 33%-38%) of essential checklist items. Across all cases, only 52 of 250 (21%; 95% CI: 16%-26%) were correctly managed. Correct management was more likely among MBBS doctors (adjusted OR=2.41, 95% CI: 1.17-4.93) than among all others. Provider knowledge in the vignettes was markedly more consistent with STCI than their practice. Interpretation: The SP methodology can be successfully implemented to assess TB care. Our data suggest a large gap between provider knowledge and practice. PMID:26268690

  17. Utilizing population controls in rare-variant case-parent association tests.

    PubMed

    Jiang, Yu; Satten, Glen A; Han, Yujun; Epstein, Michael P; Heinzen, Erin L; Goldstein, David B; Allen, Andrew S

    2014-06-05

    There is great interest in detecting associations between human traits and rare genetic variation. To address the low power implicit in single-locus tests of rare genetic variants, many rare-variant association approaches attempt to accumulate information across a gene, often by taking linear combinations of single-locus contributions to a statistic. Using the right linear combination is key: an optimal test will up-weight true causal variants, down-weight neutral variants, and correctly assign the direction of effect for causal variants. Here, we propose a procedure that exploits data from population controls to estimate the linear combination to be used in a case-parent trio rare-variant association test. Specifically, we estimate the linear combination by comparing population control allele frequencies with allele frequencies in the parents of affected offspring. These estimates are then used to construct a rare-variant transmission disequilibrium test (rvTDT) in the case-parent data. Because the rvTDT is conditional on the parents' data, using parental data in estimating the linear combination does not affect the validity or asymptotic distribution of the rvTDT. By using simulation, we show that our new population-control-based rvTDT can dramatically improve power over rvTDTs that do not use population control information across a wide variety of genetic architectures. It also remains valid under population stratification. We apply the approach to a cohort of epileptic encephalopathy (EE) trios and find that dominant (or additive) inherited rare variants are unlikely to play a substantial role within EE genes previously identified through de novo mutation studies. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
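    The core idea in this record, estimating the linear combination from population controls, can be caricatured in a few lines. This is a deliberately simplified sketch with hypothetical frequencies and counts, not the published estimator: variants enriched in case parents relative to population controls get positive weight, depleted variants get negative weight, and the weights then combine per-variant transmission excesses.

```python
def estimate_weights(parent_freq, control_freq):
    """Toy weight estimate: allele-frequency excess in parents of
    affected offspring relative to population controls. (The published
    rvTDT uses a more formal estimator; this only conveys the idea.)"""
    return [p - c for p, c in zip(parent_freq, control_freq)]

def weighted_transmission_stat(transmitted, untransmitted, weights):
    """Weighted burden of transmission excess across a gene's variants."""
    return sum(w * (t - u)
               for w, t, u in zip(weights, transmitted, untransmitted))

# Hypothetical per-variant allele frequencies and transmission counts
w = estimate_weights(parent_freq=[0.010, 0.002, 0.004],
                     control_freq=[0.004, 0.002, 0.006])
stat = weighted_transmission_stat(transmitted=[8, 3, 2],
                                  untransmitted=[3, 3, 4],
                                  weights=w)
print(stat > 0)  # over-transmission of putative risk variants
```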

  18. Frequent PIK3CA Mutations in Colorectal and Endometrial Cancer with Double Somatic Mismatch Repair Mutations

    PubMed Central

    Cohen, Stacey A.; Turner, Emily H.; Beightol, Mallory B.; Jacobson, Angela; Gooley, Ted A.; Salipante, Stephen J.; Haraldsdottir, Sigurdis; Smith, Christina; Scroggins, Sheena; Tait, Jonathan F.; Grady, William M.; Lin, Edward H.; Cohn, David E.; Goodfellow, Paul J.; Arnold, Mark W.; de la Chapelle, Albert; Pearlman, Rachel; Hampel, Heather; Pritchard, Colin C.

    2016-01-01

    Background & Aims: Double somatic mutations in mismatch repair (MMR) genes have recently been described in colorectal and endometrial cancers with microsatellite instability (MSI) not attributable to MLH1 hypermethylation or germline mutation. We sought to define the molecular phenotype of this newly recognized tumor subtype. Methods: From two prospective Lynch syndrome screening studies, we identified patients with colorectal and endometrial tumors harboring ≥2 somatic MMR mutations but normal germline MMR testing (“double somatic”). We determined the frequencies of tumor PIK3CA, BRAF, KRAS, NRAS, and PTEN mutations by targeted next-generation sequencing and used logistic-regression models to compare them with Lynch syndrome, MLH1-hypermethylated, and microsatellite stable (MSS) tumors. We validated our findings using independent datasets from The Cancer Genome Atlas (TCGA). Results: Among colorectal cancer cases, we found that 14/21 (67%) of double somatic cases had PIK3CA mutations vs. 4/18 (22%) of Lynch syndrome, 2/10 (20%) of MLH1-hypermethylated, and 12/78 (15%) of MSS tumors; p<0.0001. PIK3CA mutations were detected in 100% of 13 double somatic endometrial cancers (p=0.04). BRAF mutations were absent in double somatic and Lynch syndrome colorectal tumors. We found highly similar results in a validation cohort from TCGA (113 colorectal, 178 endometrial cancers), with 100% of double somatic cases harboring a PIK3CA mutation (p<0.0001). Conclusions: PIK3CA mutations are present in double somatic mutated colorectal and endometrial cancers at substantially higher frequencies than in other MSI subgroups. PIK3CA mutation status may better define an emerging molecular entity in colorectal and endometrial cancers, with the potential to inform screening and therapeutic decision making. PMID:27302833

  19. Molecular pathology of brain edema after severe burns in forensic autopsy cases with special regard to the importance of reference gene selection.

    PubMed

    Wang, Qi; Ishikawa, Takaki; Michiue, Tomomi; Zhu, Bao-Li; Guan, Da-Wei; Maeda, Hitoshi

    2013-09-01

    Brain edema is believed to be linked to the high mortality incidence after severe burns. The present study investigated the molecular pathology of brain damage and responses involving brain edema in forensic autopsy cases of fire fatality (n = 55) compared with sudden cardiac death (n = 11), mechanical asphyxia (n = 13), and non-brain injury cases (n = 22). Postmortem mRNA and immunohistochemical expressions of aquaporins (AQPs), claudin5 (CLDN5), and matrix metalloproteinases (MMPs) were examined. Prolonged deaths due to severe burns showed an increase in brain water content, but relative mRNA quantification using different normalization methods showed inconsistent results in prolonged deaths due to severe burns: higher expression levels were detected for all markers when three previously validated reference genes (PES1, POLR2A, and IPO8) were used for normalization; higher for AQP1 and MMP9 when GAPDH alone was used; and higher for MMP9 but lower for MMP2 when B2M alone was used. Additionally, when B2M alone was used for normalization, higher expression of AQP4 was detected in acute fire deaths. Furthermore, the expression stability values of these five reference genes calculated by geNorm demonstrated that B2M was the least stable, followed by GAPDH. In immunostaining, only AQP1 and MMP9 showed differences among the causes of death: they were evident in most prolonged deaths due to severe burns. These findings suggest that systematic analysis of gene expression using real-time PCR might be a useful procedure in forensic death investigation and that validation of reference genes is crucial.
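    This record's point about reference-gene choice can be illustrated with the standard 2^-ΔCt calculation: the same target Ct yields a different relative expression depending on which reference genes it is normalized against. The Ct values below are hypothetical:

```python
def relative_expression(ct_target, ct_refs):
    """2**-dCt relative quantification: target Ct normalized against the
    mean Ct of one or more reference genes."""
    ref_mean = sum(ct_refs) / len(ct_refs)
    return 2 ** -(ct_target - ref_mean)

# Hypothetical Ct values for one target in one sample
ct_target = 26.0
print(relative_expression(ct_target, [24.0, 25.0, 23.0]))  # stable refs -> 0.25
print(relative_expression(ct_target, [28.0]))              # unstable ref -> 4.0
```

The direction of the result flips between the two normalizations, which is exactly the inconsistency the abstract describes.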

  20. Polymorphisms in CARS are associated with gastric cancer risk: a two-stage case-control study in the Chinese population.

    PubMed

    Tian, Tian; Xiao, Ling; Du, Jiangbo; Zhu, Xun; Gu, Yayun; Qin, Na; Yan, Caiwang; Liu, Li; Ma, Hongxia; Jiang, Yue; Chen, Jiaping; Yu, Hao; Dai, Juncheng

    2017-11-01

    The cysteinyl transfer RNA synthetase gene (CARS) is located on chromosome band 11p15.5, which is an important tumor-suppressor gene region. Mutations in CARS have been identified in many kinds of cancers; however, evidence for a relationship between genetic variants in CARS and gastric cancer at the population level is still lacking. Thus, we explored the association of variants in CARS with gastric cancer using a two-stage case-control strategy in the Chinese population. We undertook a two-stage case-control study to investigate the association between polymorphisms in CARS and risk of gastric cancer with use of an Illumina Infinium® BeadChip and an ABI 7900 system. Four single nucleotide polymorphisms (SNPs) were significantly associated with gastric cancer risk in both the discovery stage and the validation stage after adjustment for age and sex. In addition, the combined results of the two stages showed these SNPs were related to gastric cancer risk (false discovery rate-adjusted P ≤ 0.001 for rs384490, rs729662, rs2071101, and rs7394702). In silico analyses revealed that rs384490 and rs7394702 could affect transcription factor response elements or DNA methylation of CARS, and rs729662 was associated with the prognosis of gastric cancer. Additionally, expression quantitative trait loci analysis showed rs384490 and rs729662 might alter expression of CARS-related genes. The potentially functional SNPs in CARS might influence the biological functions of CARS or CARS-related genes and ultimately modify the occurrence and development of gastric cancer in the Chinese population. Further large-scale population-based studies or biological functional assays are warranted to validate our findings.
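    False-discovery-rate-adjusted P values like those reported in this record are typically obtained with the Benjamini-Hochberg step-up procedure. A self-contained sketch (the raw p-values below are hypothetical, not the study's):

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg FDR-adjusted p-values: sort ascending, scale
    each p by n/rank, then enforce monotonicity from the largest down."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for k, i in enumerate(reversed(order)):
        rank = n - k                      # 1-based rank of pvalues[i]
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical raw p-values for four SNPs
adj = benjamini_hochberg([0.0002, 0.0004, 0.0003, 0.04])
print(all(a <= 0.001 for a in adj[:3]))  # first three survive FDR <= 0.001
```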
