Sample records for computerized diagnostic algorithm

  1. Mutual Information Item Selection Method in Cognitive Diagnostic Computerized Adaptive Testing with Short Test Length

    ERIC Educational Resources Information Center

    Wang, Chun

    2013-01-01

    Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weaknesses of each examinee, whereas CAT algorithms choose items to determine those…

  2. A new computerized diagnostic algorithm for quantitative evaluation of binocular misalignment in patients with strabismus

    NASA Astrophysics Data System (ADS)

    Nam, Kyoung Won; Kim, In Young; Kang, Ho Chul; Yang, Hee Kyung; Yoon, Chang Ki; Hwang, Jeong Min; Kim, Young Jae; Kim, Tae Yun; Kim, Kwang Gi

    2012-10-01

    Accurate measurement of binocular misalignment between both eyes is important for proper preoperative management, surgical planning, and postoperative evaluation of patients with strabismus. In this study, we proposed a new computerized diagnostic algorithm that can calculate the angle of binocular eye misalignment photographically by using a dedicated three-dimensional eye model mimicking the structure of the natural human eye. To evaluate the performance of the proposed algorithm, eight healthy volunteers and eight individuals with strabismus were recruited. The horizontal deviation angle, vertical deviation angle, and angle of eye misalignment were calculated, and the angular differences between the healthy and strabismus groups were evaluated using the nonparametric Mann-Whitney test and the Pearson correlation test. The experimental results demonstrated a statistically significant difference between the healthy and strabismus groups (p = 0.015 < 0.05), but no statistically significant difference between the proposed method and the Krimsky test (p = 0.912 > 0.05). The measurements of the two methods were highly correlated (r = 0.969, p < 0.05). From the experimental results, we believe that the proposed method has the potential to be a diagnostic tool that measures the physical disorder of the human eye to non-invasively diagnose the severity of strabismus.
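
The statistical comparison this abstract describes (a Mann-Whitney test for the group difference, Pearson correlation for agreement between methods) can be sketched in plain Python. The angle values below are purely illustrative, not the study's data.

```python
import math

def ranks(values):
    """Average ranks (1-based), handling ties by midranking."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney(x, y):
    """Mann-Whitney U with a normal approximation for the two-sided
    p-value (no tie correction; adequate for an illustration)."""
    n1, n2 = len(x), len(y)
    r = ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return u, 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = math.sqrt(sum((p - ma) ** 2 for p in a))
    vb = math.sqrt(sum((q - mb) ** 2 for q in b))
    return cov / (va * vb)

# Hypothetical misalignment angles in degrees (not the study's data).
healthy = [0.5, 1.0, 0.8, 1.2, 0.6, 0.9, 1.1, 0.7]
strabismus = [12.4, 18.0, 9.8, 22.1, 15.3, 11.0, 19.6, 14.2]
u_stat, p_group = mann_whitney(healthy, strabismus)

# Agreement between two measurement methods on the same patients.
krimsky = [12.0, 17.5, 10.2, 21.8, 15.9, 11.4, 19.1, 14.6]
r = pearson_r(strabismus, krimsky)
```

With fully separated groups the U statistic is 0 and the approximate p-value falls well below 0.05, mirroring the kind of group difference and method agreement the abstract reports.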

  3. Developing a modular architecture for creation of rule-based clinical diagnostic criteria.

    PubMed

    Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian

    2016-01-01

    With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnosis criteria has become one of the goals in the International Classification of Diseases (ICD)-11 revision. However, few studies have been done in building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. In the results, the first draft of DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates were successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3%) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.

  4. Intelligent Diagnostic Assistant for Complicated Skin Diseases through C5's Algorithm.

    PubMed

    Jeddi, Fatemeh Rangraz; Arabfard, Masoud; Kermany, Zahra Arab

    2017-09-01

    Intelligent diagnostic assistants can be used for the complicated diagnosis of skin diseases, which are among the most common causes of disability. The aim of this study was to design and implement a computerized intelligent diagnostic assistant for complicated skin diseases using the C5 algorithm. An applied-developmental study was done in 2015. The knowledge base was developed from interviews with dermatologists through questionnaires and checklists. Knowledge representation was obtained from the training data in the database using Microsoft Excel. Clementine software and the C5 algorithm were applied to build the decision tree. Analysis of test accuracy was performed based on rules extracted using inference chains. The rules extracted from the decision tree were defined using the forward-chaining inference technique and entered as RULEs into the CLIPS programming environment, in which the intelligent diagnostic assistant was then implemented. The accuracy and error rates obtained from the decision tree in the training phase were 99.56% and 0.44%, respectively; in the test phase, the accuracy was 98% and the error 2%. The intelligent diagnostic assistant can be used as a reliable system with high accuracy, sensitivity, specificity, and agreement.
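
The forward-chaining step the abstract describes (firing CLIPS-style IF-THEN rules extracted from the decision tree) can be sketched as follows. The rules and symptom names are hypothetical placeholders, not the study's knowledge base.

```python
# Each rule: (set of antecedent facts, conclusion to assert).
# Illustrative dermatology rules only; the real knowledge base came
# from dermatologist interviews and the C5 decision tree.
RULES = [
    ({"scaly_plaques", "silvery_scale"}, "psoriasis"),
    ({"itching", "flexural_rash"}, "atopic_dermatitis"),
    ({"ring_lesion", "central_clearing"}, "tinea_corporis"),
]

def forward_chain(facts, rules):
    """Fire every rule whose antecedents are all present in the fact
    base, asserting its conclusion; repeat until nothing new is derived
    (the same fixed-point behavior a CLIPS agenda produces)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

For example, asserting the two psoriasis findings derives the psoriasis diagnosis while leaving unrelated conclusions untouched.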

  5. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
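
The reconstruction pipeline described above (Fourier transform the projections, apply a frequency-domain filter, back-project the inverse transform) can be sketched with a plain ramp filter. The paper's own filter additionally traded resolution against noise suppression, so this is only a minimal stand-in, without the attenuation correction.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back-projection: ramp-filter each projection
    row in the Fourier domain, then smear (back-project) the filtered
    rows across the image grid. Illustrative only."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))               # ideal ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))
    recon = np.zeros((n, n))
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs)
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel at this view angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + n // 2
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles_deg)
```

Back-projecting the sinogram of a centered point source should reproduce a peak at the image center, which makes a convenient sanity check.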

  6. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    PubMed Central

    Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard

    2008-01-01

    Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205

  7. Review of Current Data Exchange Practices: Providing Descriptive Data to Assist with Building Operations Decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livingood, W.; Stein, J.; Considine, T.

    Retailers who participate in the U.S. Department of Energy Commercial Building Energy Alliances (CBEA) identified the need to enhance communication standards. The means are available to collect massive amounts of building operational data, but CBEA members have difficulty transforming those data into usable information and energy-saving actions. Implementing algorithms for automated fault detection and diagnostics and linking building operational data to computerized maintenance management systems are important steps in the right direction, but they have limited scalability for large building portfolios because the algorithms must be configured for each building.

  8. Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography

    NASA Astrophysics Data System (ADS)

    Hahn, Bernadette N.

    2017-12-01

    A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impede reliable diagnostics. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.

  9. Computerized Diagnostic Testing: Problems and Possibilities.

    ERIC Educational Resources Information Center

    McArthur, David L.

    The use of computers to build diagnostic inferences is explored in two contexts. In computerized monitoring of liquid oxygen systems for the space shuttle, diagnoses are exact because they can be derived within a world which is closed. In computerized classroom testing of reading comprehension, programs deliver a constrained form of adaptive…

  10. Computerized analysis of the 12-lead electrocardiogram to identify epicardial ventricular tachycardia exit sites.

    PubMed

    Yokokawa, Miki; Jung, Dae Yon; Joseph, Kim K; Hero, Alfred O; Morady, Fred; Bogun, Frank

    2014-11-01

    Twelve-lead electrocardiogram (ECG) criteria for epicardial ventricular tachycardia (VT) origins have been described. In patients with structural heart disease, the ability to predict an epicardial origin based on QRS morphology is limited and has been investigated only for limited regions in the heart. The purpose of this study was to determine whether a computerized algorithm is able to accurately differentiate epicardial vs endocardial origins of ventricular arrhythmias. Endocardial and epicardial pace-mapping were performed in 43 patients at 3277 sites. The 12-lead ECGs were digitized and analyzed using a mixture of Gaussians model (MoG) to assess whether the algorithm was able to identify an epicardial vs endocardial origin of the paced rhythm. The MoG computerized algorithm was compared to algorithms published in prior reports. The computerized algorithm correctly differentiated epicardial vs endocardial pacing sites for 80% of the sites, compared to accuracies of 42% to 66% for other described criteria. The accuracy was higher in patients without structural heart disease than in those with structural heart disease (94% vs 80%, P = .0004) and for right bundle branch block morphologies (82%) compared to left bundle branch block morphologies (79%, P = .001). Validation studies showed the accuracy for VT exit sites to be 84%. A computerized algorithm was able to accurately differentiate the majority of epicardial vs endocardial pace-mapping sites. The algorithm is not region specific and performed best in patients without structural heart disease and with VTs having a right bundle branch block morphology. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
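
The paper's classifier fits a mixture of Gaussians to digitized 12-lead features. As a much-simplified stand-in, the sketch below fits a single Gaussian per class to one hypothetical scalar feature and classifies by the higher log-likelihood; the feature name and all numbers are invented for illustration.

```python
import math

def fit_gaussian(xs):
    """Maximum-likelihood mean and variance of a 1-D sample."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def log_likelihood(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(x, class_params):
    """Assign x to the class under which it is most likely."""
    return max(class_params,
               key=lambda label: log_likelihood(x, *class_params[label]))

# Hypothetical feature: a QRS "pseudo-delta" duration (ms), assumed
# longer for epicardial exits. Values are illustrative only.
params = {
    "epicardial": fit_gaussian([48, 52, 55, 60, 47, 58]),
    "endocardial": fit_gaussian([20, 24, 28, 22, 26, 30]),
}
```

A full MoG would use several Gaussian components per class over many ECG features, but the decision rule (pick the class with the higher likelihood) is the same.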

  11. Dual-Objective Item Selection Criteria in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Kang, Hyeon-Ah; Zhang, Susu; Chang, Hua-Hua

    2017-01-01

    The development of cognitive diagnostic-computerized adaptive testing (CD-CAT) has provided a new perspective for gaining information about examinees' mastery on a set of cognitive attributes. This study proposes a new item selection method within the framework of dual-objective CD-CAT that simultaneously addresses examinees' attribute mastery…

  12. The Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Xin, Tao

    2013-01-01

    The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…

  13. Diagnostic Yield of Transbronchial Biopsy in Comparison to High Resolution Computerized Tomography in Sarcoidosis Cases

    PubMed

    Akten, H Serpil; Kilic, Hatice; Celik, Bulent; Erbas, Gonca; Isikdogan, Zeynep; Turktas, Haluk; Kokturk, Nurdan

    2018-04-25

    This study aimed to evaluate the diagnostic yield of fiberoptic bronchoscopic (FOB) transbronchial biopsy and its relation with quantitative findings of high resolution computerized tomography (HRCT). A total of 83 patients (19 males and 64 females; mean age 45.1 years) diagnosed with sarcoidosis, with complete high resolution computerized tomography records, were retrospectively recruited during the period from Feb 2005 to Jan 2015. The HRCT scans were retrospectively assessed in random order by an experienced observer without knowledge of the bronchoscopic results or lung function tests. According to the radiological staging with HRCT, 2.4% of the patients (n=2) were stage 0, 19.3% (n=16) were stage 1, 72.3% (n=60) were stage 2 and 6.0% (n=5) were stage 3. This study showed that transbronchial lung biopsy yielded positive results in 39.7% of the stage I or II sarcoidosis patients who were diagnosed by bronchoscopy. Different HRCT patterns and different scores of involvement did make a difference in the diagnostic accuracy of transbronchial biopsy (p=0.007).

  14. Computerized Classification Testing with the Rasch Model

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
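
Wald's SPRT, on which these classification tests build, reduces to accumulating a log-likelihood ratio item by item and stopping as soon as it crosses a decision boundary. The cut scores and error rates below are illustrative, not values from the cited work.

```python
import math

def sprt_classify(responses, p_pass=0.7, p_fail=0.5,
                  alpha=0.05, beta=0.05):
    """Sequential Probability Ratio Test for a pass/fail decision.
    p_pass / p_fail are the probabilities of a correct answer under
    the 'master' and 'non-master' hypotheses; alpha and beta are the
    tolerated error rates. Returns the decision and the number of
    items administered before stopping."""
    upper = math.log((1 - beta) / alpha)    # accept 'pass' boundary
    lower = math.log(beta / (1 - alpha))    # accept 'fail' boundary
    llr = 0.0
    for i, correct in enumerate(responses, start=1):
        if correct:
            llr += math.log(p_pass / p_fail)
        else:
            llr += math.log((1 - p_pass) / (1 - p_fail))
        if llr >= upper:
            return "pass", i
        if llr <= lower:
            return "fail", i
    return "undecided", len(responses)
```

The appeal for CCTs is visible in the stopping behavior: a consistently correct (or incorrect) examinee is classified after only a handful of items instead of a full fixed-length test.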

  15. A computerized compensator design algorithm with launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1976-01-01

    This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.

  16. Multiparametric ultrasound in the detection of prostate cancer: a systematic review.

    PubMed

    Postema, Arnoud; Mischi, Massimo; de la Rosette, Jean; Wijkstra, Hessel

    2015-11-01

    To investigate the advances and clinical results of the different ultrasound modalities and the progress in combining them into multiparametric ultrasound (mpUS). A systematic literature search on mpUS covered the different ultrasound modalities: greyscale ultrasound, computerized transrectal ultrasound, Doppler and power Doppler techniques, dynamic contrast-enhanced ultrasound, and (shear wave) elastography. The limited research available on combining ultrasound modalities has shown improved diagnostic performance. The data of two studies suggest that even adding a lower-performing ultrasound modality to a better-performing one using crude methods can already improve sensitivity by 13-51%. The different modalities detect different tumours. No study has tried to combine ultrasound modalities employing a system similar to the PIRADS system used for mpMRI, or more advanced classifying algorithms. Available evidence confirms that combining different ultrasound modalities significantly improves diagnostic performance.

  17. Validation of a computerized algorithm to quantify fetal heart rate deceleration area.

    PubMed

    Gyllencreutz, Erika; Lu, Ke; Lindecrantz, Kaj; Lindqvist, Pelle G; Nordstrom, Lennart; Holzmann, Malin; Abtahi, Farhad

    2018-05-16

    Reliability in visual cardiotocography interpretation is unsatisfactory, which has led to the development of computerized cardiotocography. Computerized analysis is well established for antenatal fetal surveillance, but has not yet performed sufficiently well during labor. We aimed to investigate the capacity of a new computerized algorithm compared to visual assessment in identifying intrapartum fetal heart rate baseline and decelerations. Three hundred and twelve intrapartum cardiotocography tracings with variable decelerations were analysed by the computerized algorithm and visually examined by two observers, blinded to each other and the computer analysis. The width, depth and area of each deceleration were measured. Four cases (>100 variable decelerations) were subject to in-depth detailed analysis. The outcome measures were bias in seconds (width), beats per minute (depth), and beats (area) between computer and observers by using Bland-Altman analysis. Interobserver reliability was determined by calculating intraclass correlation and Spearman rank analysis. The analysis (312 cases) showed excellent intraclass correlation (0.89-0.95) and very strong Spearman correlation (0.82-0.91). The detailed analysis of >100 decelerations in 4 cases revealed low bias between the computer and the two observers; width 1.4 and 1.4 seconds, depth 5.1 and 0.7 beats per minute, and area 0.1 and -1.7 beats. This was comparable to the bias between the two observers; 0.3 seconds (width), 4.4 beats per minute (depth), and 1.7 beats (area). The intraclass correlation was excellent (0.90-0.98). A novel computerized algorithm for intrapartum cardiotocography analysis is as accurate as gold standard visual assessment, with high correlation and low bias. This article is protected by copyright. All rights reserved.
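
The Bland-Altman quantities reported here (the bias between computer and observer, plus limits of agreement) reduce to a few lines of arithmetic. The deceleration widths below are invented for illustration, not data from the study.

```python
def bland_altman(a, b):
    """Bland-Altman comparison of two measurement methods:
    returns the mean difference (bias) and the 95% limits of
    agreement (bias ± 1.96 × SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical deceleration widths in seconds, computer vs observer.
computer = [30.0, 42.0, 55.0, 61.0, 28.0, 47.0]
observer = [31.0, 40.0, 56.0, 60.0, 29.0, 46.0]
bias, (lo, hi) = bland_altman(computer, observer)
```

A bias near zero with narrow limits of agreement is what the abstract means by the algorithm being "as accurate as" the visual gold standard.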

  18. Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items

    ERIC Educational Resources Information Center

    Aybek, Eren Can; Demirtasli, R. Nukhet

    2017-01-01

    This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. Besides that, it aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…

  19. The role of computerized diagnostic proposals in the interpretation of the 12-lead electrocardiogram by cardiology and non-cardiology fellows.

    PubMed

    Novotny, Tomas; Bond, Raymond; Andrsova, Irena; Koc, Lumir; Sisakova, Martina; Finlay, Dewar; Guldenring, Daniel; Spinar, Jindrich; Malik, Marek

    2017-05-01

    Most contemporary 12-lead electrocardiogram (ECG) devices offer computerized diagnostic proposals. The reliability of these automated diagnoses is limited. It has been suggested that incorrect computer advice can influence physician decision-making. This study analyzed the role of diagnostic proposals in the decision process by a group of fellows of cardiology and other internal medicine subspecialties. A set of 100 clinical 12-lead ECG tracings was selected covering both normal cases and common abnormalities. A team of 15 junior Cardiology Fellows and 15 Non-Cardiology Fellows interpreted the ECGs in 3 phases: without any diagnostic proposal, with a single diagnostic proposal (half of them intentionally incorrect), and with four diagnostic proposals (only one of them being correct) for each ECG. Self-rated confidence of each interpretation was collected. Availability of diagnostic proposals significantly increased the diagnostic accuracy (p<0.001). Nevertheless, in the case of a single proposal (either correct or incorrect), accuracy increased with correct diagnostic proposals but was substantially reduced with incorrect proposals. Confidence levels correlated poorly with interpretation scores (rho≈0.2, p<0.001). Logistic regression showed that an interpreter is most likely to be correct when the ECG offers a correct diagnostic proposal (OR=10.87) or multiple proposals (OR=4.43). Diagnostic proposals affect the diagnostic accuracy of ECG interpretations. The accuracy is significantly influenced especially when a single diagnostic proposal (either correct or incorrect) is provided. The study suggests that the presentation of multiple computerized diagnoses is likely to improve the diagnostic accuracy of interpreters. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Accurate ECG diagnosis of atrial tachyarrhythmias using quantitative analysis: a prospective diagnostic and cost-effectiveness study.

    PubMed

    Krummen, David E; Patel, Mitul; Nguyen, Hong; Ho, Gordon; Kazi, Dhruv S; Clopton, Paul; Holland, Marian C; Greenberg, Scott L; Feld, Gregory K; Faddis, Mitchell N; Narayan, Sanjiv M

    2010-11-01

    Optimal atrial tachyarrhythmia management is facilitated by accurate electrocardiogram interpretation, yet typical atrial flutter (AFl) may present without sawtooth F-waves or RR regularity, and atrial fibrillation (AF) may be difficult to separate from atypical AFl or rapid focal atrial tachycardia (AT). We analyzed whether improved diagnostic accuracy using a validated analysis tool significantly impacts costs and patient care. We performed a prospective, blinded, multicenter study using a novel quantitative computerized algorithm to identify atrial tachyarrhythmia mechanism from the surface ECG in patients referred for electrophysiology study (EPS). In 122 consecutive patients (age 60 ± 12 years) referred for EPS, 91 sustained atrial tachyarrhythmias were studied. ECGs were also interpreted by 9 physicians from 3 specialties for comparison and to allow healthcare system modeling. Diagnostic accuracy was compared to the diagnosis at EPS. A Markov model was used to estimate the impact of improved arrhythmia diagnosis. We found 13% of typical AFl ECGs had neither sawtooth flutter waves nor RR regularity, and were misdiagnosed by the majority of clinicians (0/6 correctly diagnosed by consensus visual interpretation) but diagnosed correctly by quantitative analysis in 83% (5/6, P = 0.03). AF diagnosis was also improved through use of the algorithm (92%) versus visual interpretation (primary care: 76%, P < 0.01). Economically, we found that these improvements in diagnostic accuracy resulted in an average cost-savings of $1,303 and 0.007 quality-adjusted life-years per patient. Typical AFl and AF are frequently misdiagnosed using visual criteria. Quantitative analysis improves diagnostic accuracy and results in improved healthcare costs and patient outcomes. © 2010 Wiley Periodicals, Inc.

  1. Engineering integrated digital circuits with allosteric ribozymes for scaling up molecular computation and diagnostics.

    PubMed

    Penchovsky, Robert

    2012-10-19

    Here we describe molecular implementations of integrated digital circuits, including a three-input AND logic gate, a two-input multiplexer, and a 1-to-2 decoder using allosteric ribozymes. Furthermore, we demonstrate a multiplexer-decoder circuit. The ribozymes are designed to seek and destroy specific RNAs of a certain length by a fully computerized procedure. The algorithm can accurately predict one base substitution that alters the ribozyme's logic function. The ability to sense the length of RNA molecules enables single ribozymes to be used as platforms for multiple interactions. These ribozymes can work as integrated circuits with the functionality of up to five logic gates. The ribozyme design is universal since the allosteric and substrate domains can be altered to sense different RNAs. In addition, the ribozymes can specifically cleave RNA molecules with triplet-repeat expansions observed in genetic disorders such as oculopharyngeal muscular dystrophy. Therefore, the designer ribozymes can be employed for scaling up computing and diagnostic networks in the fields of molecular computing and diagnostics and RNA synthetic biology.

  2. Self-reported transient ischemic attack and stroke symptoms: methods and baseline prevalence. The ARIC Study, 1987-1989.

    PubMed

    Toole, J F; Lefkowitz, D S; Chambless, L E; Wijnberg, L; Paton, C C; Heiss, G

    1996-11-01

    As part of the Atherosclerosis Risk in Communities (ARIC) Study assessment of the etiology and sequelae of atherosclerosis, a standardized questionnaire on transient ischemic attack (TIA) and nonfatal stroke and a computerized diagnostic algorithm simulating clinical reasoning were developed and tested at the four ARIC field centers: Forsyth County, North Carolina; Minneapolis, Minnesota; Jackson, Mississippi; and Washington County, Maryland. The diagnostic algorithm used participant responses to a series of questions about six neurologic trigger symptoms to identify symptoms of TIA or stroke and their vascular distribution. Among 12,205 ARIC participants reporting their lifetime occurrence of one or more symptoms probably due to cerebrovascular causes, nearly half (47%) reported the sudden onset of at least one symptom sometime prior to their ARIC examination. Of those with at least one symptom, only 12.9% were classified by the computer algorithm as having symptoms of TIA or stroke. Dizziness/loss of balance was the most frequently reported symptom (36%); 1.2% of these persons were classified by the algorithm as having a TIA/stroke event. Positive symptoms of speech dysfunction were classified most often (77%) as being symptoms of TIA or stroke. Symptoms suggesting TIA were reported more frequently than symptoms suggesting stroke by both sexes. TIA or stroke-like phenomena were more frequent (p < 0.001) in females (7%) than in males (5%) and increased with age in both sexes (p = 0.13 for females; p = 0.02 for males). In Forsyth County, TIA and stroke symptoms were greater in African Americans than in Caucasians (p = 0.05, controlling for sex). The association of algorithmically defined symptoms of TIA or stroke with traditional cerebrovascular risk factors is the subject of a companion paper.

  3. Making a structured psychiatric diagnostic interview faithful to the nomenclature.

    PubMed

    Robins, Lee N; Cottler, Linda B

    2004-10-15

    Psychiatric diagnostic interviews to be used in epidemiologic studies by lay interviewers have, since the 1970s, attempted to operationalize existing psychiatric nomenclatures. How to maximize the chances that they do so successfully has not previously been spelled out. In this article, the authors discuss strategies for each of the seven steps involved in writing, updating, or modifying a diagnostic interview and its supporting materials: 1) writing questions that match the nomenclature's criteria, 2) checking that respondents will be willing and able to answer the questions, 3) choosing a format acceptable to interviewers that maximizes accurate answering and recording of answers, 4) constructing a data entry and cleaning program that highlights errors to be corrected, 5) creating a diagnostic scoring program that matches the nomenclature's algorithms, 6) developing an interviewer training program that maximizes reliability, and 7) computerizing the interview. For each step, the authors discuss how to identify errors, correct them, and validate the revisions. Although operationalization will never be perfect because of ambiguities in the nomenclature, specifying methods for minimizing divergence from the nomenclature is timely as users modify existing interviews and look forward to updating interviews based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, and the International Classification of Diseases, Eleventh Revision.

  4. Clinical applications of computerized thermography

    NASA Technical Reports Server (NTRS)

    Anbar, Michael

    1988-01-01

    Computerized, or digital, thermography is a rapidly growing diagnostic imaging modality. It has superseded contact thermography and analog imaging thermography, which do not allow effective quantization. Medical applications of digital thermography can be classified in two groups: static and dynamic imaging. They can also be classified into macro thermography (resolution greater than 1 mm) and micro thermography (resolution less than 100 microns). Both modalities allow a thermal resolution of 0.1 °C. The diagnostic power of images produced by any of these modalities can be augmented by the use of digital image enhancement and image recognition procedures. Computerized thermography has been applied in neurology, cardiovascular and plastic surgery, rehabilitation and sports medicine, psychiatry, dermatology and ophthalmology. Examples of these applications are shown and their scope and limitations are discussed.

  5. Computerized Lung Sound Analysis as diagnostic aid for the detection of abnormal lung sounds: a systematic review and meta-analysis

    PubMed Central

    Gurung, Arati; Scrafford, Carolyn G; Tielsch, James M; Levine, Orin S; Checkley, William

    2011-01-01

    Rationale: The standardized use of a stethoscope for chest auscultation in clinical research is limited by its inherent inter-listener variability. Electronic auscultation and automated classification of recorded lung sounds may help prevent some of these shortcomings. Objective: We sought to perform a systematic review and meta-analysis of studies implementing computerized lung sound analysis (CLSA) to aid in the detection of abnormal lung sounds for specific respiratory disorders. Methods: We searched for articles on CLSA in MEDLINE, EMBASE, the Cochrane Library, and ISI Web of Knowledge through July 31, 2010. Following qualitative review, we conducted a meta-analysis to estimate the sensitivity and specificity of CLSA for the detection of abnormal lung sounds. Measurements and Main Results: Of 208 articles identified, we selected eight studies for review. Most studies employed either electret microphones or piezoelectric sensors for auscultation, and Fourier Transform and Neural Network algorithms for analysis and automated classification of lung sounds. Overall sensitivity for the detection of wheezes or crackles using CLSA was 80% (95% CI 72–86%) and specificity was 85% (95% CI 78–91%). Conclusions: While quality data on CLSA are relatively limited, analysis of existing information suggests that CLSA can provide a relatively high specificity for detecting abnormal lung sounds such as crackles and wheezes. Further research and product development could promote the value of CLSA in research studies and its diagnostic utility in clinical settings. PMID:21676606

  6. Computerized lung sound analysis as diagnostic aid for the detection of abnormal lung sounds: a systematic review and meta-analysis.

    PubMed

    Gurung, Arati; Scrafford, Carolyn G; Tielsch, James M; Levine, Orin S; Checkley, William

    2011-09-01

    The standardized use of a stethoscope for chest auscultation in clinical research is limited by its inherent inter-listener variability. Electronic auscultation and automated classification of recorded lung sounds may help prevent some of these shortcomings. We sought to perform a systematic review and meta-analysis of studies implementing computerized lung sound analysis (CLSA) to aid in the detection of abnormal lung sounds for specific respiratory disorders. We searched for articles on CLSA in MEDLINE, EMBASE, Cochrane Library and ISI Web of Knowledge through July 31, 2010. Following qualitative review, we conducted a meta-analysis to estimate the sensitivity and specificity of CLSA for the detection of abnormal lung sounds. Of 208 articles identified, we selected eight studies for review. Most studies employed either electret microphones or piezoelectric sensors for auscultation, and Fourier Transform and Neural Network algorithms for analysis and automated classification of lung sounds. Overall sensitivity for the detection of wheezes or crackles using CLSA was 80% (95% CI 72-86%) and specificity was 85% (95% CI 78-91%). While quality data on CLSA are relatively limited, analysis of existing information suggests that CLSA can provide a relatively high specificity for detecting abnormal lung sounds such as crackles and wheezes. Further research and product development could promote the value of CLSA in research studies or its diagnostic utility in clinical settings. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. COMP (Computerized Operational Materials Prescription).

    ERIC Educational Resources Information Center

    Rosenkranz, Catherine I.

    Described is Project COMP (Computerized Operational Materials Prescription), an individualized reading instructional program for educable mentally retarded (EMR) children in regular or special classes. The program is designed to correlate with the Wisconsin Design for Reading (WDR) and to utilize a diagnostic teaching specialist who uses specific…

  8. [The clinical economic analysis of the methods of ischemic heart disease diagnostics].

    PubMed

    Kalashnikov, V Iu; Mitriagina, S N; Syrkin, A L; Poltavskaia, M G; Sorokina, E G

    2007-01-01

    Clinical economic analysis was applied to compare techniques for diagnosing ischemic heart disease, namely electrocardiographic monitoring, treadmill testing, stress echocardiography with dobutamine, single-photon emission computed tomography with exercise load, and multislice spiral computed tomography with coronary artery contrast enhancement, in patients with different pretest probabilities of the disease. In all groups, the treadmill test had the best cost-effectiveness ratio: refining the probability of ischemic heart disease by 1% cost 17.4 rubles in the low-risk group, and 9.4 and 24.7 rubles in the medium- and high-risk groups, respectively. It is concluded that, after the treadmill test, single-photon emission computed tomography with exercise load is appropriate for further refining the probability in patients with high pretest probability, and multislice spiral computed tomography with coronary artery contrast enhancement in patients with low pretest probability.

  9. Computerized tomography platform using beta rays

    NASA Astrophysics Data System (ADS)

    Paetkau, Owen; Parsons, Zachary; Paetkau, Mark

    2017-12-01

    A computerized tomography (CT) system using a 0.1 μCi Sr-90 beta source, Geiger counter, and low density foam samples was developed. A simple algorithm was used to construct images from the data collected with the beta CT scanner. The beta CT system is analogous to X-ray CT as both types of radiation are sensitive to density variations. This system offers a platform for learning opportunities in an undergraduate laboratory, covering topics such as image reconstruction algorithms, radiation exposure, and the energy dependence of absorption.
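    The "simple algorithm" used to form images can be illustrated with unfiltered back-projection on a toy phantom. This sketch is illustrative only: the 8×8 phantom and the restriction to four exact 90° projection angles are choices made here for brevity, not details of the beta CT platform:

```python
import numpy as np

def sinogram(img):
    # Parallel-beam projections at 0/90/180/270 degrees via exact rotations.
    return {k: np.rot90(img, k).sum(axis=0) for k in range(4)}

def backproject(sino, shape):
    recon = np.zeros(shape)
    for k, proj in sino.items():
        smear = np.tile(proj, (shape[0], 1))   # smear each projection back
        recon += np.rot90(smear, -k)           # undo the rotation, accumulate
    return recon / len(sino)

phantom = np.zeros((8, 8))
phantom[3:5, 3:5] = 1.0                        # dense block in low-density foam
recon = backproject(sinogram(phantom), phantom.shape)
# The reconstruction is brightest over the dense block, with streak artifacts,
# since this sketch omits the filtering step of filtered back-projection.
```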

  10. [Modernized study on eye's signs of blood-stasis syndrome].

    PubMed

    Wu, Rui; Xie, Jian-xiang; Zhao, Feng-da

    2011-03-01

    To develop a computerized formula for diagnosing ocular signs of blood-stasis syndrome (BSS) and to improve on previous naked-eye diagnostic methods. The formula was created by quantitatively detecting and analyzing changes in ocular signs in 544 patients (261 non-BSS and 283 BSS), adopting the computer's color-scale principle. The sensitivity, specificity, and accuracy of the formula were then verified in 382 patients (97 non-BSS and 285 BSS). The computerized integral was compared with the naked-eye integral, and the normal reference value was calculated with percentiles. The various observed indices of ocular signs were positively correlated with BSS. The computerized method had a specificity of 83.5%, a diagnostic sensitivity of 89.8%, an accuracy of 88.2%, and a correct index of 0.733. Comparisons between the computerized integral method and the naked-eye integral method showed significant differences in non-BSS patients and in BSS patients of various degrees (mild, moderate, and severe) (P < 0.01). The reference value of the naked-eye method was below 15. The computerized formula for ocular signs has higher specificity and sensitivity in the diagnosis of BSS, while the naked-eye integral method remains useful.
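    As a quick arithmetic check (not part of the study), the reported "correct index" of 0.733 matches Youden's J statistic (sensitivity + specificity - 1), and the 88.2% accuracy matches the class sizes of the verification sample:

```python
# Figures reported in the abstract (verification sample: 285 BSS, 97 non-BSS).
sens, spec = 0.898, 0.835
n_bss, n_non = 285, 97

youden = sens + spec - 1                                    # the "correct index"
accuracy = (sens * n_bss + spec * n_non) / (n_bss + n_non)

print(round(youden, 3), round(accuracy, 3))                 # 0.733 0.882
```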

  11. Termination Criteria for Computerized Classification Testing

    ERIC Educational Resources Information Center

    Thompson, Nathan A.

    2011-01-01

    Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
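    One widely used CCT termination criterion in this literature is Wald's sequential probability ratio test (SPRT), which stops testing as soon as the evidence clearly favors "pass" or "fail". The sketch below is a generic illustration; the success probabilities, error rates, and cut labels are invented, not taken from the article:

```python
import math

def sprt(responses, p_pass=0.7, p_fail=0.5, alpha=0.05, beta=0.05):
    """Classify an examinee as pass/fail with Wald's SPRT, or keep testing."""
    upper = math.log((1 - beta) / alpha)    # decide "pass" above this bound
    lower = math.log(beta / (1 - alpha))    # decide "fail" below this bound
    llr = 0.0
    for correct in responses:
        p1 = p_pass if correct else 1 - p_pass
        p0 = p_fail if correct else 1 - p_fail
        llr += math.log(p1 / p0)            # accumulate the log-likelihood ratio
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"                       # undecided: administer another item
```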

  12. Computer-Based and Paper-Based Measurement of Semantic Knowledge

    DTIC Science & Technology

    1989-01-01

    of Personality Assessment, 34, 353-361. McArthur, D. L., & Choppin, B. H. (1984). Computerized diagnostic testing. Journal of Educational...Computers in Human Behavior, 1, 49-58. Lushene, R. E., O'Neil, H. F., & Dunn, T. (1974). Equivalent validity of a completely computerized MMPI. Journal

  13. Initial clinical experience with computerized tomography of the body.

    PubMed

    Stephens, D H; Sheedy, P F; Hattery, R R; Hartman, G W

    1976-04-01

    Computerized tomography of the body, now possible with an instrument that can complete a scan rapidly enough to permit patients to suspend respiration, adds an important new dimension to radiologic diagnosis. Cross-sectional anatomy is uniquely reconstructed to provide accurate diagnostic information for various disorders throughout the body.

  14. An overview of selected information storage and retrieval issues in computerized document processing

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Ihebuzor, Valentine U.

    1984-01-01

    The rapid development of computerized information storage and retrieval techniques has introduced the possibility of extending the word processing concept to document processing. A major advantage of computerized document processing is that the immense speed and storage capacity of computers relieves publishers of the tedious manual editing and composition tasks of traditional publishing. Furthermore, computerized document processing gives an author centralized control, the lack of which is a handicap of the traditional publishing operation. A survey of some computerized document processing techniques is presented, with emphasis on related information storage and retrieval issues. String matching algorithms are considered central to document information storage and retrieval and are also discussed.
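    As a concrete instance of the string matching algorithms the survey treats as central, here is the textbook Knuth-Morris-Pratt substring search (an illustrative implementation, not code from the paper):

```python
def kmp_search(text, pattern):
    """Return all start indices of pattern in text (Knuth-Morris-Pratt)."""
    if not pattern:
        return []
    # Failure function: length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, never re-reading a text character.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("abracadabra", "abra"))   # [0, 7]
```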

  15. [Complex automatic data processing in multi-profile hospitals].

    PubMed

    Dovzhenko, Iu M; Panov, G D

    1990-01-01

    The computerization of data processing in multidisciplinary hospitals is a key factor in raising the quality of medical care provided to the population, intensifying the work of the personnel, improving the curative and diagnostic process, and improving the use of resources. Even the limited experience with comprehensive computerization at the Botkin Hospital indicates that the automated system improves the quality of data processing, supports a high standard of patient examination, speeds the training of young specialists, and creates conditions for the continuing education of physicians through analysis of their own activity. For large hospitals, the most promising form of computerization is a comprehensive solution of administrative and curative-diagnostic tasks based on a hospital-wide terminal network and a hospital-wide data bank.

  16. How to Use the DX SYSTEM of Diagnostic Testing. Methodology Project.

    ERIC Educational Resources Information Center

    McArthur, David; Cabello, Beverly

    The DX SYSTEM of Diagnostic Testing is an easy-to-use computerized system for developing and administering diagnostic tests. A diagnostic test measures a student's mastery of a specific domain (skill or content area). It examines the necessary subskills hierarchically from the most to the least complex. The DX SYSTEM features tailored testing with…

  17. Combined single photon emission computerized tomography and conventional computerized tomography: Clinical value for the shoulder surgeons?

    PubMed Central

    Hirschmann, Michael T.; Schmid, Rahel; Dhawan, Ranju; Skarvan, Jiri; Rasch, Helmut; Friederich, Niklaus F.; Emery, Roger

    2011-01-01

    With the cases described, we strive to introduce single photon emission computerized tomography in combination with conventional computed tomography (SPECT/CT) to shoulder surgeons, illustrate the possible clinical value it may offer as a new diagnostic radiologic modality, and discuss its limitations. SPECT/CT may facilitate the establishment of diagnosis, the process of decision making, and further treatment for complex shoulder pathologies. Some of these advantages were highlighted in cases that are frequently seen in most shoulder clinics. PMID:22058640

  18. An Introduction to the Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing

    2007-01-01

    Computerized adaptive testing (CAT) has unsurpassable advantages over traditional testing. It has become the mainstream in large scale examinations in modern society. This paper gives a brief introduction to CAT including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…
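    The adaptive item-selection principle at the heart of CAT can be sketched as maximum-information selection under a two-parameter logistic (2PL) IRT model: at the current ability estimate, administer the unused item that is most informative. The item parameters below are invented for illustration:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_item(theta, pool, used):
    """Select the unused item with maximum information at the current theta."""
    return max((i for i in range(len(pool)) if i not in used),
               key=lambda i: item_info(theta, *pool[i]))

pool = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5)]   # invented (a, b) item parameters
first = pick_item(0.0, pool, set())            # most informative item at theta = 0
```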

  19. Variable-Length Computerized Adaptive Testing Based on Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying

    2013-01-01

    Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…

  20. [The role of multidetector computer tomography in diagnosis of acute pancreatitis].

    PubMed

    Lohanikhina, K Iu; Hordiienko, K P; Kozarenko, T M

    2014-10-01

    With the objective of improving the diagnostic semiotics of acute pancreatitis (AP), 35 patients were examined using a 64-slice Lightspeed VCT computed tomography scanner (GE, USA) with intravenous contrast enhancement in the arterial and portal phases. Based on analysis of the multidetector computed tomography (MDCT) investigations, the AP semiotics characteristic of the oedematous and destructive forms, diagnosed in 19 (54.3%) and 16 (45.7%) patients respectively, was systematized. A procedure for estimating the preserved functional capacity of the organ in the presence of pancreatic necrosis was elaborated, raising the diagnostic efficacy of the method by 5.3-9.4%.

  1. An Efficiency Balanced Information Criterion for Item Selection in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…

  2. A Framework for the Development of Computerized Adaptive Tests

    ERIC Educational Resources Information Center

    Thompson, Nathan A.; Weiss, David J.

    2011-01-01

    A substantial amount of research has been conducted over the past 40 years on technical aspects of computerized adaptive testing (CAT), such as item selection algorithms, item exposure controls, and termination criteria. However, there is little literature providing practical guidance on the development of a CAT. This paper seeks to collate some…

  3. A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification

    ERIC Educational Resources Information Center

    Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi

    2012-01-01

    This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…

  4. Controlling Item Exposure Conditional on Ability in Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Stocking, Martha L.; Lewis, Charles

    1998-01-01

    Ensuring item and pool security in a continuous testing environment is explored through a new method of controlling exposure rate of items conditional on ability level in computerized testing. Properties of this conditional control on exposure rate, when used in conjunction with a particular adaptive testing algorithm, are explored using simulated…

  5. Computerization of guidelines: a knowledge specification method to convert text to detailed decision tree for electronic implementation.

    PubMed

    Aguirre-Junco, Angel-Ricardo; Colombet, Isabelle; Zunino, Sylvain; Jaulent, Marie-Christine; Leneveut, Laurence; Chatellier, Gilles

    2004-01-01

    The initial step in the computerization of guidelines is knowledge specification from the prose text of the guidelines. We describe a knowledge specification method based on a structured and systematic analysis of the text that allows detailed specification of a decision tree. We use decision tables to validate the decision algorithm, and decision trees to specify and represent this algorithm along with elementary recommendation messages. Editing tools are also necessary to facilitate the validation process and the workflow between the expert physicians who validate the specified knowledge and the computer scientists who encode it in a guideline model. Applied to eleven different guidelines issued by an official agency, the method allowed quick and valid computerization and integration into a larger decision support system called EsPeR (Personalized Estimate of Risks). The quality of the guideline text itself, however, still needs further development. The method used for computerization could help define a framework usable at the initial step of guideline development, in order to produce guidelines ready for electronic implementation.
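    The decision-table representation described here can be sketched as data plus an evaluation function: each row pairs a condition with a recommendation message, and every row whose condition holds fires. The rules and messages below are invented placeholders, not content from EsPeR or the eleven guidelines:

```python
RULES = [
    # (condition over a patient record, recommendation message) -- invented rows
    (lambda p: p["age"] >= 50 and p["smoker"],
     "Advise smoking cessation and assess cardiovascular risk."),
    (lambda p: p["sbp"] >= 140,
     "Confirm elevated blood pressure and consider treatment."),
]

def recommend(patient):
    """Fire every decision-table row whose condition holds for this patient."""
    return [message for condition, message in RULES if condition(patient)]

advice = recommend({"age": 62, "smoker": True, "sbp": 150})   # both rules fire
```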

  6. Adaptive Decision Aiding in Computer-Assisted Instruction: Adaptive Computerized Training System (ACTS).

    ERIC Educational Resources Information Center

    Hopf-Weichel, Rosemarie; And Others

    This report describes results of the first year of a three-year program to develop and evaluate a new Adaptive Computerized Training System (ACTS) for electronics maintenance training. (ACTS incorporates an adaptive computer program that learns the student's diagnostic and decision value structure, compares it to that of an expert, and adapts the…

  7. Visualization techniques for tongue analysis in traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Pham, Binh L.; Cai, Yang

    2004-05-01

    Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based, and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture, and surface features on the tongue. In this paper, we investigate visualization techniques and processes that allow interactive data analysis, with the aim of merging computerized measurements with a human expert's diagnostic variables based on five diagnostic categories: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).

  8. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations of the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, implemented, and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
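    The iterative algebraic reconstruction the authors describe belongs to the ART/Kaczmarz family, whose core update projects the current image estimate onto each ray equation in turn. The toy two-pixel system below is invented for illustration and does not model the NECTAR geometry:

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Kaczmarz/ART: sweep the ray equations, projecting x onto each in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            denom = ai @ ai
            if denom > 0:
                x += relax * (bi - ai @ x) / denom * ai   # enforce ai . x = bi
    return x

# Two pixels seen from two angles: x0 + x1 = 3 and x0 - x1 = 1.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = art(A, b)   # converges to the attenuation values [2, 1]
```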

  9. Accurately Diagnosing Uric Acid Stones from Conventional Computerized Tomography Imaging: Development and Preliminary Assessment of a Pixel Mapping Software.

    PubMed

    Ganesan, Vishnu; De, Shubha; Shkumat, Nicholas; Marchini, Giovanni; Monga, Manoj

    2018-02-01

    Preoperative determination of uric acid stones from computerized tomography imaging would be of tremendous clinical use. We sought to design a software algorithm that could apply data from noncontrast computerized tomography to predict the presence of uric acid stones. Patients with pure uric acid and calcium oxalate stones were identified from our stone registry. Only stones greater than 4 mm which were clearly traceable from initial computerized tomography to final composition were included in analysis. A semiautomated computer algorithm was used to process image data. Average and maximum HU, eccentricity (deviation from a circle) and kurtosis (peakedness vs flatness) were automatically generated. These parameters were examined in several mathematical models to predict the presence of uric acid stones. A total of 100 patients, of whom 52 had calcium oxalate and 48 had uric acid stones, were included in the final analysis. Uric acid stones were significantly larger (12.2 vs 9.0 mm, p = 0.03) but calcium oxalate stones had higher mean attenuation (457 vs 315 HU, p = 0.001) and maximum attenuation (918 vs 553 HU, p <0.001). Kurtosis was significantly higher in each axis for calcium oxalate stones (each p <0.001). A composite algorithm using attenuation distribution pattern, average attenuation and stone size had overall 89% sensitivity, 91% specificity, 91% positive predictive value and 89% negative predictive value to predict uric acid stones. A combination of stone size, attenuation intensity and attenuation pattern from conventional computerized tomography can distinguish uric acid stones from calcium oxalate stones with high sensitivity and specificity. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
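    The kurtosis feature ("peakedness vs flatness" of the attenuation distribution) can be computed directly. The synthetic HU profiles below are invented to show why a sharply peaked attenuation distribution, as reported for calcium oxalate, yields a higher kurtosis than a flat uric-acid-like one; this is not the published model:

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis of a sample (a normal distribution gives about 3)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean())

peaked = np.array([0, 0, 0, 10, 0, 0, 0], float)   # sharp attenuation peak
flat = np.array([4, 5, 6, 5, 4, 5, 6], float)      # flat, homogeneous profile
# The peaked profile has a much higher kurtosis than the flat one.
```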

  10. Methodology for vocational psychodiagnostics of senior schoolchildren using information technologies

    NASA Astrophysics Data System (ADS)

    Bogdanovskaya, I. M.; Kosheleva, A. N.; Kiselev, P. B.; Davydova, Yu. A.

    2017-01-01

    The article identifies the role and main problems of vocational psychodiagnostics in modern socio-cultural conditions, and analyzes the potential of information technologies for the vocational psychodiagnostics of senior schoolchildren. It describes the theoretical and methodological grounds, content, and diagnostic potential of the computerized method, which includes three blocks of subtests to identify intellectual potential, personal qualities, professional interests and values, and career orientations, as well as subtests to analyze the specific life experience of senior schoolchildren. The diagnostic results allow an integrated psychodiagnostic conclusion with recommendations to be developed. The article also presents software architecture options for the method.

  11. Assembling a Computerized Adaptive Testing Item Pool as a Set of Linear Tests

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Ariel, Adelaide; Veldkamp, Bernard P.

    2006-01-01

    Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content…

  12. A New Item Selection Procedure for Mixed Item Type in Computerized Classification Testing.

    ERIC Educational Resources Information Center

    Lau, C. Allen; Wang, Tianyou

    This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…

  13. Clinical Use of the Pediatric Attention Disorders Diagnostic Screener for Children at Risk for Attention Deficit Hyperactivity Disorder: Case Illustrations

    ERIC Educational Resources Information Center

    Keiser, Ashley; Reddy, Linda

    2013-01-01

    The Pediatric Attention Disorders Diagnostic Screener is a multidimensional, computerized screening tool designed to assess attention and global aspects of executive functioning in children at risk for attention disorders. The screener consists of a semi-structured diagnostic interview, brief parent and teacher rating scales, 3 computer-based…

  14. Decreased rates of hypoglycemia following implementation of a comprehensive computerized insulin order set and titration algorithm in the inpatient setting.

    PubMed

    Sinha Gregory, Naina; Seley, Jane Jeffrie; Gerber, Linda M; Tang, Chin; Brillon, David

    2016-12-01

    More than one-third of hospitalized patients have hyperglycemia. Despite evidence that improving glycemic control leads to better outcomes, achieving recognized targets remains a challenge. The objective of this study was to evaluate the implementation of a computerized insulin order set and titration algorithm on rates of hypoglycemia and overall inpatient glycemic control. A prospective observational study evaluated the impact of a glycemic order set and titration algorithm on non-critical care medical and surgical inpatients in an academic medical center. The initial intervention was hospital-wide implementation of a comprehensive insulin order set. The secondary intervention was initiation of an insulin titration algorithm in two pilot medicine inpatient units. Point-of-care blood glucose reports were analyzed, including rates of hypoglycemia (BG < 70 mg/dL) and hyperglycemia (BG > 200 mg/dL in phase 1, BG > 180 mg/dL in phase 2). In the first phase of the study, implementation of the insulin order set was associated with decreased rates of hypoglycemia (1.92% vs 1.61%; p < 0.001) and increased rates of hyperglycemia (24.02% vs 27.27%; p < 0.001) from 2010 to 2011. In the second phase, addition of a titration algorithm was associated with decreased rates of hypoglycemia (2.57% vs 1.82%; p = 0.039) and increased rates of hyperglycemia (31.76% vs 41.33%; p < 0.001) from 2012 to 2013. A comprehensive computerized insulin order set and titration algorithm significantly decreased rates of hypoglycemia. This reduction in hypoglycemia was associated with increased rates of hyperglycemia. Hardwiring the algorithm into the electronic medical record may foster adoption.
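    Whether a change in hypoglycemia rate is significant can be checked with a standard two-proportion z-test of the kind such before/after studies use. The glucose-reading counts below are invented for illustration, since the abstract reports only percentages:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z statistic with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 1.92% vs 1.61% with an invented denominator of 10,000 readings per phase.
z = two_prop_z(192, 10_000, 161, 10_000)
```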

  15. A computational framework for converting textual clinical diagnostic criteria into the quality data model.

    PubMed

    Hong, Na; Li, Dingcheng; Yu, Yue; Xiu, Qiongying; Liu, Hongfang; Jiang, Guoqian

    2016-10-01

    Constructing standard and computable clinical diagnostic criteria is an important but challenging research field in the clinical informatics community. The Quality Data Model (QDM) is emerging as a promising information model for standardizing clinical diagnostic criteria. To develop and evaluate automated methods for converting textual clinical diagnostic criteria into a structured format using QDM. We used a clinical Natural Language Processing (NLP) tool known as cTAKES to detect sentences and annotate events in diagnostic criteria. We developed a rule-based approach for assigning the QDM datatype(s) to an individual criterion, whereas we invoked a machine learning algorithm based on Conditional Random Fields (CRFs) for annotating attributes belonging to each particular QDM datatype. We manually developed an annotated corpus as the gold standard and used standard measures (precision, recall and f-measure) for the performance evaluation. We harvested 267 individual criteria with the datatypes of Symptom and Laboratory Test from 63 textual diagnostic criteria. We manually annotated attributes and values in 142 individual Laboratory Test criteria. The average performance of our rule-based approach was 0.84 of precision, 0.86 of recall, and 0.85 of f-measure; the performance of CRFs-based classification was 0.95 of precision, 0.88 of recall and 0.91 of f-measure. We also implemented a web-based tool that automatically translates textual Laboratory Test criteria into the QDM XML template format. The results indicated that our approaches leveraging cTAKES and CRFs are effective in facilitating diagnostic criteria annotation and classification. Our NLP-based computational framework is a feasible and useful solution in developing diagnostic criteria representation and computerization. Copyright © 2016 Elsevier Inc. All rights reserved.
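    The rule-based datatype-assignment step can be sketched with keyword rules mapping a criterion sentence to a QDM datatype. The patterns below are invented toy rules, not the published rule set or the cTAKES pipeline:

```python
import re

# Invented keyword rules mapping a criterion sentence to a QDM datatype.
RULES = [
    (re.compile(r"\b(serum|plasma|level|count|mg/dL)\b", re.I), "Laboratory Test"),
    (re.compile(r"\b(pain|fever|nausea|fatigue)\b", re.I), "Symptom"),
]

def assign_datatype(criterion):
    """Return the first QDM datatype whose keyword rule matches the text."""
    for pattern, datatype in RULES:
        if pattern.search(criterion):
            return datatype
    return "Unclassified"
```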

  16. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles.

    PubMed

    Barker, Jocelyn; Hoogi, Assaf; Depeursinge, Adrien; Rubin, Daniel L

    2016-05-01

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p < 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically differentiate between the two cancer subtypes. Copyright © 2015 Elsevier B.V. All rights reserved.
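    The final aggregation step, combining per-tile decision values into a slide-level diagnosis by weighted voting, can be sketched as a weighted average. The tile scores and weights below are invented for illustration; they are not the study's data or its Elastic Net outputs:

```python
import numpy as np

def slide_diagnosis(decision_values, weights,
                    labels=("lower grade glioma", "GBM")):
    """Aggregate per-tile decision values into one slide-level call."""
    score = float(np.average(decision_values, weights=weights))
    return labels[int(score > 0)], score

tiles = np.array([0.8, 0.6, -0.2, 0.9])   # per-tile decision values (>0 leans GBM)
sizes = np.array([40, 25, 10, 25])        # tiles from larger clusters count more
label, score = slide_diagnosis(tiles, sizes)
```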

  17. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  18. Fuzzy logic algorithm for quantitative tissue characterization of diffuse liver diseases from ultrasound images.

    PubMed

    Badawi, A M; Derbala, A S; Youssef, A M

    1999-08-01

Computerized ultrasound tissue characterization has become an objective means for the diagnosis of liver diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent on the sonographer's experience. This often introduces bias into the diagnostic procedure and limits its objectivity and reproducibility. Computerized tissue characterization that quantitatively assists the sonographer in accurate differentiation, and thereby minimizes the degree of risk, is thus justified. Fuzzy logic has emerged as one of the most active areas in classification. In this paper, we present an approach that employs fuzzy reasoning techniques to automatically differentiate diffuse liver diseases using numerical quantitative features measured from the ultrasound images. Fuzzy rules were generated from over 140 cases consisting of normal, fatty, and cirrhotic livers. The input to the fuzzy system is an eight-dimensional vector of feature values: the mean gray level (MGL), the 10th percentile, the contrast (CON), the angular second moment (ASM), the entropy (ENT), the correlation (COR), the attenuation (ATTEN), and the speckle separation. The output of the fuzzy system is one of three categories: cirrhotic, fatty, or normal. The steps for differentiating the pathologies are data acquisition and feature extraction, followed by division of the input spaces of the measured quantitative data into fuzzy sets. Based on expert knowledge, fuzzy rules are generated and applied using fuzzy inference procedures to determine the pathology. Different membership functions are developed for the input spaces. This approach has yielded very good sensitivity and specificity for classifying diffuse liver pathologies. 
This classification technique can be used in the diagnostic process, together with history information and laboratory, clinical, and pathological examinations.
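The fuzzy inference scheme just described can be caricatured in a few lines: triangular membership functions partition each feature's input space, rules combine memberships with min (fuzzy AND), and the strongest rule wins. The breakpoints below, and the restriction to two of the eight features, are invented for illustration — the paper's actual membership functions and rule base are not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_liver(mgl, atten):
    """Toy fuzzy rule base over mean gray level and attenuation.
    Rule strength = min of antecedent memberships (fuzzy AND);
    the category with maximum strength is returned.
    Breakpoints are illustrative, not clinically derived."""
    mgl_low    = tri(mgl, -1, 30, 60)
    mgl_high   = tri(mgl, 50, 90, 131)
    atten_low  = tri(atten, -0.1, 0.3, 0.7)
    atten_high = tri(atten, 0.5, 0.9, 1.4)
    strengths = {
        "normal":    min(mgl_low, atten_low),
        "fatty":     min(mgl_high, atten_high),
        "cirrhosis": min(mgl_high, atten_low),
    }
    return max(strengths, key=strengths.get)
```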

  19. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information that is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased, due to the addition of local spatial and contextual information. When we compared the effects of training sample size and window size, the results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
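The windowing step — collecting small patches of various sizes so that each sample carries local spatial context — can be sketched in plain Python (a 2-D list of pixel values stands in for the image; the deep networks themselves are out of scope here):

```python
def extract_patches(image, size, stride):
    """Slide a size x size window over a 2-D image (list of rows) and
    return every patch. Larger sizes capture more spatial context,
    which is what improved segmentation accuracy in the study above."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append([row[x:x + size] for row in image[y:y + size]])
    return patches
```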

  20. Validation of diabetes mellitus and hypertension diagnosis in computerized medical records in primary health care

    PubMed Central

    2011-01-01

Background Computerized clinical records, which are incorporated in primary health care practice, have great potential for research. In order to use this information, data quality and reliability must be assessed to prevent compromising the validity of the results. The aim of this study is to validate the diagnoses of hypertension and diabetes mellitus in the computerized clinical records of primary health care, taking the diagnostic criteria established in the most prominently used clinical guidelines as the gold standard against which to measure sensitivity and specificity and to determine the predictive values. The gold standard for diabetes mellitus was the diagnostic criteria established in the 2003 American Diabetes Association Consensus Statement for diabetic subjects. The gold standard for hypertension was the diagnostic criteria established in the Joint National Committee report published in 2003. Methods A cross-sectional multicentre validation study of diabetes mellitus and hypertension diagnoses in computerized clinical records of primary health care was carried out. Diagnostic criteria from the most prominent clinical practice guidelines were taken as the reference standard. Sensitivity, specificity, positive and negative predictive values, and global agreement (with the kappa index) were calculated. Results were shown overall and stratified by sex and age group. Results The agreement for diabetes mellitus with the reference standard as determined by the guideline was almost perfect (κ = 0.990), with a sensitivity of 99.53%, a specificity of 99.49%, a positive predictive value of 91.23% and a negative predictive value of 99.98%. Hypertension diagnosis showed substantial agreement with the reference standard as determined by the guideline (κ = 0.778); the sensitivity was 85.22%, the specificity 96.95%, the positive predictive value 85.24%, and the negative predictive value 96.95%. 
Sensitivity results were worse in patients who also had diabetes and in those aged 70 years or over. Conclusions Our results substantiate the validity of using diagnoses of diabetes and hypertension found within the computerized clinical records for epidemiologic studies. PMID:22035202
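All of the validity measures reported above derive from a single 2×2 table of recorded diagnoses against the gold standard; a minimal sketch (the toy counts in the test are made up, not the study's data):

```python
def validity_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and Cohen's kappa
    from a 2x2 confusion table (gold standard vs. recorded diagnosis)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    p_obs = (tp + tn) / n                       # observed agreement
    p_exp = ((tp + fp) * (tp + fn)              # agreement expected
             + (fn + tn) * (fp + tn)) / n**2    # by chance alone
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, ppv, npv, kappa
```

Note that kappa discounts chance agreement, which is why a test can have high sensitivity and specificity yet only "substantial" rather than "almost perfect" agreement when prevalence is unbalanced.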

  1. Application of Adaptive Decision Aiding Systems to Computer-Assisted Instruction. Final Report, January-December 1974.

    ERIC Educational Resources Information Center

    May, Donald M.; And Others

    The minicomputer-based Computerized Diagnostic and Decision Training (CDDT) system described combines the principles of artificial intelligence, decision theory, and adaptive computer assisted instruction for training in electronic troubleshooting. The system incorporates an adaptive computer program which learns the student's diagnostic and…

  2. Controlled Trial Using Computerized Feedback to Improve Physicians' Diagnostic Judgments.

    ERIC Educational Resources Information Center

    Poses, Roy M.; And Others

    1992-01-01

    A study involving 14 experienced physicians investigated the effectiveness of a computer program (providing statistical feedback to teach a clinical diagnostic rule that predicts the probability of streptococcal pharyngitis), in conjunction with traditional lecture and periodic disease-prevalence reports. Results suggest the integrated method is a…

  3. Automated particle identification through regression analysis of size, shape and colour

    NASA Astrophysics Data System (ADS)

    Rodriguez Luna, J. C.; Cooper, J. M.; Neale, S. L.

    2016-04-01

Rapid point-of-care diagnostic tests, and tests to provide therapeutic information, are now available for a range of specific conditions, from the measurement of blood glucose levels for diabetes to card agglutination tests for parasitic infections. Due to a lack of specificity, these tests are often backed up by more conventional laboratory-based diagnostic methods; for example, a card agglutination test may be carried out for a suspected parasitic infection in the field and, if positive, a blood sample can then be sent to a laboratory for confirmation. The eventual diagnosis is often achieved by microscopic examination of the sample. In this paper we propose a computerized vision system for aiding the diagnostic process; this system uses a novel particle recognition algorithm to improve specificity and speed during diagnosis. We show the detection and classification of different types of cells in a diluted blood sample using regression analysis of their size, shape and colour. The first step is to isolate the objects to be tracked, using a Gaussian mixture model for background subtraction and binary opening and closing for noise suppression. After subtracting the objects of interest from the background, the next challenge is to predict whether a given object belongs to a certain category or not. This is a classification problem, and the output of the algorithm is a Boolean value (true/false). As such, the computer program should be able to "predict", with a reasonable level of confidence, whether a given particle belongs to the kind we are looking for. We show the use of a binary logistic regression analysis with three continuous predictors: size, shape and colour histogram. The results suggest these variables could be very useful in a logistic regression equation, as they proved to have relatively high predictive value on their own.
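As a sketch of the final classification step, a fitted binary logistic model reduces each particle's three predictors to a probability, and thresholding that probability yields the Boolean decision. The coefficients and intercept below are hypothetical placeholders, not the model fitted in this work:

```python
import math

def is_target_particle(size, shape, colour,
                       coef=(0.04, 2.0, -1.5), intercept=-3.0,
                       threshold=0.5):
    """Binary logistic regression on three continuous predictors
    (size, shape, colour summary). Coefficients are illustrative.
    Returns (boolean decision, predicted probability)."""
    z = intercept + coef[0] * size + coef[1] * shape + coef[2] * colour
    p = 1.0 / (1.0 + math.exp(-z))   # logistic (sigmoid) link
    return p >= threshold, p
```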

  4. Computerized tomography as a diagnostic aid in acute hemorrhagic leukoencephalitis.

    PubMed

    Rothstein, T L; Shaw, C M

    1983-03-01

    Computerized tomography (CT) in a pathologically proven case of acute hemorrhagic leukoencephalitis (AHL) showed a mass effect and increased absorption coefficient in the right hemisphere within 18 hours of the onset of neurological symptoms. The changes corresponded to the site of white matter edema, necrosis, and petechial hemorrhages demonstrated postmortem. The early changes of CT reflect the hyperacute nature of AHL and differ from those of herpes simplex encephalitis.

  5. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

Emphysema is characterized by destruction of lung tissue, with development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000, so it is possible to detect and quantify them using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects for them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step involves computing the antero-posterior density gradient caused by gravity and correcting for it. In a third step, motion artefacts are corrected for by means of normalized averaging, thresholding, and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique alone, it was not possible to separate persons with disease from those without; our algorithm improved the separation of the two groups considerably. The algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema at HRCT.
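The underlying density mask idea — count the fraction of voxels whose Hounsfield value falls below a fixed cutoff — is simple to sketch. The -950 HU threshold below is one commonly used choice, not necessarily the authors', and the artefact corrections described above are omitted:

```python
def emphysema_fraction(hu_slice, threshold=-950):
    """Fraction of voxels at or below the HU threshold in a 2-D slice
    (a plain density mask, before any artefact correction)."""
    voxels = [v for row in hu_slice for v in row]
    return sum(1 for v in voxels if v <= threshold) / len(voxels)
```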

  6. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    PubMed

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  7. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy

    PubMed Central

    Daskalakis, Constantine

    2015-01-01

Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.1 This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744

  8. Operational modelling: the mechanisms influencing TB diagnostic yield in an Xpert® MTB/RIF-based algorithm.

    PubMed

    Dunbar, R; Naidoo, P; Beyers, N; Langley, I

    2017-04-01

    Cape Town, South Africa. To compare the diagnostic yield for smear/culture and Xpert® MTB/RIF algorithms and to investigate the mechanisms influencing tuberculosis (TB) yield. We developed and validated an operational model of the TB diagnostic process, first with the smear/culture algorithm and then with the Xpert algorithm. We modelled scenarios by varying TB prevalence, adherence to diagnostic algorithms and human immunodeficiency virus (HIV) status. This enabled direct comparisons of diagnostic yield in the two algorithms to be made. Routine data showed that diagnostic yield had decreased over the period of the Xpert algorithm roll-out compared to the yield when the smear/culture algorithm was in place. However, modelling yield under identical conditions indicated a 13.3% increase in diagnostic yield from the Xpert algorithm compared to smear/culture. The model demonstrated that the extensive use of culture in the smear/culture algorithm and the decline in TB prevalence are the main factors contributing to not finding an increase in diagnostic yield in the routine data. We demonstrate the benefits of an operational model to determine the effect of scale-up of a new diagnostic algorithm, and recommend that policy makers use operational modelling to make appropriate decisions before new diagnostic algorithms are scaled up.
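A toy Monte Carlo version of such an operational model illustrates the paper's central point: diagnostic yield (cases diagnosed per presumptive patient) depends on prevalence as well as on the algorithm, so yields from routine data collected in different periods are not directly comparable. All parameter values below are invented for illustration, and each algorithm is collapsed into a single overall sensitivity:

```python
import random

def diagnostic_yield(n_presumptive, prevalence, sensitivity, seed=0):
    """Simulate presumptive TB patients flowing through a diagnostic
    algorithm summarized by one overall sensitivity; return the yield
    (diagnosed cases / presumptive patients)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_presumptive):
        has_tb = rng.random() < prevalence
        if has_tb and rng.random() < sensitivity:
            detected += 1
    return detected / n_presumptive
```

Running both algorithm scenarios at identical prevalence isolates the algorithm effect, which is what the operational model enables and the routine data cannot.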

  9. Validation of Computerized Adaptive Testing in an Outpatient Non-academic Setting: the VOCATIONS Trial

    PubMed Central

    Achtyes, Eric Daniel; Halstead, Scott; Smart, LeAnn; Moore, Tara; Frank, Ellen; Kupfer, David J.; Gibbons, Robert

    2015-01-01

    Objective Computerized adaptive tests (CAT) provide an alternative to fixed-length assessments for diagnostic screening and severity measurement of psychiatric disorders. We sought to cross-sectionally validate a suite of computerized adaptive tests for mental health (CAT-MH) in a community psychiatric sample. Methods 145 adult psychiatric outpatients and controls were prospectively evaluated with CAT for depression, mania and anxiety symptoms, compared to gold-standard psychiatric assessments including: Structured Clinical Interview for DSM IV-TR (SCID), Hamilton Rating Scale for Depression (HAM-D25), Patient Health Questionnaire (PHQ-9), Center for Epidemiologic Studies Depression Scale (CES-D), and Global Assessment of Functioning (GAF). Results Sensitivity and specificity for the computerized adaptive diagnostic test for depression (CAD-MDD) were .96 and .64, respectively (.96 and 1.00 for major depression versus controls). CAT for depression severity (CAT-DI) correlated well to standard depression scales HAM-D25 (r=.79), PHQ-9 (r=.90), CES-D (r=.90) and had OR=27.88 for current SCID major depressive disorder diagnosis across its range. CAT for anxiety severity (CAT-ANX) correlated to HAM-D25 (r=.73), PHQ-9 (r=.78), CES-D (r=.81), and had OR=11.52 for current SCID generalized anxiety disorder diagnosis across its range. CAT for mania severity (CAT-MANIA) did not correlate well to HAM-D25 (r=.31), PHQ-9 (r=.37), CES-D (r=.39), but had an OR=11.56 for a current SCID bipolar diagnosis across its range. Participants found the CAT-MH suite of tests acceptable and easy to use, averaging 51.7 items and 9.4 minutes to complete the full battery. Conclusions Compared to current gold-standard diagnostic and assessment measures, CAT-MH provides an effective, rapidly-administered assessment of psychiatric symptoms. PMID:26030317

  10. Diagnostic support for selected neuromuscular diseases using answer-pattern recognition and data mining techniques: a proof of concept multicenter prospective trial.

    PubMed

    Grigull, Lorenz; Lechner, Werner; Petri, Susanne; Kollewe, Katja; Dengler, Reinhard; Mehmecke, Sandra; Schumacher, Ulrike; Lücke, Thomas; Schneider-Gold, Christiane; Köhler, Cornelia; Güttsches, Anne-Katrin; Kortum, Xiaowei; Klawonn, Frank

    2016-03-08

Diagnosis of neuromuscular diseases in primary care is often challenging. Rare diseases such as Pompe disease are easily overlooked by the general practitioner. We therefore aimed to develop a diagnostic support tool using patient-oriented questions and combined data mining algorithms recognizing answer patterns in individuals with selected neuromuscular diseases. A multicenter prospective study for the proof of concept was conducted thereafter. First, 16 interviews with patients were conducted, focusing on their pre-diagnostic observations and experiences. From these interviews, we developed a questionnaire with 46 items. Then, patients with diagnosed neuromuscular diseases as well as patients without such a disease answered the questionnaire to establish a database for data mining. For proof of concept, initially only six diagnoses were chosen: myotonic dystrophy and myotonia (MdMy), Pompe disease (MP), amyotrophic lateral sclerosis (ALS), polyneuropathy (PNP), spinal muscular atrophy (SMA), and other neuromuscular diseases, plus a category of no neuromuscular disease (NND). A prospective study was performed to validate the automated malleable system, which included six different classification methods combined in a fusion algorithm proposing a final diagnosis. Finally, new diagnoses were incorporated into the system. In total, questionnaires from 210 individuals were used to train the system, and 89.5 % of diagnoses were correct during cross-validation. The sensitivity of the system was 93-97 % for individuals with MP, with MdMy and without neuromuscular diseases, but only 69 % in SMA and 81 % in ALS patients. In the prospective trial, 57/64 (89 %) diagnoses were predicted correctly by the computerized system. All questions, or rather all answers, increased the diagnostic accuracy of the system, with the best results reached by the fusion of different classifier methods. Receiver operating characteristic (ROC) and p-value analyses confirmed the results. 
A questionnaire-based diagnostic support tool using data mining methods exhibited good results in predicting selected neuromuscular diseases. Due to the variety of neuromuscular diseases, additional studies are required to measure beneficial effects in the clinical setting.
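The fusion step — several classifiers each proposing a diagnosis, combined into one final call — can be sketched as (optionally weighted) majority voting. The labels and weights below are illustrative; the study fuses six specific classification methods that are not reproduced here:

```python
from collections import Counter

def fuse_diagnoses(predictions, weights=None):
    """Combine the diagnoses proposed by several classifiers into a
    single final diagnosis by weighted majority vote."""
    weights = weights or [1.0] * len(predictions)
    tally = Counter()
    for label, w in zip(predictions, weights):
        tally[label] += w
    # most_common(1) returns the label with the largest total weight
    return tally.most_common(1)[0][0]
```

Weighting lets a classifier that is known to be more reliable (say, on cross-validation) count for more than the others.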

  11. Computer-Interpreted Electrocardiograms: Benefits and Limitations.

    PubMed

    Schläpfer, Jürg; Wellens, Hein J

    2017-08-29

    Computerized interpretation of the electrocardiogram (CIE) was introduced to improve the correct interpretation of the electrocardiogram (ECG), facilitating health care decision making and reducing costs. Worldwide, millions of ECGs are recorded annually, with the majority automatically analyzed, followed by an immediate interpretation. Limitations in the diagnostic accuracy of CIE were soon recognized and still persist, despite ongoing improvement in ECG algorithms. Unfortunately, inexperienced physicians ordering the ECG may fail to recognize interpretation mistakes and accept the automated diagnosis without criticism. Clinical mismanagement may result, with the risk of exposing patients to useless investigations or potentially dangerous treatment. Consequently, CIE over-reading and confirmation by an experienced ECG reader are essential and are repeatedly recommended in published reports. Implementation of new ECG knowledge is also important. The current status of automated ECG interpretation is reviewed, with suggestions for improvement. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  12. Otolaryngology and ophthalmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanafee, W.N.

A literature review, with 227 references, on the diagnostic use of computerized tomography for head and neck problems is presented. The anatomy, congenital malformations, infectious diseases, and neoplasms of the auditory organs, paranasal sinuses, pharynx, larynx and salivary glands are examined in detail. A major impetus to the use of computerized tomography has been the realization by the health care industry that CT scanning offers details of tumors in the head and neck area that are not available from other modalities. (KRM)

  13. Computerized scoring algorithms for the Autobiographical Memory Test.

    PubMed

    Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip

    2018-02-01

Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive response format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performances in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
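The gist of such a scorer — linguistic cues weighted into a single decision value whose sign gives the specific/nonspecific call — can be caricatured in a few lines. The cue words and weights below are invented, and the published algorithms use proper natural-language-processing features fed to support vector machines rather than raw substring matches:

```python
# Hypothetical cue weights: positive cues suggest a specific memory
# (a single event), negative cues suggest an overgeneral one.
CUE_WEIGHTS = {
    "yesterday": 1.5, "once": 1.0, "when i": 1.2, "that day": 1.3,
    "always": -1.4, "every": -1.2, "usually": -1.3, "whenever": -1.5,
}

def decision_value(response, bias=-0.5):
    """Linear score over cue occurrences, loosely analogous to an
    SVM decision value: magnitude reflects confidence, sign the class."""
    text = response.lower()
    return bias + sum(w for cue, w in CUE_WEIGHTS.items() if cue in text)

def is_specific(response):
    return decision_value(response) > 0
```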

  14. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  15. The use of a computerized algorithm to determine single cardiac cell volumes.

    PubMed

    Marino, T A; Cook, L; Cook, P N; Dwyer, S J

    1981-04-01

Single cardiac muscle cell volume data have been difficult to obtain, especially because the shape of a cell is quite complex. With the aid of a surface reconstruction method, a cell volume estimation algorithm has been developed that can be used on serial sections of cells. The cell surface is reconstructed by means of triangular tiles, so that the cell is represented as a polyhedron. When this algorithm was tested on computer-generated surfaces of known volume, the difference was less than 1.6%. Serial sections of two phantoms of known volume were also reconstructed, and a comparison of the mathematically derived volumes with the computed volume estimations gave a percent difference of between 2.8% and 4.1%. Finally, cell volumes derived using conventional methods were compared with volumes calculated using the algorithm. The mean atrial muscle cell volume derived using conventional methods was 7752.7 +/- 644.7 μm³, while the mean atrial muscle cell volume estimated by the computerized algorithm was 7110.6 +/- 625.5 μm³. For AV bundle cells, the mean cell volume obtained by conventional methods was 484.4 +/- 88.8 μm³ and the volume derived from the computer algorithm was 506.0 +/- 78.5 μm³. The differences between the volumes calculated using conventional methods and the algorithm were not statistically significant.
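The volume of a cell represented as a closed polyhedron of triangular tiles follows directly from the divergence theorem: V = (1/6) Σ v0 · (v1 × v2), summed over outward-oriented faces. A sketch (the tetrahedron in the test stands in for a reconstructed cell surface; this is the standard mesh-volume identity, not the paper's exact code):

```python
def polyhedron_volume(triangles):
    """Volume of a closed surface given as outward-oriented triangles,
    each a tuple of three (x, y, z) vertices (divergence theorem)."""
    vol = 0.0
    for (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) in triangles:
        # v0 . (v1 x v2): six times the signed volume of the
        # tetrahedron formed by the face and the origin
        vol += (x0 * (y1 * z2 - z1 * y2)
                + y0 * (z1 * x2 - x1 * z2)
                + z0 * (x1 * y2 - y1 * x2))
    return vol / 6.0
```

Consistent outward orientation of every tile is what makes the signed contributions of tetrahedra outside the cell cancel.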

  16. The Impact of Receiving the Same Items on Consecutive Computer Adaptive Test Administrations.

    ERIC Educational Resources Information Center

    O'Neill, Thomas; Lunz, Mary E.; Thiede, Keith

    2000-01-01

    Studied item exposure in a computerized adaptive test when the item selection algorithm presents examinees with questions they were asked in a previous test administration. Results with 178 repeat examinees on a medical technologists' test indicate that the combined use of an adaptive algorithm to select items and latent trait theory to estimate…

  17. The computerized adaptive diagnostic test for major depressive disorder (CAD-MDD): a screening tool for depression.

    PubMed

    Gibbons, Robert D; Hooker, Giles; Finkelman, Matthew D; Weiss, David J; Pilkonis, Paul A; Frank, Ellen; Moore, Tara; Kupfer, David J

    2013-07-01

    To develop a computerized adaptive diagnostic screening tool for depression that decreases patient and clinician burden and increases sensitivity and specificity for clinician-based DSM-IV diagnosis of major depressive disorder (MDD). 656 individuals with and without minor and major depression were recruited from a psychiatric clinic and a community mental health center and through public announcements (controls without depression). The focus of the study was the development of the Computerized Adaptive Diagnostic Test for Major Depressive Disorder (CAD-MDD) diagnostic screening tool based on a decision-theoretical approach (random forests and decision trees). The item bank consisted of 88 depression scale items drawn from 73 depression measures. Sensitivity and specificity for predicting clinician-based Structured Clinical Interview for DSM-IV Axis I Disorders diagnoses of MDD were the primary outcomes. Diagnostic screening accuracy was then compared to that of the Patient Health Questionnaire-9 (PHQ-9). An average of 4 items per participant was required (maximum of 6 items). Overall sensitivity and specificity were 0.95 and 0.87, respectively. For the PHQ-9, sensitivity was 0.70 and specificity was 0.91. High sensitivity and reasonable specificity for a clinician-based DSM-IV diagnosis of depression can be obtained using an average of 4 adaptively administered self-report items in less than 1 minute. Relative to the currently used PHQ-9, the CAD-MDD dramatically increased sensitivity while maintaining similar specificity. As such, the CAD-MDD will identify more true positives (lower false-negative rate) than the PHQ-9 using half the number of items. Inexpensive (relative to clinical assessment), efficient, and accurate screening of depression in the settings of primary care, psychiatric epidemiology, molecular genetics, and global health are all direct applications of the current system. © Copyright 2013 Physicians Postgraduate Press, Inc.
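The adaptive branching that lets such a screener stop after a handful of items can be sketched as a decision tree walked against a respondent's answers: each answer determines the next item, so different respondents see different (short) item sequences. The items and tree below are invented placeholders, far simpler than the random-forest machinery and 88-item bank described, not the CAD-MDD itself:

```python
# Each internal node asks one yes/no item; leaves are screening outcomes.
TREE = {
    "item": "depressed mood most of the day?",
    "yes": {
        "item": "lasting two weeks or more?",
        "yes": "screen positive",
        "no": {
            "item": "loss of interest or pleasure?",
            "yes": "screen positive",
            "no": "screen negative",
        },
    },
    "no": {
        "item": "loss of interest or pleasure?",
        "yes": "screen positive",
        "no": "screen negative",
    },
}

def screen(answers, node=TREE):
    """Walk the tree, asking only the items the path requires.
    Returns (outcome, number of items administered)."""
    asked = 0
    while isinstance(node, dict):
        node = node["yes" if answers[node["item"]] else "no"]
        asked += 1
    return node, asked
```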

  18. Improving the utility of the fine motor skills subscale of the comprehensive developmental inventory for infants and toddlers: a computerized adaptive test.

    PubMed

    Huang, Chien-Yu; Tung, Li-Chen; Chou, Yeh-Tai; Chou, Willy; Chen, Kuan-Lin; Hsieh, Ching-Lin

    2017-07-27

This study aimed at improving the utility of the fine motor subscale of the Comprehensive Developmental Inventory for Infants and Toddlers (CDIIT) by developing a computerized adaptive test of fine motor skills. We built an item bank for the computerized adaptive test using the fine motor subscale items that fit the Rasch model. We also examined the psychometric properties and efficiency of the computerized adaptive test with simulated computerized adaptive tests. Data from 1742 children with suspected developmental delays were retrieved. The mean scores of the fine motor subscale increased with age group (mean scores = 1.36-36.97). The computerized adaptive test contains 31 items meeting the Rasch model's assumptions (infit mean square = 0.57-1.21, outfit mean square = 0.11-1.17). For children aged 6-71 months, the computerized adaptive test had high Rasch person reliability (average reliability >0.90), high concurrent validity (rs = 0.67-0.99), adequate to excellent diagnostic accuracy (area under the receiver operating characteristic curve = 0.71-1.00), and large responsiveness (effect size = 1.05-3.93). The computerized adaptive test used 48-84% fewer items than the fine motor subscale of the CDIIT, yet was as reliable and valid. Implications for Rehabilitation: We developed a computerized adaptive test based on the Comprehensive Developmental Inventory for Infants and Toddlers (CDIIT) for assessing fine motor skills. The computerized adaptive test is efficient because it uses fewer items than the original measure and automatically presents the results right after the test is completed. The computerized adaptive test is as reliable and valid as the CDIIT.
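Under the Rasch model, the probability of passing an item depends only on the gap between the child's ability θ and the item difficulty b, and an adaptive test achieves its item savings by always administering the unanswered item with maximum Fisher information P(1 − P), i.e., difficulty closest to the current ability estimate. A minimal sketch with invented item names and difficulties (ability estimation after each response is omitted):

```python
import math

def p_pass(theta, b):
    """Rasch model: probability of passing an item of difficulty b
    for a child of ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, administered):
    """Pick the unadministered item with maximum Fisher information,
    I = P * (1 - P), which peaks when b is closest to theta."""
    candidates = [i for i in bank if i not in administered]
    def info(item):
        p = p_pass(theta, bank[item])
        return p * (1.0 - p)
    return max(candidates, key=info)
```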

  19. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    PubMed

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database; these can be retrieved later to display patients' problems or to analyze clinical data.

  20. Mini-Stroke vs. Regular Stroke: What's the Difference?

    MedlinePlus

    ... may need various diagnostic tests, such as a magnetic resonance imaging (MRI) scan or a computerized tomography ( ...

  1. Investigation of computer-aided colonic crypt pattern analysis

    NASA Astrophysics Data System (ADS)

    Qi, Xin; Pan, Yinsheng; Sivak, Michael V., Jr.; Olowe, Kayode; Rollins, Andrew M.

    2007-02-01

    Colorectal cancer is the second leading cause of cancer-related death in the United States. Approximately 50% of these deaths could be prevented by earlier detection through screening. Magnification chromoendoscopy is a technique which utilizes tissue stains applied to the gastrointestinal mucosa and high-magnification endoscopy to better visualize and characterize lesions. Prior studies have shown that shapes of colonic crypts change with disease and show characteristic patterns. Current methods for assessing colonic crypt patterns are somewhat subjective and not standardized. Computerized algorithms could be used to standardize colonic crypt pattern assessment. We have imaged resected colonic mucosa in vitro (N = 70) using methylene blue dye and a surgical microscope to approximately simulate in vivo imaging with magnification chromoendoscopy. We have developed a method of computerized processing to analyze the crypt patterns in the images. The quantitative image analysis consists of three steps. First, the crypts within the region of interest of colonic tissue are semi-automatically segmented using watershed morphological processing. Second, crypt size and shape parameters are extracted from the segmented crypts. Third, each sample is assigned to a category according to the Kudo criteria. The computerized classification is validated by comparison with human classification using the Kudo classification criteria. The computerized colonic crypt pattern analysis algorithm will enable a study of in vivo magnification chromoendoscopy of colonic crypt pattern correlated with risk of colorectal cancer. This study will assess the feasibility of screening and surveillance of the colon using magnification chromoendoscopy.
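
    The second step described above, extracting size and shape parameters from the segmented crypts, can be illustrated with a minimal sketch. The shoelace area and circularity descriptors below are standard shape measures assumed for illustration; the paper does not specify its exact parameter set.

```python
import math

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon (crypt outline)."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def perimeter(pts):
    """Total edge length of the polygon."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def circularity(pts):
    """4*pi*A / P^2: 1.0 for a circle, smaller for irregular crypt outlines."""
    a, p = polygon_area(pts), perimeter(pts)
    return 4.0 * math.pi * a / (p * p)

# A square crypt outline gives circularity pi/4, about 0.785.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(round(circularity(square), 3))  # -> 0.785
```

    Descriptors like these, computed per segmented crypt, are the kind of features a Kudo-style classifier would consume in the third step.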

  2. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    PubMed

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, these general capabilities lead to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.
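
    The median-effect equation itself is compact enough to sketch directly: fa/fu = (D/Dm)^m, where fa is the fraction affected, Dm the median-effect dose, and m the sigmoidicity. The sketch below implements it, its inverse, and the Chou-Talalay combination index built from them; all parameter values are illustrative.

```python
def fraction_affected(dose, dm, m):
    """Median-effect equation: fa/fu = (D/Dm)^m, so fa = 1 / (1 + (Dm/D)^m)."""
    return 1.0 / (1.0 + (dm / dose) ** m)

def dose_for_effect(fa, dm, m):
    """Invert the median-effect equation: D = Dm * (fa / (1 - fa))^(1/m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, dm1, m1, dm2, m2):
    """Chou-Talalay CI at effect level fa: <1 synergy, =1 additive, >1 antagonism.
    Parameters for each drug are illustrative assumptions."""
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# At D = Dm, half the system is affected by definition.
print(fraction_affected(2.0, dm=2.0, m=1.5))  # -> 0.5
# Two drugs each at half their ED50-equivalent dose are exactly additive.
print(combination_index(1.0, 1.0, 0.5, 2.0, 1.5, 2.0, 1.5))  # -> 1.0
```

    Fitting Dm and m from a handful of dose-effect points is what lets the approach run on the "small number of data points" the abstract emphasizes.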

  3. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development

    PubMed Central

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the “median” is the common link and universal reference point in biological systems, these general capabilities lead to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development. PMID:22016837

  4. Health technology assessment review: Computerized glucose regulation in the intensive care unit - how to create artificial control

    PubMed Central

    2009-01-01

    Current care guidelines recommend glucose control (GC) in critically ill patients. To achieve GC, many ICUs have implemented a (nurse-based) paper protocol. However, such protocols are often complex and time-consuming, and can cause iatrogenic hypoglycemia. Computerized glucose regulation protocols may improve patient safety, efficiency, and nurse compliance. Such computerized clinical decision support systems (CDSSs) use more complex logic to provide an insulin infusion rate based on previous blood glucose levels and other parameters. A computerized CDSS for glucose control has the potential to reduce overall workload, reduce the chance of human cognitive failure, and improve glucose control. Several computer-assisted glucose regulation programs have been published recently. In order of increasing complexity, the three main types of algorithms used are computerized flowcharts, Proportional-Integral-Derivative (PID) control, and Model Predictive Control (MPC). PID is essentially a closed-loop feedback system, whereas MPC models the behavior of glucose and insulin in ICU patients. Although the best approach has not yet been determined, it should be noted that PID controllers are generally thought to be more robust than MPC systems. The computerized CDSSs that are most likely to emerge are those that are fully part of the routine workflow, use patient-specific characteristics, and apply variable sampling intervals. PMID:19849827
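
    Of the three algorithm types listed, the PID controller is the simplest to sketch. The following is a minimal, illustrative discrete PID loop that maps a blood glucose error to an insulin infusion rate; the gains, target, units, and zero-clamping are assumptions for illustration, not values from any published protocol.

```python
class PIDGlucoseController:
    """Minimal discrete PID sketch: computes an insulin infusion rate
    from the error between measured and target blood glucose."""

    def __init__(self, kp, ki, kd, target):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target
        self.integral = 0.0
        self.prev_error = None

    def update(self, glucose, dt):
        error = glucose - self.target  # positive when glucose is above target
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Insulin rate cannot go negative; units are purely illustrative.
        return max(0.0, self.kp * error + self.ki * self.integral + self.kd * deriv)

ctrl = PIDGlucoseController(kp=0.02, ki=0.001, kd=0.01, target=110.0)
rate = ctrl.update(glucose=180.0, dt=1.0)
print(round(rate, 3))  # -> 1.47
```

    An MPC controller would instead simulate a glucose-insulin model forward over a horizon and optimize the infusion sequence, which is why the abstract ranks it as the more complex (and less robust) option.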

  5. Advanced soft computing diagnosis method for tumour grading.

    PubMed

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts: eight represented the main histopathological features for tumour grading, and the ninth represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decisions was proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology and in augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared with other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
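
    The FCM inference loop described here (concept activations updated from the weighted influences of other concepts and squashed by a sigmoid) can be sketched as follows. The three-concept map and its weights are toy values for illustration, not the nine-concept model from the study, and the AHL weight-adaptation step is omitted.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(activations, weights):
    """One FCM update: each concept sums the weighted influences of the
    other concepts plus its own previous value, squashed by a sigmoid."""
    n = len(activations)
    return [
        sigmoid(activations[j] +
                sum(activations[i] * weights[i][j] for i in range(n) if i != j))
        for j in range(n)
    ]

def run_fcm(activations, weights, steps=20, tol=1e-6):
    """Iterate until the concept values converge (or the step budget runs out)."""
    for _ in range(steps):
        new = fcm_step(activations, weights)
        if max(abs(a - b) for a, b in zip(new, activations)) < tol:
            return new
        activations = new
    return activations

# Toy 3-concept map: two "feature" concepts positively driving a "grade" concept.
W = [[0.0, 0.0, 0.7],
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
out = run_fcm([0.8, 0.6, 0.0], W)
print(round(out[2], 3))
```

    In the study's model, the converged value of the grade concept is what gets thresholded into grade I, II, or III; AHL additionally adjusts W during training.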

  6. Basic physiological systems indicator's informative assessment for children and adolescents obesity diagnosis tasks

    NASA Astrophysics Data System (ADS)

    Marukhina, O. V.; Berestneva, O. G.; Emelyanova, Yu A.; Romanchukov, S. V.; Petrova, L.; Lombardo, C.; Kozlova, N. V.

    2018-05-01

    The healthcare computerization creates opportunities to the clinical decision support system development. In the course of diagnosis, doctor manipulates a considerable amount of data and makes a decision in the context of uncertainty basing upon the first-hand experience and knowledge. The situation is exacerbated by the fact that the knowledge scope in medicine is incrementally growing, but the decision-making time does not increase. The amount of medical malpractice is growing and it leads to various negative effects, even the mortality rate increase. IT-solution's development for clinical purposes is one of the most promising and efficient ways to prevent these effects. That is why the efforts of many IT specialists are directed to the doctor's heuristics simulating software or expert-based medical decision-making algorithms development. Thus, the objective of this study is to develop techniques and approaches for the body physiological system's informative value assessment index for the obesity degree evaluation based on the diagnostic findings.

  7. TRANPLAN and GIS support for agencies in Alabama

    DOT National Transportation Integrated Search

    2001-08-06

    Travel demand models are computerized programs intended to forecast future roadway traffic volumes for a community based on selected socioeconomic variables and travel behavior algorithms. Software to operate these travel demand models is currently a...

  8. A computerized clinical decision support system as a means of implementing depression guidelines.

    PubMed

    Trivedi, Madhukar H; Kern, Janet K; Grannemann, Bruce D; Altshuler, Kenneth Z; Sunderajan, Prabha

    2004-08-01

    The authors describe the history and current use of computerized systems for implementing treatment guidelines in general medicine as well as the development, testing, and early use of a computerized decision support system for depression treatment in "real-world" clinical settings in Texas. In 1999 health care experts from Europe and the United States met to confront the well-documented challenges of implementing treatment guidelines and to identify strategies for improvement. They suggested integrating guidelines into computer systems embedded in the clinical workflow. Several studies have demonstrated improvements in physicians' adherence to guidelines when such guidelines are provided in a computerized format. Although computerized decision support systems are being used in many areas of medicine and have demonstrated improved patient outcomes, their use in psychiatric illness is limited. The authors designed and developed a computerized decision support system for the treatment of major depressive disorder by using evidence-based guidelines, transferring the knowledge gained from the Texas Medication Algorithm Project (TMAP). This computerized decision support system (CompTMAP) provides support in diagnosis, treatment, follow-up, and preventive care and can be incorporated into the clinical setting. CompTMAP has gone through extensive testing to ensure accuracy and reliability. Physician surveys have indicated a positive response to CompTMAP, although the sample was insufficient for statistical testing. CompTMAP is part of a new era of comprehensive computerized decision support systems that take advantage of advances in automation and provide more complete clinical support to physicians in clinical practice.

  9. The poppy seed test for colovesical fistula: big bang, little bucks!

    PubMed

    Kwon, Eric O; Armenakas, Noel A; Scharf, Stephen C; Panagopoulos, Georgia; Fracchia, John A

    2008-04-01

    Diagnosis of a colovesical fistula is often challenging, and usually involves numerous invasive and expensive tests and procedures. The poppy seed test stands out as an exception to this rule. We evaluated the accuracy and cost-effectiveness of various established diagnostic tests used to evaluate a suspected colovesical fistula. We identified 20 prospectively entered patients with surgically confirmed colovesical fistulas between 2000 and 2006. Each patient was evaluated preoperatively with a chromium-51 (⁵¹Cr) nuclear study, computerized tomography of the abdomen and pelvis with oral and intravenous contrast medium, and the poppy seed test. Costs were calculated using institutional charges, 2006 Medicare limiting approved charges and the market price, respectively. The z test was used to compare the proportion of patients who tested positive for a fistula with each of these modalities. The chromium study was positive in 16 of 20 patients (80%) at a cost of $490.83 per study. Computerized tomography was positive in 14 of 20 patients (70%) at a cost of $652.92 per study. The poppy seed test was positive in 20 of 20 patients (100%) at a cost of $5.37 per study. The difference in the proportion of patients who tested positive for a fistula on computerized tomography and the poppy seed test was statistically significant (p = 0.03). There was no difference between the chromium group and the computerized tomography or poppy seed group (p = 0.72 and 0.12, respectively). The poppy seed test is an accurate, convenient and inexpensive diagnostic test. It is an ideal initial consideration for evaluating a suspected colovesical fistula.
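
    The proportion comparison used here can be reproduced with a few lines of arithmetic. The sketch below is the plain pooled two-proportion z statistic without continuity correction, so its p-value need not match the paper's reported values exactly; the counts are taken from the abstract (20/20 poppy seed vs. 14/20 CT).

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic and two-sided p-value
    (normal approximation, no continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Poppy seed positive in 20/20 vs. CT positive in 14/20.
z, p = two_proportion_z(20, 20, 14, 20)
print(round(p, 3))
```

    With one proportion at 100%, the normal approximation is strained; an exact test (e.g. Fisher's) would be the more defensible choice at these sample sizes, which may account for the difference from the published p = 0.03.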

  10. The effect of different exercise protocols and regression-based algorithms on the assessment of the anaerobic threshold.

    PubMed

    Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O

    2014-09-01

    The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years; body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for the V̇O2 (step = 1.7 [0.6] L·min⁻¹ and ramp = 1.5 [0.4] L·min⁻¹) and heart rate (HR) (step = 133 [21] b·min⁻¹ and ramp = 124 [15] b·min⁻¹) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggested that protocol selection, but not the choice of regression-based algorithm, can affect the assessment of V̇O2 and HR at the AT.
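
    A common regression-based way to estimate a threshold such as the AT is to fit two line segments to the gas-exchange data and pick the breakpoint that minimizes the combined squared error. The sketch below shows this generic approach on synthetic data; it is an illustration of the idea, not any of the three specific algorithms compared in the study.

```python
def linfit_sse(xs, ys):
    """Ordinary least-squares line fit; returns the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def breakpoint_index(xs, ys):
    """Index splitting the data into two segments with minimal combined SSE,
    a regression-based way to locate a threshold in exercise data."""
    best_i, best_sse = None, float("inf")
    for i in range(2, len(xs) - 2):  # require >= 2 points per segment
        sse = linfit_sse(xs[:i], ys[:i]) + linfit_sse(xs[i:], ys[i:])
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# Synthetic VCO2-vs-VO2 style data: slope 1 below the threshold, slope 2 above it.
xs = list(range(10))
ys = [x if x < 5 else 5 + 2 * (x - 5) for x in xs]
print(breakpoint_index(xs, ys))  # -> 5
```

    The study's point is that estimates from such algorithms agree with each other, while switching between ramp and step protocols shifts the underlying data (and hence the V̇O2 and HR at the detected breakpoint).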

  11. Regulatory issues for computerized electrocardiographic devices.

    PubMed

    Muni, Neal I; Ho, Charles; Mallis, Elias

    2004-01-01

    Computerized electrocardiogram (ECG) devices are regulated in the U.S. by the FDA Center for Devices and Radiological Health (CDRH). This article aims to highlight the salient points of the FDA regulatory review process, including the important distinction between a "tool" claim and a "clinical" claim in the intended use of a computerized ECG device. Specifically, a tool claim relates to the ability of the device to accurately measure a certain ECG parameter, such as T-wave alternans (TWA), while a clinical claim imputes a particular health hazard associated with the identified parameter, such as increased risk of ventricular tachyarrhythmia or sudden death. Given that both types of claims are equally important and receive the same regulatory scrutiny, the manufacturer of a new ECG diagnostic device should consider the distinction and regulatory pathways for approval between the two types of claims discussed in this paper.

  12. Comprehensive Digital Imaging Network Project At Georgetown University Hospital

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Stauffer, Douglas; Zeman, Robert; Benson, Harold; Wang, Paul; Allman, Robert

    1987-10-01

    The radiology practice is going through rapid changes due to the introduction of state-of-the-art computer-based technologies. For the last twenty years we have witnessed the introduction of many new medical diagnostic imaging systems such as x-ray computed tomography, digital subtraction angiography (DSA), computerized nuclear medicine, single photon emission computed tomography (SPECT), positron emission tomography (PET) and, more recently, computerized digital radiography and nuclear magnetic resonance imaging (MRI). Beyond the imaging systems themselves, there has been a steady introduction of computer-based information systems for radiology departments and hospitals.

  13. Atlas of computerized blood flow analysis in bone disease.

    PubMed

    Gandsman, E J; Deutsch, S D; Tyson, I B

    1983-11-01

    The role of computerized blood flow analysis in routine bone scanning is reviewed. Cases illustrating the technique include proven diagnoses of toxic synovitis, Legg-Perthes disease, arthritis, avascular necrosis of the hip, fractures, benign and malignant tumors, Paget's disease, cellulitis, osteomyelitis, and shin splints. Several examples also show the use of the technique in monitoring treatment. The use of quantitative data from the blood flow, bone uptake phase, and static images suggests specific diagnostic patterns for each of the diseases presented in this atlas. Thus, this technique enables increased accuracy in the interpretation of the radionuclide bone scan.

  14. Recommendations for the standardization and interpretation of the electrocardiogram: part II: Electrocardiography diagnostic statement list: a scientific statement from the American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; the American College of Cardiology Foundation; and the Heart Rhythm Society: endorsed by the International Society for Computerized Electrocardiology.

    PubMed

    Mason, Jay W; Hancock, E William; Gettes, Leonard S; Bailey, James J; Childers, Rory; Deal, Barbara J; Josephson, Mark; Kligfield, Paul; Kors, Jan A; Macfarlane, Peter; Pahlm, Olle; Mirvis, David M; Okin, Peter; Rautaharju, Pentti; Surawicz, Borys; van Herpen, Gerard; Wagner, Galen S; Wellens, Hein

    2007-03-13

    This statement provides a concise list of diagnostic terms for ECG interpretation that can be shared by students, teachers, and readers of electrocardiography. This effort was motivated by the existence of multiple automated diagnostic code sets containing imprecise and overlapping terms. An intended outcome of this statement list is greater uniformity of ECG diagnosis and a resultant improvement in patient care. The lexicon includes primary diagnostic statements, secondary diagnostic statements, modifiers, and statements for the comparison of ECGs. This diagnostic lexicon should be reviewed and updated periodically.

  15. Recommendations for the standardization and interpretation of the electrocardiogram: part II: electrocardiography diagnostic statement list a scientific statement from the American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; the American College of Cardiology Foundation; and the Heart Rhythm Society Endorsed by the International Society for Computerized Electrocardiology.

    PubMed

    Mason, Jay W; Hancock, E William; Gettes, Leonard S; Bailey, James J; Childers, Rory; Deal, Barbara J; Josephson, Mark; Kligfield, Paul; Kors, Jan A; Macfarlane, Peter; Pahlm, Olle; Mirvis, David M; Okin, Peter; Rautaharju, Pentti; Surawicz, Borys; van Herpen, Gerard; Wagner, Galen S; Wellens, Hein

    2007-03-13

    This statement provides a concise list of diagnostic terms for ECG interpretation that can be shared by students, teachers, and readers of electrocardiography. This effort was motivated by the existence of multiple automated diagnostic code sets containing imprecise and overlapping terms. An intended outcome of this statement list is greater uniformity of ECG diagnosis and a resultant improvement in patient care. The lexicon includes primary diagnostic statements, secondary diagnostic statements, modifiers, and statements for the comparison of ECGs. This diagnostic lexicon should be reviewed and updated periodically.

  16. A Computer-Based Diagnostic/Information Patient Management System for Isolated Environments. MEDIC Ten Years Later

    DTIC Science & Technology

    1987-02-04

    corpsmen for application to computerized medical diagnosis. Proceedings of the 6th Congress of the International Ergonomics Association, 1976, p... Internal medicine; Cardiovascular diseases; Pulmonary diseases; Abdomen; Pain; Mental disorders; Psychiatry; Oral diseases; Dentistry; Medical

  17. Decision support in psychiatry – a comparison between the diagnostic outcomes using a computerized decision support system versus manual diagnosis

    PubMed Central

    Bergman, Lars G; Fors, Uno GH

    2008-01-01

    Background Correct diagnosis in psychiatry may be improved by novel diagnostic procedures. Computerized Decision Support Systems (CDSS) are suggested to be able to improve diagnostic procedures, but some studies indicate possible problems. Therefore, it could be important to investigate CDSS systems with regard to their feasibility to improve diagnostic procedures as well as to save time. Methods This study was undertaken to compare the traditional 'paper and pencil' diagnostic method SCID1 with the computer-aided diagnostic system CB-SCID1 to ascertain processing time and accuracy of diagnoses suggested. 63 clinicians volunteered to participate in the study and to solve two paper-based cases using either a CDSS or manually. Results No major difference between paper and pencil and computer-supported diagnosis was found. Where a difference was found, it was in favour of paper and pencil. For example, a significantly shorter time was found for paper and pencil for the difficult case, as compared to computer support. A significantly higher number of correct diagnoses was found in the difficult case for the diagnosis 'Depression' using the paper and pencil method. Although a majority of the clinicians found the computer method supportive and easy to use, it took a longer time and yielded fewer correct diagnoses than with paper and pencil. Conclusion This study could not detect any major difference in diagnostic outcome between traditional paper and pencil methods and computer support for psychiatric diagnosis. Where there were significant differences, traditional paper and pencil methods were better than the tested CDSS, and thus we conclude that CDSS for diagnostic procedures may interfere with diagnostic accuracy. A limitation was that most clinicians had not previously used the CDSS system under study. The results of this study, however, confirm that CDSS development for diagnostic purposes in psychiatry still has much to address before such systems can be used for routine clinical purposes. PMID:18261222

  18. TFTR diagnostic control and data acquisition system

    NASA Astrophysics Data System (ADS)

    Sauthoff, N. R.; Daniels, R. E.

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  19. TFTR diagnostic control and data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauthoff, N.R.; Daniels, R.E.; PPL Computer Division

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  20. Evaluation of Spontaneous Spinal Cerebrospinal Fluid Leaks Disease by Computerized Image Processing.

    PubMed

    Yıldırım, Mustafa S; Kara, Sadık; Albayram, Mehmet S; Okkesim, Şükrü

    2016-05-17

    Spontaneous Spinal Cerebrospinal Fluid Leaks (SSCFL) is a disease based on tears in the dura mater. Due to widespread symptoms and the low frequency of the disease, diagnosis is problematic. Diagnostic lumbar puncture is commonly used for diagnosing SSCFL, though it is invasive and may cause pain, inflammation or new leakages. T2-weighted MR imaging is also used for diagnosis; however, the literature on T2-weighted MRI states that its findings can be erroneous when differentiating diseased from control subjects. Another technique for diagnosis is CT-myelography, but this has been suggested to be less successful than T2-weighted MRI, and it requires an initial lumbar puncture. This study aimed to develop an objective, computerized numerical analysis method using noninvasive routine Magnetic Resonance Images that can be used in the evaluation and diagnosis of SSCFL disease. Brain boundaries were automatically detected using methods of mathematical morphology, and a distance transform was employed. According to normalized distances, average densities of certain sites were proportioned and a numerical criterion related to cerebrospinal fluid distribution was calculated. The developed method was able to differentiate between 14 patients and 14 control subjects significantly, with p = 0.0088 and d = 0.958. Also, the pre- and post-treatment MRI of four patients was obtained and analyzed; the results were statistically distinguishable (p = 0.0320, d = 0.853). An original, noninvasive and objective diagnostic test based on computerized image processing has been developed for evaluation of SSCFL. To our knowledge, this is the first computerized image processing method for evaluation of the disease. Discrimination between patients and controls shows the validity of the method. Also, the post-treatment changes observed in four patients support this verdict.
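
    The distance-transform step mentioned above can be sketched with a classic two-pass algorithm. This minimal version computes Manhattan distances to the nearest background pixel on a toy binary mask; the study's actual pipeline (morphological boundary detection, density normalization over the normalized distances) is not reproduced here.

```python
def manhattan_distance_transform(mask):
    """Two-pass Manhattan distance transform: for each pixel, the distance to
    the nearest background (0) pixel. mask is a list of rows of 0/1 values."""
    h, w = len(mask), len(mask[0])
    inf = h + w  # upper bound on any Manhattan distance in the grid
    d = [[0 if mask[y][x] == 0 else inf for x in range(w)] for y in range(h)]
    # Forward pass: propagate distances from the top-left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # Backward pass: propagate distances from the bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

# Toy 5x5 mask with a background border: the center is 2 steps from background.
mask = [[0] * 5,
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0] * 5]
dt = manhattan_distance_transform(mask)
print(dt[2][2])  # -> 2
```

    Normalizing such distances against the brain boundary is what lets the study compare fluid-density profiles at anatomically comparable depths across subjects.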

  1. Integration of On-Line and Off-Line Diagnostic Algorithms for Aircraft Engine Health Management

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2007-01-01

    This paper investigates the integration of on-line and off-line diagnostic algorithms for aircraft gas turbine engines. The on-line diagnostic algorithm is designed for in-flight fault detection. It continuously monitors engine outputs for anomalous signatures induced by faults. The off-line diagnostic algorithm is designed to track engine health degradation over the lifetime of an engine. It estimates engine health degradation periodically over the course of the engine's life. The estimate generated by the off-line algorithm is used to update the on-line algorithm. Through this integration, the on-line algorithm becomes aware of engine health degradation, and its effectiveness to detect faults can be maintained while the engine continues to degrade. The benefit of this integration is investigated in a simulation environment using a nonlinear engine model.
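
    The integration described, where the off-line degradation estimate keeps the on-line residual check from flagging normal wear as a fault, can be sketched as a simple threshold test. The numbers below are illustrative, not outputs of any engine model.

```python
def detect_fault(measured, predicted, threshold, degradation_offset=0.0):
    """On-line residual check: flag a fault when the measurement deviates from
    the model prediction by more than the threshold, after subtracting the
    slow health-degradation estimate supplied by the off-line algorithm."""
    residual = measured - predicted - degradation_offset
    return abs(residual) > threshold

# Without the off-line update, 4 units of normal degradation look like a fault.
print(detect_fault(104.0, 100.0, threshold=3.0))                          # -> True
# With the degradation estimate folded in, the same reading raises no alarm.
print(detect_fault(104.0, 100.0, threshold=3.0, degradation_offset=4.0))  # -> False
```

    Periodically refreshing `degradation_offset` from the off-line estimator is the integration mechanism the abstract describes: it preserves the fault-detection margin as the healthy baseline drifts.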

  2. Computerized bioterrorism education and training for nurses on bioterrorism attack agents.

    PubMed

    Nyamathi, Adeline M; Casillas, Adrian; King, Major L; Gresham, Louise; Pierce, Elaine; Farb, Daniel; Wiechmann, Carrie

    2010-08-01

    Biological agents have the ability to cause large-scale mass casualties. For this reason, their likely use in future terrorist attacks is a concern for national security. Recent studies show that nurses are ill prepared to deal with agents used in biological warfare. Achieving a goal for bioterrorism preparedness is directly linked to comprehensive education and training that enables first-line responders such as nurses to diagnose infectious agents rapidly. The study evaluated participants' responses to biological agents using a computerized bioterrorism education and training program versus a standard bioterrorism education and training program. Both programs improved participants' ability to complete and solve case studies involving the identification of specific biological agents. Participants in the computerized bioterrorism education and training program were more likely to solve the cases critically without reliance on expert consultants. However, participants in the standard bioterrorism education and training program reduced the use of unnecessary diagnostic tests.

  3. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. Diagnostic Accuracy of a New Cardiac Electrical Biomarker for Detection of Electrocardiogram Changes Suggestive of Acute Myocardial Ischemic Injury

    PubMed Central

    Schreck, David M; Fishberg, Robert D

    2014-01-01

    Objective A new cardiac “electrical” biomarker (CEB) for detection of 12-lead electrocardiogram (ECG) changes indicative of acute myocardial ischemic injury has been identified. The objective was to test the CEB's diagnostic accuracy. Methods This is a blinded, observational, retrospective, case-control, noninferiority study. A total of 508 ECGs obtained from archived digital databases were interpreted by cardiologist and emergency physician (EP) blinded reference standards for the presence of acute myocardial ischemic injury. The CEB was constructed from three ECG cardiac monitoring leads using nonlinear modeling. Comparative active controls included ST voltage changes (J-point, ST area under curve) and a computerized ECG interpretive algorithm (ECGI). A training set of 141 ECGs identified CEB cutoffs by receiver-operating-characteristic (ROC) analysis. A test set of 367 ECGs was analyzed for validation. Poor-quality ECGs were excluded. Sensitivity, specificity, and negative and positive predictive values were calculated with 95% confidence intervals. Adjudication was performed by consensus. Results The CEB demonstrated noninferiority to all active controls by hypothesis testing. CEB adjudication demonstrated 85.3–94.4% sensitivity, 92.5–93.0% specificity, 93.8–98.6% negative predictive value, and 74.6–83.5% positive predictive value. The CEB was superior to all active controls in the EP analysis, and to ST area under curve and ECGI in the cardiologist analysis. Conclusion The CEB detects acute myocardial ischemic injury with high diagnostic accuracy. The CEB is constructed from three ECG leads on the cardiac monitor and displayed instantly, allowing immediate, cost-effective identification of patients with acute ischemic injury during cardiac rhythm monitoring. PMID:24118724
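The accuracy measures reported in this record (sensitivity, specificity, PPV, NPV with 95% confidence intervals) can be computed from a 2x2 contingency table. A minimal sketch follows, using Wilson score intervals and hypothetical counts (the study's raw counts are not given in the abstract):

```python
# Sensitivity, specificity, PPV, NPV with 95% Wilson score confidence
# intervals from a 2x2 table. The counts below are made up for illustration.
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def diagnostic_metrics(tp, fp, fn, tn):
    # Each entry is (point estimate, (ci_low, ci_high)).
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical counts for a test set of roughly the study's size:
m = diagnostic_metrics(tp=85, fp=20, fn=10, tn=250)
print(round(m["sensitivity"][0], 3))
```

The Wilson interval is one common choice here; the study does not state which interval method was used, so treat this as a generic sketch rather than a reproduction of its statistics.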

  5. Diagnostic work-up and loss of tuberculosis suspects in Jogjakarta, Indonesia.

    PubMed

    Ahmad, Riris Andono; Matthys, Francine; Dwihardiani, Bintari; Rintiswati, Ning; de Vlas, Sake J; Mahendradhata, Yodi; van der Stuyft, Patrick

    2012-02-15

    Early and accurate diagnosis of pulmonary tuberculosis (TB) is critical for successful TB control. To assist in the diagnosis of smear-negative pulmonary TB, the World Health Organisation (WHO) recommends the use of a diagnostic algorithm. Our study evaluated the implementation of the national tuberculosis programme's diagnostic algorithm in routine health care settings in Jogjakarta, Indonesia. The diagnostic algorithm is based on the WHO TB diagnostic algorithm, which had already been implemented in the health facilities. We prospectively documented the diagnostic work-up of all new tuberculosis suspects until a diagnosis was reached. We used clinical audit forms to record each step chronologically. Data on the patient's gender, age, symptoms, examinations (types, dates, and results), and final diagnosis were collected. Information was recorded for 754 TB suspects; 43.5% of whom were lost during the diagnostic work-up in health centres, 0% in lung clinics. Among the TB suspects who completed diagnostic work-ups, 51.1% and 100.0% were diagnosed without following the national TB diagnostic algorithm in health centres and lung clinics, respectively. However, the work-up in the health centres and lung clinics generally conformed to international standards for tuberculosis care (ISTC). Diagnostic delays were significantly longer in health centres compared to lung clinics. The high rate of patients lost in health centres needs to be addressed through the implementation of TB suspect tracing and better programme supervision. The national TB algorithm needs to be revised and differentiated according to the level of care.

  6. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
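The kind of benchmarking described here scores a DA's fault reports against the injected ground truth of each test scenario. A simple sketch of such scoring, with hypothetical scenario records and field names (the actual ADAPT framework computes a richer metric set):

```python
# Score a diagnostic algorithm's reports against injected ground truth.
# Scenario records and field names are hypothetical illustrations.

def score_scenarios(scenarios):
    tp = fp = fn = tn = 0
    delays = []
    for s in scenarios:
        injected = s["fault_time"] is not None     # was a fault injected?
        detected = s["detect_time"] is not None    # did the DA report one?
        if injected and detected:
            tp += 1
            delays.append(s["detect_time"] - s["fault_time"])
        elif injected:
            fn += 1   # missed detection
        elif detected:
            fp += 1   # false alarm on a nominal run
        else:
            tn += 1   # correct no-fault report
    return {
        "detection_rate": tp / max(tp + fn, 1),
        "false_alarm_rate": fp / max(fp + tn, 1),
        "mean_detection_delay": sum(delays) / len(delays) if delays else None,
    }

runs = [
    {"fault_time": 10.0, "detect_time": 12.5},   # detected, 2.5 s late
    {"fault_time": 30.0, "detect_time": None},   # missed detection
    {"fault_time": None, "detect_time": 40.0},   # false alarm
    {"fault_time": None, "detect_time": None},   # correct no-fault report
]
print(score_scenarios(runs))
```

Running identical scenario sets through each DA and comparing these scores is what makes the benchmarking "systematic": every algorithm is judged on the same faults, the same data, and the same metrics.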

  7. The Development of MUMPS-Based Rehabilitation Psychology Computer Applications.

    ERIC Educational Resources Information Center

    Dutro, Kenneth R.

    The use of computer assisted programs in career exploration and occupational information is well documented. Various phases of the vocational counseling process, i.e., diagnostic evaluation, program planning, career exploration, case management, and program evaluation, offer similarly promising opportunities for computerization. Using the…

  8. The Autism Diagnostic Observation Schedule, Module 4: Revised Algorithm and Standardized Severity Scores

    ERIC Educational Resources Information Center

    Hus, Vanessa; Lord, Catherine

    2014-01-01

    The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4…

  9. Improved Diagnostic Validity of the ADOS Revised Algorithms: A Replication Study in an Independent Sample

    ERIC Educational Resources Information Center

    Oosterling, Iris; Roos, Sascha; de Bildt, Annelies; Rommelse, Nanda; de Jonge, Maretha; Visser, Janne; Lappenschaar, Martijn; Swinkels, Sophie; van der Gaag, Rutger Jan; Buitelaar, Jan

    2010-01-01

    Recently, Gotham et al. ("2007") proposed revised algorithms for the Autism Diagnostic Observation Schedule (ADOS) with improved diagnostic validity. The aim of the current study was to replicate predictive validity, factor structure, and correlations with age and verbal and nonverbal IQ of the ADOS revised algorithms for Modules 1 and 2…

  10. E-waste Management and Refurbishment Prediction (EMARP) Model for Refurbishment Industries.

    PubMed

    Resmi, N G; Fasila, K A

    2017-10-01

    This paper proposes a novel algorithm, the E-waste Management And Refurbishment Prediction (EMARP) model, for establishing a standard methodology to manage and refurbish e-waste, which refurbishing industries can adopt to improve their performance. Waste management, particularly e-waste management, is a serious issue nowadays. Computerization has entered waste management in several ways: much of it has gone into planning the waste collection, recycling, and disposal process, and into managing the documents and reports related to waste management. This paper proposes a computerized model to make predictions for e-waste refurbishment. All possibilities for reusing the common components among the collected e-waste samples are predicted, thus minimizing wastage. Simulation of the model has been carried out to analyse the accuracy of the predictions made by the system. The model can be scaled to accommodate the real-world scenario.

  11. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    PubMed

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  12. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy

    PubMed Central

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-01-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857

  13. Exploring the Impact of Technology on Communication in Medicine and Health.

    ERIC Educational Resources Information Center

    Auyash, Stewart

    1984-01-01

    Summarizes some events in the use of medical technology in relation to the spoken word and doctor-patient communication. Reports on a new computerized diagnostic system (PROMIS-the Problem Oriented Medical Record System) and discusses its impact on health communication and medical education. (PD)

  14. Diagnostic work-up and loss of tuberculosis suspects in Jogjakarta, Indonesia

    PubMed Central

    2012-01-01

    Background Early and accurate diagnosis of pulmonary tuberculosis (TB) is critical for successful TB control. To assist in the diagnosis of smear-negative pulmonary TB, the World Health Organisation (WHO) recommends the use of a diagnostic algorithm. Our study evaluated the implementation of the national tuberculosis programme's diagnostic algorithm in routine health care settings in Jogjakarta, Indonesia. The diagnostic algorithm is based on the WHO TB diagnostic algorithm, which had already been implemented in the health facilities. Methods We prospectively documented the diagnostic work-up of all new tuberculosis suspects until a diagnosis was reached. We used clinical audit forms to record each step chronologically. Data on the patient's gender, age, symptoms, examinations (types, dates, and results), and final diagnosis were collected. Results Information was recorded for 754 TB suspects; 43.5% of whom were lost during the diagnostic work-up in health centres, 0% in lung clinics. Among the TB suspects who completed diagnostic work-ups, 51.1% and 100.0% were diagnosed without following the national TB diagnostic algorithm in health centres and lung clinics, respectively. However, the work-up in the health centres and lung clinics generally conformed to international standards for tuberculosis care (ISTC). Diagnostic delays were significantly longer in health centres compared to lung clinics. Conclusions The high rate of patients lost in health centres needs to be addressed through the implementation of TB suspect tracing and better programme supervision. The national TB algorithm needs to be revised and differentiated according to the level of care. PMID:22333111

  15. Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Jensen, David; Poll, Scott

    2009-01-01

    Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support to perform large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking of diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS), developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis to the testing of diagnostic software and is used to provide performance assessment for different diagnostic algorithms.

  16. The Development of STAR Early Literacy. Report.

    ERIC Educational Resources Information Center

    School Renaissance Inst., Inc., Madison, WI.

    This report describes the development and testing of a computerized early literacy diagnostic assessment for students in prekindergarten to grade 3 that can measure skills across a variety of preliteracy and reading domains. The STAR Early Literacy assessment was developed by a team of more than 50 people, including literacy experts,…

  17. Item Selection in Multidimensional Computerized Adaptive Testing--Gaining Information from Different Angles

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua

    2011-01-01

    Over the past thirty years, obtaining diagnostic information from examinees' item responses has become an increasingly important feature of educational and psychological testing. The objective can be achieved by sequentially selecting multidimensional items to fit the class of latent traits being assessed, and therefore Multidimensional…

  18. An Antibiotic Resource Program for Students of the Health Professions.

    ERIC Educational Resources Information Center

    Tritz, Gerald J.

    1986-01-01

    Provides a description of a computer program developed to supplement instruction in testing of antibiotics on clinical isolates of microorganisms. The program is a simulation and database for interpretation of experimental data designed to enhance laboratory learning and prepare future physicians to use computerized diagnostic instrumentation and…

  19. Student Modeling and Ab Initio Language Learning.

    ERIC Educational Resources Information Center

    Heift, Trude; Schulze, Mathias

    2003-01-01

    Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German: "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning…

  20. Using three-dimensional-computerized tomography as a diagnostic tool for temporo-mandibular joint ankylosis: a case report.

    PubMed

    Kao, S Y; Chou, J; Lo, J; Yang, J; Chou, A P; Joe, C J; Chang, R C

    1999-04-01

    Roentgenographic examination has long been a useful diagnostic tool for temporo-mandibular joint (TMJ) disease. The methods include TMJ tomography, panoramic radiography and computerized tomography (CT) scan with or without injection of contrast media. Recently, three-dimensional CT (3D-CT), reconstructed from the two-dimensional image of a CT scan to simulate the soft tissue or bony structure of the real target, was proposed. In this report, a case of TMJ ankylosis due to traumatic injury is presented. 3D-CT was employed as one of the presurgical roentgenographic diagnostic tools. The conventional radiographic examination including panoramic radiography and tomography showed lesions in both sides of the mandible. CT scanning further suggested that the right-sided lesion was more severe than that on the left. With 3D-CT image reconstruction the size and extent of the lesions were clearly observable. The decision was made to proceed with an initial surgical approach on the right side. With condylectomy and condylar replacement using an autogenous costochondral graft on the right side, the range of mouth opening improved significantly. In this case report, 3D-CT demonstrates its advantages as a tool for the correct and precise diagnosis of TMJ ankylosis.

  1. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
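The genetic-algorithm half of the hybrid can be illustrated with a toy search for a single sensor bias. The "engine model" below is a stand-in vector of healthy-engine predictions; the real method pairs the GA with a neural-network health estimate and a full turbofan simulation:

```python
# Toy genetic-algorithm search for a sensor bias, in the spirit of the hybrid
# approach. The model, bias value, and GA settings are all illustrative.
import random

random.seed(0)

TRUE_BIAS = 1.5
model_output = [5.0, 6.0, 7.0]                    # healthy-engine predictions
measured = [m + TRUE_BIAS for m in model_output]  # sensor readings with bias

def fitness(bias):
    # Negative squared error between bias-corrected readings and the model;
    # the GA maximizes this, i.e. minimizes the residual.
    return -sum((m - bias - p) ** 2 for m, p in zip(measured, model_output))

population = [random.uniform(-5, 5) for _ in range(20)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # selection: keep the fittest
    children = [p + random.gauss(0, 0.2) for p in parents for _ in range(3)]
    population = parents + children                # elitism + Gaussian mutation

best = max(population, key=fitness)
print(round(best, 2))   # converges close to the injected bias of 1.5
```

The ranking of multiple candidate solutions mentioned in the abstract falls out naturally here: the sorted population itself is an ordered list of bias hypotheses, which is what allows trading off false alarms against missed detections.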

  2. Comparative analysis of expert and machine-learning methods for classification of body cavity effusions in companion animals.

    PubMed

    Hotz, Christine S; Templeton, Steven J; Christopher, Mary M

    2005-03-01

    A rule-based expert system using the CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous effusions, and hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that produced by 2 machine-learning methods: Rosetta, a rough-sets algorithm, and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California-Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to test and train RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.
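A rule-based effusion classifier of the kind compared here can be sketched in a few lines. The cutoffs below are illustrative placeholders in the general style of fluid-analysis rules, not the study's actual 17 CLIPS rules:

```python
# Minimal rule-based sketch in the spirit of the CLIPS system: classify an
# effusion from total protein and nucleated cell count. The cutoff values
# are hypothetical placeholders, not the published rule set.

def classify_effusion(protein_g_dl, cells_per_ul, hematocrit_pct=0.0):
    if hematocrit_pct > 10:                         # grossly bloody sample
        return "hemorrhagic effusion"
    if protein_g_dl < 2.5 and cells_per_ul < 1500:  # low protein, low cells
        return "transudate"
    if protein_g_dl >= 3.0 and cells_per_ul >= 7000:  # high protein, high cells
        return "exudate"
    return "modified transudate"                    # everything in between

print(classify_effusion(1.8, 800))       # transudate
print(classify_effusion(4.2, 20000))     # exudate
print(classify_effusion(3.0, 3000))      # modified transudate
```

Rule induction (RIPPER's approach) learns exactly this kind of threshold rule from labeled cases instead of encoding expert-chosen cutoffs by hand, which is why the study could swap RIPPER's learned rules into the CLIPS application.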

  3. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial

    PubMed Central

    Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-01-01

    Background Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Methods Inpatients aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Results Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). Conclusions A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance. PMID:28645191

  4. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial.

    PubMed

    Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-09-01

    Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance.

  5. A Computerized Decision Support System for Depression in Primary Care

    PubMed Central

    Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha

    2009-01-01

    Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065

  6. A computerized decision support system for depression in primary care.

    PubMed

    Kurian, Benji T; Trivedi, Madhukar H; Grannemann, Bruce D; Claassen, Cynthia A; Daly, Ella J; Sunderajan, Prabha

    2009-01-01

    In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS(17)) evaluated by an independent rater. Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS(17), than patients treated with usual care (P < .001). The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. clinicaltrials.gov Identifier: NCT00551083.

  7. The Added Value of the Combined Use of the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule: Diagnostic Validity in a Clinical Swedish Sample of Toddlers and Young Preschoolers

    ERIC Educational Resources Information Center

    Zander, Eric; Sturm, Harald; Bölte, Sven

    2015-01-01

    The diagnostic validity of the new research algorithms of the Autism Diagnostic Interview-Revised and the revised algorithms of the Autism Diagnostic Observation Schedule was examined in a clinical sample of children aged 18-47 months. Validity was determined for each instrument separately and their combination against a clinical consensus…

  8. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1977-01-01

    An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
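The cycle-split idea can be sketched for a single two-phase intersection: given a fixed cycle length and a discrete-time queue model, search the green split that minimizes the residual queues. The arrival and service rates below are invented for the example and are not from the thesis:

```python
# Illustrative cycle-split search for one two-phase intersection under a
# simple discrete-time queue model. All rates and lengths are hypothetical.

CYCLE = 60.0          # seconds per signal cycle
SERVICE = 0.5         # vehicles served per second of green, per approach

def residual_queue(green_s, arrivals_per_s):
    """Vehicles left queued on one approach after one cycle."""
    arrived = arrivals_per_s * CYCLE
    served = SERVICE * green_s
    return max(arrived - served, 0.0)

def best_split(arr_ns, arr_ew, step=1.0):
    """Search green time for the north-south phase (the rest goes east-west)."""
    best = None
    for g_ns in [i * step for i in range(1, int(CYCLE / step))]:
        g_ew = CYCLE - g_ns
        total = residual_queue(g_ns, arr_ns) + residual_queue(g_ew, arr_ew)
        if best is None or total < best[1]:
            best = (g_ns, total)
    return best

g_ns, leftover = best_split(arr_ns=0.30, arr_ew=0.10)
print(g_ns)   # the heavier north-south demand earns the larger share of green
```

The thesis solves this off-line over a whole network with interacting intersections; this sketch only shows why the objective (queue length plus delay) pushes green time toward the busier approach.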

  9. Computerized tomography versus magnetic resonance imaging: a comparative study in hypothalamic-pituitary and parasellar pathology.

    PubMed

    Webb, S M; Ruscalleda, J; Schwarzstein, D; Calaf-Alsina, J; Rovira, A; Matos, G; Puig-Domingo, M; de Leiva, A

    1992-05-01

    We wished to analyse the relative value of computerized tomography and magnetic resonance in patients referred for evaluation of pituitary and parasellar lesions. We performed a separate evaluation by two independent neuroradiologists of computerized tomography and magnetic resonance images ordered numerically and anonymously, with no clinical data available. We studied 40 patients submitted for hypothalamic-pituitary study; 31 were carried out preoperatively, of which histological confirmation later became available in 14. The remaining nine patients were evaluated postoperatively. Over 40 parameters relating to the bony margins, cavernous sinuses, carotid arteries, optic chiasm, suprasellar cisterns, pituitary, pituitary stalk and extension of the lesion were evaluated. These reports were compared with the initial ones offered when the scans were ordered, and with the final diagnosis. Concordance between initial computerized tomography and magnetic resonance was observed in 27 cases (67.5%); among the discordant cases computerized tomography showed the lesion in two, magnetic resonance in 10, while in the remaining case reported to harbour a microadenoma on computerized tomography the differential diagnosis between a true TSH-secreting microadenoma and pituitary resistance to thyroid hormones is still unclear. Both neuroradiologists coincided in their reports in 32 patients (80%); when the initial report was compared with those of the neuroradiologists, concordance was observed with at least one of them in 34 instances (85%). Discordant results were observed principally in microadenomas secreting ACTH or PRL and in delayed puberty. In the eight patients with Cushing's disease (histologically confirmed in six) magnetic resonance was positive in five and computerized tomography in two; the abnormal image correctly identified the side of the lesion at surgery. In patients referred for evaluation of Cushing's syndrome or hyperprolactinaemia (due to microadenomas) or after surgery, magnetic resonance is clearly preferable to computerized tomography. In macroadenomas both scans are equally diagnostic but magnetic resonance offers more information on pituitary morphology and neighbouring structures. Nevertheless, there are cases in which the results of computerized tomography and magnetic resonance will complement each other, since different parameters are analysed with each examination and discordant results are encountered.

  10. [Multi-centre clinical assessment of the Russian language version of the Diagnostic Interview for Psychoses].

    PubMed

    Smirnova, D A; Petrova, N N; Pavlichenko, A V; Martynikhin, I A; Dorofeikova, M V; Eremkin, V I; Izmailova, O V; Osadshiy, Yu Yu; Romanov, D V; Ubeikon, D A; Fedotov, I A; Sheifer, M S; Shustov, A D; Yashikhina, A A; Clark, M; Badcock, J; Watterreus, A; Morgan, V; Jablensky, A

    2018-01-01

The Diagnostic Interview for Psychoses (DIP) was developed to enhance the quality of diagnostic assessment of psychotic disorders. The aim of the study was the adaptation of the Russian language version and evaluation of its validity and reliability. Ninety-eight patients with psychotic disorders (89 video recordings) were assessed by 12 interviewers using the Russian version of DIP at 7 clinical sites (in 6 cities of the Russian Federation). DIP ratings on 32 cases of a randomized case sample were made by 9 interviewers and the inter-rater reliability was compared with the researchers' DIP ratings. Overall pairwise agreement and Cohen's kappa were calculated. Diagnostic validity was evaluated by comparing the researchers' ratings using the Russian version of DIP with the 'gold standard' ratings of the same 62 clinical cases from the Western Australia Family Study of Schizophrenia (WAFSS). The mean duration of the interview was 47±21 minutes. The kappa statistic demonstrated substantial to almost perfect agreement on the majority of DIP items (84.54%) and substantial agreement for the ICD-10 diagnoses generated by the DIP computer diagnostic algorithm (κ=0.68; 95% CI 0.53-0.93). The level of agreement on the researchers' diagnoses was considerably lower (κ=0.31; 95% CI 0.06-0.56). The agreement on affective and positive psychotic symptoms was significantly higher than agreement on negative symptoms (F(2,44)=20.72, p<0.001, η2=0.485). The diagnostic validity of the Russian language version of DIP was confirmed by 73% (45/62) of the Russian DIP diagnoses matching the original WAFSS diagnoses. Regarding mismatched diagnoses, the medical documentation recorded a diagnosis of F20 schizophrenia in 80 cases, compared with F20 diagnoses by the researchers in only 68 patients and in 62 of the DIP computerized diagnostic outputs. The reported level of subjective difficulties experienced when using the DIP was low to moderate. 
The results of the study confirm the validity and reliability of the Russian version of the DIP for evaluating psychotic disorders. DIP can be recommended for use in education and training, clinical practice and research as an important diagnostic resource.
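Cohen's kappa, the agreement statistic reported throughout the record above, corrects raw agreement for the agreement expected by chance. A minimal sketch, using illustrative ICD-10 labels rather than the study's data:

```python
# Cohen's kappa for two raters; labels below are illustrative only.

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two parallel label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from marginal frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters assigning ICD-10 codes to ten cases.
a = ["F20", "F20", "F25", "F20", "F31", "F20", "F25", "F20", "F31", "F20"]
b = ["F20", "F20", "F25", "F31", "F31", "F20", "F20", "F20", "F31", "F20"]
print(round(cohens_kappa(a, b), 2))  # → 0.64
```

By the commonly used Landis-Koch bands, values of 0.61-0.80 are "substantial" and above 0.80 "almost perfect", which is how the κ=0.68 above is characterized.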

  11. New Autism Diagnostic Interview-Revised Algorithms for Toddlers and Young Preschoolers from 12 to 47 Months of Age

    ERIC Educational Resources Information Center

    Kim, So Hyun; Lord, Catherine

    2012-01-01

    Autism Diagnostic Interview-Revised (Rutter et al. in "Autism diagnostic interview-revised." Western Psychological Services, Los Angeles, 2003) diagnostic algorithms specific to toddlers and young preschoolers were created using 829 assessments of children aged from 12 to 47 months with ASD, nonspectrum disorders, and typical development. The…

  12. Usefulness of magnifying endoscopy with narrow-band imaging for diagnosis of depressed gastric lesions

    PubMed Central

    SUMIE, HIROAKI; SUMIE, SHUJI; NAKAHARA, KEITA; WATANABE, YASUTOMO; MATSUO, KEN; MUKASA, MICHITA; SAKAI, TAKESHI; YOSHIDA, HIKARU; TSURUTA, OSAMU; SATA, MICHIO

    2014-01-01

The usefulness of magnifying endoscopy with narrow-band imaging (ME-NBI) for the diagnosis of early gastric cancer is well known; however, there are no established evaluation criteria. The aim of this study was to devise and evaluate a novel diagnostic algorithm for ME-NBI in depressed early gastric cancer. Between August, 2007 and May, 2011, 90 patients with a total of 110 depressed gastric lesions were enrolled in the study. A diagnostic algorithm was devised based on ME-NBI microvascular findings: microvascular irregularity and abnormal microvascular patterns (fine network, corkscrew and unclassified patterns). The diagnostic efficiency of the algorithm for gastric cancer and histological grade was assessed by measuring its mean sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. Furthermore, inter- and intra-observer variation were measured. In the differential diagnosis of gastric cancer from non-cancerous lesions, the mean sensitivity, specificity, PPV, NPV, and accuracy of the diagnostic algorithm were 86.7, 48.0, 94.4, 26.7, and 83.2%, respectively. Furthermore, in the differential diagnosis of undifferentiated adenocarcinoma from differentiated adenocarcinoma, the mean sensitivity, specificity, PPV, NPV, and accuracy of the diagnostic algorithm were 61.6, 86.3, 69.0, 84.8, and 79.1%, respectively. For the ME-NBI final diagnosis using this algorithm, the mean κ values for inter- and intra-observer agreement were 0.50 and 0.77, respectively. In conclusion, the diagnostic algorithm based on ME-NBI microvascular findings was convenient and had high diagnostic accuracy, reliability and reproducibility in the differential diagnosis of depressed gastric lesions. PMID:24649321
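The five performance figures reported in the record above all derive from a 2x2 confusion matrix. A minimal sketch with illustrative counts (not the study's raw data):

```python
# Standard diagnostic test metrics from a 2x2 confusion matrix.
# tp/fp/fn/tn counts below are illustrative, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # cancers correctly called cancer
        "specificity": tn / (tn + fp),   # non-cancers correctly called benign
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = diagnostic_metrics(tp=78, fp=5, fn=12, tn=15)
print({k: round(v, 3) for k, v in m.items()})
```

Note how a high prevalence of cancer in the sample yields a high PPV and a low NPV even at moderate specificity, matching the asymmetry (94.4% vs 26.7%) in the figures above.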

  13. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers.

    PubMed

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiogram, cardiac computerized tomography scan, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is grounded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and proved effective at heart disease detection.

  14. Computerized algorithms for partial cuts

    Treesearch

    R.L. Ernst; S.L. Stout

    1991-01-01

    Stand density, stand structure (diameter distribution), and species composition are all changed by intermediate treatments in forest stands. To use computer stand-growth simulators to assess the effects of different treatments on stand growth and development, users must be able to duplicate silviculturally realistic treatments in the simulator. In this paper, we review...

  15. Alternative diagnostic strategies for coronary artery disease in women: demonstration of the usefulness and efficiency of probability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, J.A.; Wijns, W.; Vanbutsele, R.J.

Alternative strategies using conditional probability analysis for the diagnosis of coronary artery disease (CAD) were examined in 93 infarct-free women presenting with chest pain. Another group of 42 consecutive female patients was prospectively analyzed. For this latter group, the physician had access to the pretest and posttest probability of CAD before coronary angiography. These 135 women all underwent stress electrocardiographic, thallium scintigraphic, and coronary angiographic examination. The pretest and posttest probabilities of coronary disease were derived from a computerized Bayesian algorithm. Probability estimates were calculated by the four following hypothetical strategies: S0, in which history, including risk factors, was considered; S1, in which history and stress electrocardiographic results were considered; S2, in which history and stress electrocardiographic and stress thallium scintigraphic results were considered; and S3, in which history and stress electrocardiographic results were used, but in which stress scintigraphic results were considered only if the poststress probability of CAD was between 10% and 90%, i.e., if a sufficient level of diagnostic certainty could not be obtained with the electrocardiographic results alone. The strategies were compared with respect to accuracy with the coronary angiogram as the standard. For both groups of women, S2 and S3 were found to be the most accurate in predicting the presence or absence of coronary disease (p less than .05). However, it was found with use of S3 that more than one-third of the thallium scintigrams could have been avoided without loss of accuracy. It was also found that diagnostic catheterization performed to exclude CAD as a diagnosis could have been avoided in half of the patients without loss of accuracy. (ABSTRACT TRUNCATED AT 250 WORDS)
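Sequential strategies such as S1-S3 update a pretest probability with each test result via Bayes' theorem, most conveniently in odds form with a likelihood ratio. A sketch of one update step; the sensitivity and specificity values are assumed for illustration, not taken from the study:

```python
# Bayesian post-test probability for one diagnostic test result.
# Sensitivity/specificity values below are assumed, for illustration.

def posttest_probability(pretest, sensitivity, specificity, positive):
    """Update disease probability given one test result (odds form of Bayes)."""
    if positive:
        lr = sensitivity / (1 - specificity)     # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity     # negative likelihood ratio
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# S3-style rule: order thallium scintigraphy only if the post-ECG
# probability is still in the indeterminate 10%-90% band.
p_ecg = posttest_probability(0.30, 0.65, 0.85, positive=True)
needs_scintigraphy = 0.10 < p_ecg < 0.90
print(round(p_ecg, 2), needs_scintigraphy)  # → 0.65 True
```

This makes the economy of S3 concrete: patients whose post-ECG probability already falls outside the 10-90% band skip the scintigram entirely.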

  16. A genetic algorithm used for solving one optimization problem

    NASA Astrophysics Data System (ADS)

    Shipacheva, E. N.; Petunin, A. A.; Berezin, I. M.

    2017-12-01

A problem of minimizing the length of the blank run of a cutting tool during the cutting of sheet materials into shaped blanks is discussed. This problem arises during the preparation of control programs for computerized numerical control (CNC) machines. A discrete model of the problem is analogous in formulation to the generalized travelling salesman problem, with constraints in the form of precedence conditions determined by the technological features of cutting. A variant of a genetic algorithm for solving this problem is described. The effect of the parameters of the developed algorithm on solution quality for the constrained problem is investigated.
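The core idea can be sketched as a generic permutation genetic algorithm that orders contour entry points so as to shorten the idle travel between cuts. This is a minimal illustration, not the authors' algorithm, and it omits the precedence constraints described in the abstract; the point coordinates are made up:

```python
# Minimal permutation GA for shortening the tool's blank (idle) runs.
# A sketch only: hypothetical points, no precedence constraints.
import random

random.seed(0)

points = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2), (7, 0)]  # contour entry points

def blank_run(order):
    """Total straight-line idle travel between consecutive contours."""
    return sum(
        ((points[a][0] - points[b][0]) ** 2
         + (points[a][1] - points[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:])
    )

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [g for g in p2 if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:          # occasional swap of two positions
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(len(points)), len(points)) for _ in range(30)]
initial_best = min(pop, key=blank_run)
for _ in range(200):
    pop.sort(key=blank_run)             # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
best = min(pop, key=blank_run)
print(best, round(blank_run(best), 2))
```

In the real problem each "point" would be a contour piercing point, and precedence conditions (inner contours before outer ones) would constrain which permutations are feasible.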

  17. Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.

    PubMed

    Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming

    2010-06-01

To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 ± 0.010 and 93.1% ± 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 ± 0.007 versus 0.890 ± 0.008 and 0.788 ± 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance. Copyright RSNA, 2010
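The classifier described above is a logistic regression over seven morphologic features. A minimal sketch of that final scoring step; the feature values and weights are hypothetical, not the study's fitted model:

```python
# Logistic regression scoring step over seven morphologic features.
# Feature values and weights below are hypothetical, for illustration.
import math

def malignancy_probability(features, weights, bias):
    """P(malignant) = sigmoid(w . x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized morphologic features of one segmented lesion
# (e.g. spiculation, angularity, depth-to-width ratio, ...).
lesion = [1.2, 0.4, -0.3, 0.9, 0.1, -1.1, 0.7]
weights = [0.8, 0.5, -0.2, 1.1, 0.3, -0.6, 0.4]
p = malignancy_probability(lesion, weights, bias=-0.5)
print("malignant" if p >= 0.5 else "benign", round(p, 3))
```

In the paper's pipeline the seven features are computed from the CBCG-delineated boundary, which is why boundary quality directly affects the classifier's AUC.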

  18. Online Calibration Methods for the DINA Model with Independent Attributes in CD-CAT

    ERIC Educational Resources Information Center

    Chen, Ping; Xin, Tao; Wang, Chun; Chang, Hua-Hua

    2012-01-01

    Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate the new items continuously. However, until now no reference has publicly become available about online calibration for CD-CAT. Thus, this study investigates the…

  19. Establishing Ongoing, Early Identification Programs for Mental Health Problems in Our Schools: A Feasibility Study

    ERIC Educational Resources Information Center

    Nemeroff, Robin; Levitt, Jessica Mass; Faul, Lisa; Wonpat-Borja, Ahtoy; Bufferd, Sara; Setterberg, Stephen; Jensen, Peter S.

    2008-01-01

The study evaluates the feasibility and effectiveness of several mental health screening and assessment tools in schools. A computerized version of the Diagnostic Interview Schedule for Children-IV proved to be feasible, helping to bridge the gap between mental health providers and the unmet needs of at-risk children.

  20. Enhanced Case Management versus Substance Abuse Treatment Alone among Substance Abusers with Depression

    ERIC Educational Resources Information Center

    Striley, Catherine W.; Nattala, Prasanthi; Ben Abdallah, Arbi; Dennis, Michael L.; Cottler, Linda B.

    2013-01-01

This pilot study evaluated the effectiveness of enhanced case management, an integrated approach to care, for substance abusers with comorbid major depression. One hundred twenty participants admitted to drug treatment who also met Computerized Diagnostic Interview Schedule criteria for major depression at baseline were randomized to…

  1. Identification of Alcohol Disorders at a University Mental Health Centre, Using the CAGE.

    ERIC Educational Resources Information Center

    Ross, Helen E.; Tisdall, Gordon W.

    1994-01-01

    Examined usefulness of CAGE in screening for alcohol use disorders in university students (n=110) attending campus psychiatric health service. Fourteen students were identified as having current alcohol use disorder by means of Computerized Diagnostic Interview Schedule. Results suggest that CAGE is able in this population to detect usually mild…

  2. The Design of an ITS-Based Business Simulation: A New Epistemology for Learning.

    ERIC Educational Resources Information Center

    Gold, Steven C.

    1998-01-01

    Discusses the design and use of intelligent tutoring systems (ITS) for computerized business simulations. Reviews the use of ITS as an instructional technology; presents a model for ITS-based business simulations; examines the user interface and link between the ITS and simulation; and recommends expert-consultant diagnostic testing, and…

  3. ATS-PD: An Adaptive Testing System for Psychological Disorders

    ERIC Educational Resources Information Center

    Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio

    2017-01-01

    The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…

  4. The Theory about CD-CAT Based on FCA and Its Application

    ERIC Educational Resources Information Center

    Shuqun, Yang; Shuliang, Ding; Zhiqiang, Yao

    2009-01-01

Cognitive diagnosis (CD) plays an important role in intelligent tutoring systems. Computerized adaptive testing (CAT) is adaptive, fair, and efficient, which makes it suitable for large-scale examination. Traditional cognitive diagnostic tests need quite a large number of items; an efficient and tailored CAT could be a remedy for this, so the CAT with…

  5. Interpretation Training in Individuals with Generalized Social Anxiety Disorder: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Amir, Nader; Taylor, Charles T.

    2012-01-01

    Objective: To examine the efficacy of a multisession computerized interpretation modification program (IMP) in the treatment of generalized social anxiety disorder (GSAD). Method: The sample comprised 49 individuals meeting diagnostic criteria for GSAD who were enrolled in a randomized, double-blind placebo-controlled trial comparing IMP (n = 23)…

  6. The Development and Evaluation of Listening and Speaking Diagnosis and Remedial Teaching System

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Chang, Cheng-Sian; Lin, Chiou-Yan; Chen, Berlin; Wu, Chia-Hou; Lin, Chien-Yu

    2016-01-01

    In this study, a system was developed to offer adaptive remedial instruction materials to learners of Chinese as a foreign language (CFL). The Chinese Listening and Speaking Diagnosis and Remedial Instruction (CLSDRI) system integrated computerized diagnostic tests and remedial instruction materials to diagnose errors made in listening…

  7. A Pilot Study of a Computerized Decision Support System to Detect Invasive Fungal Infection in Pediatric Hematology/Oncology Patients.

    PubMed

    Bartlett, Adam; Goeman, Emma; Vedi, Aditi; Mostaghim, Mona; Trahair, Toby; O'Brien, Tracey A; Palasanthiran, Pamela; McMullan, Brendan

    2015-11-01

Computerized decision support systems (CDSSs) can provide indication-specific antimicrobial recommendations and approvals as part of hospital antimicrobial stewardship (AMS) programs. The aim of this study was to assess the performance of a CDSS for surveillance of invasive fungal infections (IFIs) in an inpatient hematology/oncology cohort. Between November 1, 2012, and October 31, 2013, pediatric hematology/oncology inpatients diagnosed with an IFI were identified through an audit of the CDSS and confirmed by medical record review. The results were compared to hospital diagnosis-related group (DRG) coding for IFI throughout the same period. A total of 83 patients were prescribed systemic antifungals according to the CDSS for the 12-month period. The CDSS correctly identified 19 patients with IFI on medical record review, compared with 10 patients identified by DRG coding, of whom 9 were confirmed to have IFI on medical record review. CDSS was superior to diagnostic coding in detecting IFI in an inpatient pediatric hematology/oncology cohort. The functionality of CDSS lends itself to inpatient infectious diseases surveillance but depends on prescriber adherence.

  8. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.

  9. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    PubMed

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment-funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in children aged <5 years presenting acutely unwell to primary care. Qualitative methods were applied using semi-structured interviews of 30 UK doctors and nurses working with young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  10. Computerized estimation of compatibility of stressors at work and worker's health characteristics.

    PubMed

    Susnik, J; Bizjak, B; Cestnik, B

    1996-09-01

    A system of computerized estimation of compatibility of stressors at work and worker's health characteristics is presented. Each characteristic is defined and scored on a specific scale. Incompatible workplace characteristics as related to worker's characteristics are singled out and offered to the user for an ergonomic solution. Work on the system started in 1987. This paper deals with the system's further development, which involves a larger number of topics, changes of the algorithm and presentation of an applicative case. Comparison of the system's results with those of medical experts shows that the use of the system tends to improve the thoroughness and consistency of incompatibility evaluations and consequently to make working ability assessment more objective.

  11. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    ERIC Educational Resources Information Center

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  12. LUNGx Challenge for computerized lung nodule classification

    DOE PAGES

    Armato, Samuel G.; Drukker, Karen; Li, Feng; ...

    2016-12-19

The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. Lastly, the continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
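The AUC values used to rank Challenge participants have a simple rank interpretation: the probability that a randomly chosen malignant nodule receives a higher score than a randomly chosen benign one (the Mann-Whitney statistic). A minimal sketch with illustrative scores:

```python
# AUC as the Mann-Whitney probability that a malignant case outscores
# a benign one; scores below are illustrative, not Challenge data.

def auc(benign_scores, malignant_scores):
    wins = sum(
        1.0 if m > b else 0.5 if m == b else 0.0   # ties count half
        for m in malignant_scores
        for b in benign_scores
    )
    return wins / (len(benign_scores) * len(malignant_scores))

benign = [0.1, 0.3, 0.35, 0.5]
malignant = [0.4, 0.6, 0.65, 0.9]
print(auc(benign, malignant))  # → 0.9375
```

On this scale, the 0.50 floor reported for the weakest methods is exactly chance-level ranking, while the radiologists' 0.70-0.85 reflects consistently higher scores for malignant nodules.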

  13. LUNGx Challenge for computerized lung nodule classification

    PubMed Central

    Armato, Samuel G.; Drukker, Karen; Li, Feng; Hadjiiski, Lubomir; Tourassi, Georgia D.; Engelmann, Roger M.; Giger, Maryellen L.; Redmond, George; Farahani, Keyvan; Kirby, Justin S.; Clarke, Laurence P.

    2016-01-01

The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community. PMID:28018939

  14. LUNGx Challenge for computerized lung nodule classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armato, Samuel G.; Drukker, Karen; Li, Feng

The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. Lastly, the continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.

  15. Accuracy of dementia diagnosis: a direct comparison between radiologists and a computerized method.

    PubMed

    Klöppel, Stefan; Stonnington, Cynthia M; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D; Mader, Irina; Mitchell, L Anne; Patel, Ameet C; Roberts, Catherine C; Fox, Nick C; Jack, Clifford R; Ashburner, John; Frackowiak, Richard S J

    2008-11-01

There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65-95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans with accuracy comparable to that of SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice.
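The SVMs compared against radiologists above learn a maximum-margin linear boundary between the two diagnostic groups. A minimal, self-contained sketch of the technique on toy 2-D data (subgradient descent on the regularized hinge loss), not the study's imaging pipeline:

```python
# Linear SVM trained by subgradient descent on the regularized hinge loss.
# Toy 2-D data standing in for high-dimensional scan features.

# Separable toy data: label +1 above the line y = x, -1 below it.
data = [((0, 2), 1), ((1, 3), 1), ((2, 4), 1),
        ((2, 0), -1), ((3, 1), -1), ((4, 2), -1)]

w = [0.0, 0.0]
b = 0.0
eta, lam = 0.1, 0.01  # learning rate and L2 regularization strength

for _ in range(200):  # epochs over the (tiny) training set
    for (x1, x2), label in data:
        margin = label * (w[0] * x1 + w[1] * x2 + b)
        # Weight decay from the L2 regularizer ...
        w = [wi * (1 - eta * lam) for wi in w]
        # ... plus a hinge-loss step whenever the margin is violated.
        if margin < 1:
            w[0] += eta * label * x1
            w[1] += eta * label * x2
            b += eta * label

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(w, b, [predict(*x) for x, _ in data])
```

The portability noted in the abstract follows directly from this form: a trained linear SVM is just the vector (w, b), which can be shipped between centres and applied to new scans without expert input.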

  16. Accuracy of dementia diagnosis—a direct comparison between radiologists and a computerized method

    PubMed Central

    Stonnington, Cynthia M.; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D.; Mader, Irina; Mitchell, L. Anne; Patel, Ameet C.; Roberts, Catherine C.; Fox, Nick C.; Jack, Clifford R.; Ashburner, John; Frackowiak, Richard S. J.

    2008-01-01

    There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65–95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to a radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparably to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice. PMID:18835868
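    The SVM pipeline described above is not published as code; the following is a minimal sketch of the idea, with synthetic 2-D features standing in for the voxel-based imaging features and Pegasos-style sub-gradient training as an illustrative substitute for the authors' actual implementation:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: sub-gradient descent on the L2-regularized
    hinge loss. Labels must be -1/+1. Illustrative only."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # margin violators
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Synthetic two-group "feature" data standing in for imaging features:
# controls clustered around (-2, -2), patients around (+2, +2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (40, 2)), rng.normal(2.0, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

w, b = train_linear_svm(X, y)
pred = predict(w, b, X)
sens = float(np.mean(pred[y == 1] == 1))    # patients labelled as patients
spec = float(np.mean(pred[y == -1] == -1))  # controls labelled as controls
```

On cleanly separated toy data the learned hyperplane classifies both groups perfectly; real scan features overlap, which is where the reported sensitivity/specificity trade-offs come from.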

  17. Diagnostic Utility of the ADI-R and DSM-5 in the Assessment of Latino Children and Adolescents.

    PubMed

    Magaña, Sandy; Vanegas, Sandra B

    2017-05-01

    Latino children in the US are systematically underdiagnosed with Autism Spectrum Disorder (ASD); therefore, it is important that recent changes to the diagnostic process do not exacerbate this pattern of under-identification. Previous research has found that the Autism Diagnostic Interview-Revised (ADI-R) algorithm, based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR), has limitations with Latino children of Spanish-speaking parents. We evaluated whether an ADI-R algorithm based on the new DSM-5 classification for ASD would be more sensitive in identifying Latino children of Spanish-speaking parents who have a clinical diagnosis of ASD. Findings suggest that the DSM-5 algorithm shows better sensitivity than the DSM-IV-TR algorithm for Latino children.

  18. [Relevance of big data for molecular diagnostics].

    PubMed

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable vast data sets. What are the algorithms behind the big data discussion? In principle, high throughput technologies in molecular research already introduced big data, along with the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating the data according to their biological context is extremely important and is not merely a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data have a first-order structure determined by the technology applied and exhibit quantitative characteristics that follow from their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  19. Using Web Speech Technology with Language Learning Applications

    ERIC Educational Resources Information Center

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allow anyone with an Internet connection and Chrome browser to take advantage of…

  20. A Top-Down Approach to Designing the Computerized Adaptive Multistage Test

    ERIC Educational Resources Information Center

    Luo, Xiao; Kim, Doyoung

    2018-01-01

    The top-down approach to designing a multistage test is relatively understudied in the literature and underused in research and practice. This study introduced a route-based top-down design approach that directly sets design parameters at the test level and utilizes the advanced automated test assembly algorithm seeking global optimality. The…

  1. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  2. Comparison of dermatoscopic diagnostic algorithms based on calculation: The ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist and the CASH algorithm in dermatoscopic evaluation of melanocytic lesions.

    PubMed

    Unlu, Ezgi; Akay, Bengu N; Erdem, Cengizhan

    2014-07-01

    Dermatoscopic analysis of melanocytic lesions using the CASH algorithm has rarely been described in the literature. The purpose of this study was to compare the sensitivity, specificity, and diagnostic accuracy rates of the ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist, and the CASH algorithm in the diagnosis and dermatoscopic evaluation of melanocytic lesions on the hairy skin. One hundred and fifteen melanocytic lesions of 115 patients were examined retrospectively using dermatoscopic images and compared with the histopathologic diagnosis. All four dermatoscopic algorithms were applied to every lesion. The ABCD rule of dermatoscopy showed sensitivity of 91.6%, specificity of 60.4%, and diagnostic accuracy of 66.9%. The seven-point checklist showed sensitivity, specificity, and diagnostic accuracy of 87.5, 65.9, and 70.4%, respectively; the three-point checklist 79.1, 62.6, and 66.0%; and the CASH algorithm 91.6, 64.8, and 70.4%. To our knowledge, this is the first study that compares the sensitivity, specificity and diagnostic accuracy of the ABCD rule of dermatoscopy, the three-point checklist, the seven-point checklist, and the CASH algorithm for the diagnosis of melanocytic lesions on the hairy skin. In our study, the ABCD rule of dermatoscopy and the CASH algorithm showed the highest sensitivity for the diagnosis of melanoma. © 2014 Japanese Dermatological Association.
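    All four checklists are ultimately scored against histopathology via the same 2x2 table; a small helper makes the relationship between the reported figures explicit (the counts below are a hypothetical illustration, not the study's raw data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall diagnostic accuracy
    from the four cells of a 2x2 diagnostic contingency table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for 115 lesions: 24 melanomas and 91 benign lesions.
sens, spec, acc = diagnostic_metrics(tp=22, fn=2, tn=55, fp=36)
```

High sensitivity with modest specificity, as reported for the ABCD rule and CASH algorithm, corresponds to few false negatives but many false positives in this table.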

  3. Self-Reported HIV-Positive Status But Subsequent HIV-Negative Test Result Using Rapid Diagnostic Testing Algorithms Among Seven Sub-Saharan African Military Populations

    DTIC Science & Technology

    2017-07-07

    HIV rapid diagnostic tests (RDTs) combined in an algorithm are the current standard for HIV diagnosis in many sub-Saharan African countries, and extensive laboratory testing has confirmed HIV RDTs have excellent sensitivity and specificity. However…

  4. Asymptomatic Emphysematous Pyelonephritis - Positron Emission Tomography Computerized Tomography Aided Diagnostic and Therapeutic Elucidation

    PubMed Central

    Pathapati, Deepti; Shinkar, Pawan Gulabrao; Kumar, Satya Awadhesh; Jha; Dattatreya, Palanki Satya; Chigurupati, Namrata; Chigurupati, Mohana Vamsy; Rao, Vatturi Venkata Satya Prabhakar

    2017-01-01

    The authors report an interesting coincidental unearthing by 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) of a potentially serious medical condition of emphysematous pyelonephritis in a case of nasopharyngeal carcinoma. Management with conservative ureteric stenting and antibiotics achieved a gratifying clinical outcome. PMID:28242985

  5. An Approach towards Ultrasound Kidney Cysts Detection using Vector Graphic Image Analysis

    NASA Astrophysics Data System (ADS)

    Mahmud, Wan Mahani Hafizah Wan; Supriyanto, Eko

    2017-08-01

    This study develops a new approach to the detection of kidney cysts in ultrasound images, covering both single-cyst and multiple-cyst cases. 50 single-cyst images and 25 multiple-cyst images were used to test the developed algorithm. Steps involved in developing this algorithm were vector graphic image formation and analysis, thresholding, binarization, filtering, and a roundness test. Performance evaluation on the 50 single-cyst images gave an accuracy of 92%, while on the 25 multiple-cyst images the accuracy was about 86.89%. This algorithm may be used in developing a computerized system, such as a computer-aided diagnosis system, to help medical experts diagnose kidney cysts.
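    The thresholding, binarization, and roundness-test steps can be sketched as follows. The synthetic disc and square below stand in for segmented ultrasound regions, and the 4·pi·A/P² roundness measure is an assumption based on the common definition, not necessarily the authors' exact formula:

```python
import numpy as np

def binarize(img, thresh):
    """Fixed-threshold binarization (the thresholding/binarization steps)."""
    return (img > thresh).astype(np.uint8)

def area_and_perimeter(mask):
    """Blob area and a perimeter estimate counting 4-connected
    boundary pixels (region pixels touching the background)."""
    padded = np.pad(mask, 1)
    core = padded[1:-1, 1:-1]
    interior = (core & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    area = int(mask.sum())
    return area, area - int(interior.sum())

def roundness(area, perim):
    """4*pi*A / P^2: near 1 for a circle, lower for angular shapes."""
    return 4 * np.pi * area / perim ** 2

# Synthetic stand-in for a segmented frame: a bright cyst-like disc of
# radius 20, versus a square region of comparable area for contrast.
yy, xx = np.mgrid[0:100, 0:100]
disc_img = 200.0 * ((xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2)
disc_area, disc_perim = area_and_perimeter(binarize(disc_img, 100))
square = np.zeros((100, 100), np.uint8)
square[30:65, 30:65] = 1
sq_area, sq_perim = area_and_perimeter(square)
round_disc = roundness(disc_area, disc_perim)
round_square = roundness(sq_area, sq_perim)
```

A roundness threshold then separates cyst-like (round, anechoic) regions from angular segmentation artifacts; the disc scores higher than the square here.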

  6. The Impact of a Line Probe Assay Based Diagnostic Algorithm on Time to Treatment Initiation and Treatment Outcomes for Multidrug Resistant TB Patients in Arkhangelsk Region, Russia.

    PubMed

    Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei

    2016-01-01

    In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of the implementation of line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, with time from first care-seeking visit to initiation of MDR-TB treatment (rather than diagnostic accuracy) as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein-Jensen (LJ) culture for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study: 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm. Among smear-positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ, respectively, p<0.001). In smear-negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation under the LPA-based algorithm: 24 days in smear-positive and 62 days in smear-negative patients. Overall treatment outcomes were better with the LPA-based algorithm than with the culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) than in those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based group (7.6%) than in the culture-based group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms. The results of the study suggest that the introduction of LPA leads to faster MDR-TB diagnosis and earlier treatment initiation, as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays to truly optimize the impact of new, rapid diagnostics.

  7. Putaminal volume and diffusion in early familial Creutzfeldt-Jakob disease.

    PubMed

    Seror, Ilana; Lee, Hedok; Cohen, Oren S; Hoffmann, Chen; Prohovnik, Isak

    2010-01-15

    The putamen is centrally implicated in the pathophysiology of Creutzfeldt-Jakob Disease (CJD). To our knowledge, its volume has never been measured in this disease. We investigated whether gross putaminal atrophy can be detected by MRI in early stages, when diffusion is already reduced. Twelve familial CJD patients with the E200K mutation and 22 healthy controls underwent structural and diffusion MRI scans. The putamen was identified in anatomical scans by two methods: manual tracing by a blinded investigator, and automatic parcellation by a computerized segmentation procedure (FSL FIRST). For each method, volume and mean Apparent Diffusion Coefficient (ADC) were calculated. ADC was significantly lower in CJD patients (697 ± 64 μm²/s vs. 750 ± 31 μm²/s, p<0.005), as expected, but the volume was not reduced. The computerized FIRST delineation yielded ADC values comparable to the manual method, but computerized volumes were smaller than manual tracing values. We conclude that significant diffusion reduction in the putamen can be detected by delineating the structure manually or with a computerized algorithm. Our findings confirm and extend previous voxel-based and observational studies. Putaminal volume was not reduced in our early-stage patients, thus confirming that diffusion abnormalities precede detectable atrophy in this structure.

  8. Nike Facility Diagnostics and Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Chan, Yung; Aglitskiy, Yefim; Karasik, Max; Kehne, David; Obenschain, Steve; Oh, Jaechul; Serlin, Victor; Weaver, Jim

    2013-10-01

    The Nike laser-target facility is a 56-beam krypton fluoride system that can deliver 2 to 3 kJ of laser energy at 248 nm onto targets inside a two meter diameter vacuum chamber. Nike is used to study physics and technology issues related to laser direct-drive ICF fusion, including hydrodynamic and laser-plasma instabilities, material behavior at extreme pressures, and optical and x-ray diagnostics for laser-heated targets. A suite of laser and target diagnostics are fielded on the Nike facility, including high-speed, high-resolution x-ray and visible imaging cameras, spectrometers and photo-detectors. A centrally-controlled, distributed computerized data acquisition system provides robust data management and near real-time analysis feedback capability during target shots. Work supported by DOE/NNSA.

  9. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD Without ID: A Multi-site Study.

    PubMed

    Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth

    2015-12-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility.

  10. The autism diagnostic observation schedule, module 4: revised algorithm and standardized severity scores.

    PubMed

    Hus, Vanessa; Lord, Catherine

    2014-08-01

    The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of autism spectrum disorder (ASD) symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80 % in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to examine symptom trajectories from toddler to adulthood.

  11. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    PubMed

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps; however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementation of a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.
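    The confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a proportion; a sketch, using hypothetical counts chosen to reproduce the reported 31.8% (95% CI, 18.1%-45.6%):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion.
    Illustrative; Wilson or exact intervals behave better near 0 or 1."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 14 of 44 is a hypothetical count that reproduces the reported figures;
# the study does not state the denominator for this item.
p, lo, hi = wald_ci(14, 44)
```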

  12. Explaining and Controlling for the Psychometric Properties of Computer-Generated Figural Matrix Items

    ERIC Educational Resources Information Center

    Freund, Philipp Alexander; Hofer, Stefan; Holling, Heinz

    2008-01-01

    Figural matrix items are a popular task type for assessing general intelligence (Spearman's g). Items of this kind can be constructed rationally, allowing the implementation of computerized generation algorithms. In this study, the influence of different task parameters on the degree of difficulty in matrix items was investigated. A sample of N =…

  13. Computerized Algorithms: Evaluation of Capability to Predict Graduation from Air Force Training.

    DTIC Science & Technology

    1980-09-01

    The retrievable snippet consists of garbled table fragments, including "Distribution of the ASVAB Administrative Aptitude Test Scores for the 1976 AFSC 64530 Population" and a correlation matrix of the independent variables.

  14. An expert system for headache diagnosis: the Computerized Headache Assessment tool (CHAT).

    PubMed

    Maizels, Morris; Wolfe, William J

    2008-01-01

    Migraine is a highly prevalent chronic disorder associated with significant morbidity. Chronic daily headache syndromes, while less common, are less likely to be recognized, and impair quality of life to an even greater extent than episodic migraine. A variety of screening and diagnostic tools for migraine have been proposed and studied. Few investigators have developed and evaluated computerized programs to diagnose headache. The aim was to develop and determine the accuracy and utility of a computerized headache assessment tool (CHAT). CHAT was designed to identify all of the major primary headache disorders, distinguish daily from episodic types, and recognize medication overuse. CHAT was developed using an expert systems approach to headache diagnosis, with initial branch points determined by headache frequency and duration. Appropriate clinical criteria are presented relevant to brief and longer-lasting headaches. CHAT was posted on a website using Microsoft Active Server Pages and a SQL Server database. A convenience sample of patients who presented to the adult urgent care department with headache, and patients in a family practice waiting room, were solicited to participate. Those who completed the on-line questionnaire were contacted for a diagnostic interview. One hundred thirty-five patients completed CHAT and 117 completed a diagnostic interview. CHAT correctly identified 35/35 (100%) patients with episodic migraine and 42/49 (85.7%) of patients with transformed migraine. CHAT also correctly identified 11/11 patients with chronic tension-type headache, 2/2 with episodic tension-type headache, and 1/1 with episodic cluster headache. Medication overuse was correctly recognized in 43/52 (82.7%). The most common misdiagnoses by CHAT were seen in patients with transformed migraine or new daily persistent headache. Fifty patients were referred to their primary care physician and 62 to the headache clinic.
Of 29 patients referred to the PCP with a confirmed diagnosis of migraine, 25 made a follow-up appointment, the PCP diagnosed migraine in 19, and initiated migraine-specific therapy or prophylaxis in 17. The described expert system displays high diagnostic accuracy for migraine and other primary headache disorders, including daily headache syndromes and medication overuse. As part of a disease management program, CHAT led to patients receiving appropriate diagnoses and therapy. Limitations of the system include patient willingness to utilize the program, introducing such a process into the culture of medical care, and the difficult distinction of transformed migraine.
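    CHAT's initial branch points are headache frequency and duration; a toy version of that triage logic (the thresholds follow common ICHD conventions and are illustrative assumptions, not CHAT's actual rules):

```python
def headache_branch(days_per_month, typical_duration_hours):
    """First branch points of a CHAT-like triage: frequency separates
    chronic daily from episodic syndromes; duration separates brief
    from long-lasting attacks. Thresholds (>=15 headache days/month,
    4 h attack duration) are illustrative ICHD-style conventions."""
    frequency = "chronic daily" if days_per_month >= 15 else "episodic"
    duration = "long-lasting" if typical_duration_hours >= 4 else "brief"
    return frequency, duration
```

A real system would then apply the clinical criteria for each syndrome within the selected branch, plus a medication-overuse check.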

  15. Design of a solar array simulator for the NASA EOS testbed

    NASA Technical Reports Server (NTRS)

    Butler, Steve J.; Sable, Dan M.; Lee, Fred C.; Cho, Bo H.

    1992-01-01

    The present spacecraft solar array simulator addresses both dc and ac characteristics as well as changes in illumination and temperature and performance degradation over the course of array service life. The computerized control system used allows simulation of a complete orbit cycle, in addition to automated diagnostics. The simulator is currently interfaced with the NASA EOS testbed.

  16. Nonlinear optical microscopy: use of second harmonic generation and two-photon microscopy for automated quantitative liver fibrosis studies.

    PubMed

    Sun, Wanxin; Chang, Shi; Tai, Dean C S; Tan, Nancy; Xiao, Guangfa; Tang, Huihuan; Yu, Hanry

    2008-01-01

    Liver fibrosis is associated with an abnormal increase in an extracellular matrix in chronic liver diseases. Quantitative characterization of fibrillar collagen in intact tissue is essential for both fibrosis studies and clinical applications. Commonly used methods, histological staining followed by either semiquantitative or computerized image analysis, have limited sensitivity, accuracy, and operator-dependent variations. The fibrillar collagen in sinusoids of normal livers could be observed through second-harmonic generation (SHG) microscopy. The two-photon excited fluorescence (TPEF) images, recorded simultaneously with SHG, clearly revealed the hepatocyte morphology. We have systematically optimized the parameters for the quantitative SHG/TPEF imaging of liver tissue and developed fully automated image analysis algorithms to extract the information of collagen changes and cell necrosis. Subtle changes in the distribution and amount of collagen and cell morphology are quantitatively characterized in SHG/TPEF images. By comparing to traditional staining, such as Masson's trichrome and Sirius red, SHG/TPEF is a sensitive quantitative tool for automated collagen characterization in liver tissue. Our system allows for enhanced detection and quantification of sinusoidal collagen fibers in fibrosis research and clinical diagnostics.

  17. Diagnostic rules and algorithms for the diagnosis of non-acute heart failure in patients 80 years of age and older: a diagnostic accuracy and validation study.

    PubMed

    Smeets, Miek; Degryse, Jan; Janssens, Stefan; Matheï, Catharina; Wallemacq, Pierre; Vanoverschelde, Jean-Louis; Aertgeerts, Bert; Vaes, Bert

    2016-10-06

    Different diagnostic algorithms for non-acute heart failure (HF) exist. Our aim was to compare the ability of these algorithms to identify HF in symptomatic patients aged 80 years and older and identify those patients at highest risk for mortality. Diagnostic accuracy and validation study. General practice, Belgium. 365 patients with HF symptoms aged 80 years and older (BELFRAIL cohort). Participants underwent a full clinical assessment, including a detailed echocardiographic examination at home. The diagnostic accuracy of 4 different algorithms was compared using an intention-to-diagnose analysis. The European Society of Cardiology (ESC) definition of HF was used as the reference standard for HF diagnosis. Kaplan-Meier curves for 5-year all-cause mortality were plotted and HRs and corresponding 95% CIs were calculated to compare the mortality risk predicting abilities of the different algorithms. Net reclassification improvement (NRI) was calculated. The prevalence of HF was 20% (n=74). The 2012 ESC algorithm yielded the highest sensitivity (92%, 95% CI 83% to 97%) as well as the highest referral rate (71%, n=259), whereas the Oudejans algorithm yielded the highest specificity (73%, 95% CI 68% to 78%) and the lowest referral rate (36%, n=133). These differences could be ascribed to differences in N-terminal pro-brain natriuretic peptide cut-off values (125 vs 400 pg/mL). The Kelder and Oudejans algorithms exhibited NRIs of 12% (95% CI 0.7% to 22%, p=0.04) and 22% (95% CI 9% to 32%, p<0.001), respectively, compared with the ESC algorithm. All algorithms detected patients at high risk for mortality, with HRs ranging from 1.9 (95% CI 1.4 to 2.5; Kelder) to 2.3 (95% CI 1.7 to 3.1; Oudejans). No significant differences were observed among the algorithms with respect to mortality risk predicting abilities. 
    Choosing a diagnostic algorithm for non-acute HF in elderly patients represents a trade-off between sensitivity and specificity, mainly depending on differences between cut-off values for natriuretic peptides. Published by the BMJ Publishing Group Limited.
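    For binary diagnostic rules, the net reclassification improvement reported above reduces to the change in sensitivity among cases plus the change in specificity among non-cases; a sketch with hypothetical inputs, not the study's values:

```python
def binary_nri(sens_new, spec_new, sens_ref, spec_ref):
    """NRI of a binary diagnostic rule versus a reference rule:
    gain in sensitivity among cases plus gain in specificity
    among non-cases (each term may be negative)."""
    return (sens_new - sens_ref) + (spec_new - spec_ref)

# Hypothetical: a rule that trades 12 points of sensitivity
# for 21 points of specificity against the reference rule.
nri = binary_nri(0.80, 0.73, 0.92, 0.52)  # ~0.09, a 9% net improvement
```

A positive NRI can thus coexist with a sensitivity loss, which is exactly the sensitivity/specificity trade-off the authors describe.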

  18. A diagnostic algorithm for atypical spitzoid tumors: guidelines for immunohistochemical and molecular assessment.

    PubMed

    Cho-Vega, Jeong Hee

    2016-07-01

    Atypical spitzoid tumors are a morphologically diverse group of rare melanocytic lesions most frequently seen in children and young adults. As atypical spitzoid tumors bear striking resemblance to Spitz nevus and spitzoid melanomas clinically and histopathologically, it is crucial to determine its malignant potential and predict its clinical behavior. To date, many researchers have attempted to differentiate atypical spitzoid tumors from unequivocal melanomas based on morphological, immonohistochemical, and molecular diagnostic differences. A diagnostic algorithm is proposed here to assess the malignant potential of atypical spitzoid tumors by using a combination of immunohistochemical and cytogenetic/molecular tests. Together with classical morphological evaluation, this algorithm includes a set of immunohistochemistry assays (p16(Ink4a), a dual-color Ki67/MART-1, and HMB45), fluorescence in situ hybridization (FISH) with five probes (6p25, 8q24, 11q13, CEN9, and 9p21), and an array-based comparative genomic hybridization. This review discusses details of the algorithm, the rationale of each test used in the algorithm, and utility of this algorithm in routine dermatopathology practice. This algorithmic approach will provide a comprehensive diagnostic tool that complements conventional histological criteria and will significantly contribute to improve the diagnosis and prediction of the clinical behavior of atypical spitzoid tumors.

  19. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  20. Cytological Evaluation of Thyroid Lesions by Nuclear Morphology and Nuclear Morphometry.

    PubMed

    Yashaswini, R; Suresh, T N; Sagayaraj, A

    2017-01-01

    Fine needle aspiration (FNA) of the thyroid gland is an effective diagnostic method. The Bethesda system for reporting thyroid cytopathology classifies thyroid lesions into six categories and gives an implied risk of malignancy and a management protocol for each category. Though the system gives specific criteria, diagnostic dilemmas still exist. Using nuclear morphometry, we can quantify a number of parameters, such as those related to nuclear size and shape. The evaluation of nuclear morphometry is not well established in thyroid cytology. To classify thyroid lesions on fine needle aspiration cytology (FNAC) using the Bethesda system and to evaluate the significance of nuclear parameters in improving the prediction of thyroid malignancy. In the present study, 120 FNAC cases of thyroid lesions with histological diagnosis were included. Computerized nuclear morphometry was done on 81 cases which had confirmed cytohistological correlation, using Aperio computer software. One hundred nuclei from each case were outlined and eight nuclear parameters were analyzed. In the present study, thyroid lesions were more common in females, with an M:F ratio of 1:5, and occurred most commonly at 40-60 years of age. Under the Bethesda system, 73 (60.83%) were category II; 14 (11.6%) were category III, 3 (2.5%) were category IV, 8 (6.6%) were category V, and 22 (18.3%) were category VI, which were malignant on histopathological correlation. Sensitivity, specificity, and diagnostic accuracy of the Bethesda reporting system were 62.5, 84.38, and 74.16%, respectively. Minimal nuclear diameter, maximal nuclear diameter, nuclear perimeter, and nuclear area were higher in the malignant group compared to the nonneoplastic and benign groups. The Bethesda system is a useful standardized system of reporting thyroid cytopathology. It gives an implied risk of malignancy. Nuclear morphometry by computerized image analysis can be utilized as an additional diagnostic tool.

  1. Interactive computerized learning program exposes veterinary students to challenging international animal-health problems.

    PubMed

    Conrad, Patricia A; Hird, Dave; Arzt, Jonathan; Hayes, Rick H; Magliano, Dave; Kasper, Janine; Morfin, Saul; Pinney, Stephen

    2007-01-01

    This article describes a computerized case-based CD-ROM (CD) on international animal health that was developed to give veterinary students an opportunity to "virtually" work alongside veterinarians and other veterinary students as they try to solve challenging disease problems relating to tuberculosis in South African wildlife, bovine abortion in Mexico, and neurologic disease in horses in Rapa Nui, Chile. Each of the three case modules presents, in a highly interactive format, a problem or mystery that must be solved by the learner. As well as acquiring information via video clips and text about the specific health problem, learners obtain information about the different countries, animal-management practices, diagnostic methods, related disease-control issues, economic factors, and the opinions of local experts. After assimilating this information, the learner must define the problem and formulate an action plan or make a recommendation or diagnosis. The computerized program invokes three principles of adult education: active learning, learner-centered education, and experiential learning. A medium that invokes these principles is a potentially efficient learning tool and template for developing other case-based problem-solving computerized programs. The program is accessible on the World Wide Web at . A broadband Internet connection is recommended, since the modules make extensive use of embedded video and audio clips. Information on how to obtain the CD is also provided.

  2. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    PubMed Central

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  3. Development of a Computerized Adaptive Test of Children's Gross Motor Skills.

    PubMed

    Huang, Chien-Yu; Tung, Li-Chen; Chou, Yeh-Tai; Wu, Hing-Man; Chen, Kuan-Lin; Hsieh, Ching-Lin

    2018-03-01

    To (1) develop a computerized adaptive test for gross motor skills (GM-CAT) as a diagnostic test and an outcome measure, using the gross motor skills subscale of the Comprehensive Developmental Inventory for Infants and Toddlers (CDIIT-GM) as the candidate item bank; and (2) examine the psychometric properties and the efficiency of the GM-CAT. Retrospective study. A developmental center of a medical center. Children with and without developmental delay (N=1738). Not applicable. The CDIIT-GM contains 56 universal items on gross motor skills assessing children's antigravity control, locomotion, and body movement coordination. The item bank of the GM-CAT had 44 items that met the dichotomous Rasch model's assumptions. High Rasch person reliabilities were found for each estimated gross motor skill for the GM-CAT (Rasch person reliabilities =.940-.995, SE=.68-2.43). For children aged 6 to 71 months, the GM-CAT had good concurrent validity (r values =.97-.98), adequate to excellent diagnostic accuracy (area under receiver operating characteristics curve =.80-.98), and moderate to large responsiveness (effect size =.65-5.82). The average number of items administered by the GM-CAT ranged from 7 to 11, depending on the age group. The results of this study support the use of the GM-CAT as a diagnostic and outcome measure to estimate children's gross motor skills in both research and clinical settings. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
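
    The diagnostic accuracy above is reported as the area under the receiver operating characteristic curve (AUC). As a minimal generic illustration (not the study's software), AUC can be computed directly from its rank-sum interpretation: the probability that a randomly chosen case scores higher than a randomly chosen non-case, with ties counting one half.

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a positive case outscores a negative case (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```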

  4. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    The research presents an image quality analysis and enhancement proposals for the biophotonics area. The sources of image problems are reviewed and analyzed, and the problems with the greatest impact are examined in the context of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented at acquisition time, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after applying the proposed filter; moreover, the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
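
    As a sketch of the idea only (assumptions: a grayscale image as a NumPy array and a separable box blur as the low-pass filter; the paper's actual filter design and parameters are not specified here), the low-frequency illumination field can be estimated by heavy smoothing and divided out:

```python
import numpy as np

def estimate_illumination(img, k=15):
    """Low-frequency illumination estimate via a separable k-by-k box blur,
    with replicate padding so the output matches the input size."""
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def correct_illumination(img, k=15, eps=1e-6):
    """Divide out the low-frequency field, rescaling to preserve mean brightness."""
    img = np.asarray(img, dtype=float)
    low = estimate_illumination(img, k)
    return img / (low + eps) * low.mean()
```

    On an image with a smooth illumination gradient, the corrected output is substantially flatter while high-frequency (lesion) detail survives the division.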

  5. Evidence-Based Diagnostic Algorithm for Glioma: Analysis of the Results of Pathology Panel Review and Molecular Parameters of EORTC 26951 and 26882 Trials.

    PubMed

    Kros, Johan M; Huizer, Karin; Hernández-Laín, Aurelio; Marucci, Gianluca; Michotte, Alex; Pollo, Bianca; Rushing, Elisabeth J; Ribalta, Teresa; French, Pim; Jaminé, David; Bekka, Nawal; Lacombe, Denis; van den Bent, Martin J; Gorlia, Thierry

    2015-06-10

    With the rapid discovery of prognostic and predictive molecular parameters for glioma, the status of histopathology in the diagnostic process should be scrutinized. Our project aimed to construct a diagnostic algorithm for gliomas based on molecular and histologic parameters with independent prognostic values. The pathology slides of 636 patients with gliomas who had been included in EORTC 26951 and 26882 trials were reviewed using virtual microscopy by a panel of six neuropathologists who independently scored 18 histologic features and provided an overall diagnosis. The molecular data for IDH1, 1p/19q loss, EGFR amplification, loss of chromosome 10 and chromosome arm 10q, gain of chromosome 7, and hypermethylation of the promoter of MGMT were available for some of the cases. The slides were divided into discovery (n = 426) and validation (n = 210) sets. The diagnostic algorithm resulting from analysis of the discovery set was validated in the latter. In 66% of cases, consensus of overall diagnosis was present. A diagnostic algorithm consisting of two molecular markers and one consensus histologic feature was created by conditional inference tree analysis. The order of prognostic significance was: 1p/19q loss, EGFR amplification, and astrocytic morphology, which resulted in the identification of four diagnostic nodes. Validation of the nodes in the validation set confirmed the prognostic value (P < .001). We succeeded in creating a timely diagnostic algorithm for anaplastic glioma based on multivariable analysis of consensus histopathology and molecular parameters. © 2015 by American Society of Clinical Oncology.

  6. The Autism Diagnostic Observation Schedule, Module 4: Revised Algorithm and Standardized Severity Scores

    PubMed Central

    Hus, Vanessa; Lord, Catherine

    2014-01-01

    The Autism Diagnostic Observation Schedule, 2nd Edition includes revised diagnostic algorithms and standardized severity scores for modules used to assess children and adolescents of varying language abilities. Comparable revisions have not yet been applied to the Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of ASD symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80% in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to increase understanding of adults with ASD. PMID:24590409

  7. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (N = 93)

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A. C. J.

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in "J Autism Dev Disord" 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy or controls. Discriminative ability of the revised algorithm ASD cut-off…

  8. Automated image alignment and segmentation to follow progression of geographic atrophy in age-related macular degeneration.

    PubMed

    Ramsey, David J; Sunness, Janet S; Malviya, Poorva; Applegate, Carol; Hager, Gregory D; Handa, James T

    2014-07-01

    To develop a computer-based image segmentation method for standardizing the quantification of geographic atrophy (GA). The authors present an automated image segmentation method based on the fuzzy c-means clustering algorithm for the detection of GA lesions. The method is evaluated by comparing computerized segmentation against outlines of GA drawn by an expert grader for a longitudinal series of fundus autofluorescence images with paired 30° color fundus photographs for 10 patients. The automated segmentation method showed excellent agreement with an expert grader for fundus autofluorescence images, achieving a performance level of 94 ± 5% sensitivity and 98 ± 2% specificity on a per-pixel basis for the detection of GA area, but performed less well on color fundus photographs with a sensitivity of 47 ± 26% and specificity of 98 ± 2%. The segmentation algorithm identified 75 ± 16% of the GA border correctly in fundus autofluorescence images compared with just 42 ± 25% for color fundus photographs. The results of this study demonstrate a promising computerized segmentation method that may enhance the reproducibility of GA measurement and provide an objective strategy to assist an expert in the grading of images.
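
    The fuzzy c-means step can be sketched as follows: a generic 1-D implementation over pixel intensities using the standard alternating update rules (the authors' feature set, initialization, and parameters are not reproduced here).

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, max_iter=100, tol=1e-5):
    """Fuzzy c-means over 1-D samples x. Returns the c cluster centers and the
    c-by-n membership matrix (each column sums to 1). m > 1 is the fuzzifier."""
    x = np.asarray(x, dtype=float)
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))  # spread initial centers
    u = np.full((c, len(x)), 1.0 / c)
    for _ in range(max_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))                 # standard membership update
        u_new /= u_new.sum(axis=0)
        um = u_new ** m
        centers = um @ x / um.sum(axis=1)               # fuzzily weighted means
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u
```

    For GA detection, thresholding the membership of the "atrophic" cluster (e.g. the darker autofluorescence cluster) yields the binary lesion mask that is then compared against the expert outlines.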

  9. Effects of Methylphenidate and Bupropion on DHEA-S and Cortisol Plasma Levels in Attention-Deficit Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Lee, Moon-Soo; Yang, Jae-Won; Ko, Young-Hoon; Han, Changsu; Kim, Seung-Hyun; Lee, Min-Soo; Joe, Sook-Haeng; Jung, In-Kwa

    2008-01-01

    We evaluated plasma levels of DHEA-S and cortisol before and after treating ADHD patients with one of two medications: methylphenidate (n = 12) or bupropion (n = 10). Boys with ADHD (combined type) were evaluated with the Korean ADHD rating scale (K-ARS) and the computerized ADHD diagnostic system (ADS). All assessments were measured at baseline…

  10. Direct estimates of national neonatal and child cause–specific mortality proportions in Niger by expert algorithm and physician–coded analysis of verbal autopsy interviews

    PubMed Central

    Kalter, Henry D.; Roubanatou, Abdoulaye–Mamadou; Koffi, Alain; Black, Robert E.

    2015-01-01

    Background This study was one of a set of verbal autopsy investigations undertaken by the WHO/UNICEF-supported Child Health Epidemiology Reference Group (CHERG) to derive direct estimates of the causes of neonatal and child deaths in high priority countries of sub-Saharan Africa. The objective of the study was to determine the cause distributions of neonatal (0-27 days) and child (1-59 months) mortality in Niger. Methods Verbal autopsy interviews were conducted of random samples of 453 neonatal deaths and 620 child deaths from 2007 to 2010 identified by the 2011 Niger National Mortality Survey. The cause of each death was assigned using two methods: computerized expert algorithms arranged in a hierarchy and physician completion of a death certificate for each child. The findings of the two methods were compared to each other, and plausibility checks were conducted to assess which is the preferred method. Comparison of some direct measures from this study with CHERG modeled cause of death estimates are discussed. Findings The cause distributions of neonatal deaths as determined by expert algorithms and the physician were similar, with the same top three causes by both methods and all but two other causes within one rank of each other. Although child causes of death differed more, the reasons often could be discerned by analyzing algorithmic criteria alongside the physician's application of required minimal diagnostic criteria. Including all algorithmic (primary and co-morbid) and physician (direct, underlying and contributing) diagnoses in the comparison minimized the differences, with kappa coefficients greater than 0.40 for five of 11 neonatal diagnoses and nine of 13 child diagnoses. By algorithmic diagnosis, early onset neonatal infection was significantly associated (χ2 = 13.2, P < 0.001) with maternal infection, and the geographic distribution of child meningitis deaths closely corresponded with that for meningitis surveillance cases and deaths.
Conclusions Verbal autopsy conducted in the context of a national mortality survey can provide useful estimates of the cause distributions of neonatal and child deaths. While the current study found reasonable agreement between the expert algorithm and physician analyses, it also demonstrated greater plausibility for two algorithmic diagnoses, and validation work is needed to ascertain these findings. Direct, large-scale measurement of causes of death can complement and strengthen modeled estimates, and in some settings may be preferred over them. PMID:25969734
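
    The kappa coefficients cited above measure chance-corrected agreement between the two cause-assignment methods. A minimal implementation of Cohen's kappa (a generic sketch, not CHERG's analysis code): observed agreement is compared with the agreement expected from each method's marginal category frequencies.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's marginals."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_obs = (r1 == r2).mean()
    p_exp = sum((r1 == c).mean() * (r2 == c).mean() for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

    A kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is why values above 0.40 are conventionally read as moderate agreement.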

  11. Convergence and divergence of neurocognitive patterns in schizophrenia and depression.

    PubMed

    Liang, Sugai; Brown, Matthew R G; Deng, Wei; Wang, Qiang; Ma, Xiaohong; Li, Mingli; Hu, Xun; Juhas, Michal; Li, Xinmin; Greiner, Russell; Greenshaw, Andrew J; Li, Tao

    2018-02-01

    Neurocognitive impairments are frequently observed in schizophrenia and major depressive disorder (MDD). However, it remains unclear whether reported neurocognitive abnormalities could objectively identify an individual as having schizophrenia or MDD. The current study included 220 first-episode patients with schizophrenia, 110 patients with MDD and 240 demographically matched healthy controls (HC). All participants performed the short version of the Wechsler Adult Intelligence Scale-Revised in China; the immediate and delayed logical memory of the Wechsler Memory Scale-Revised in China; and seven tests from the computerized Cambridge Neurocognitive Test Automated Battery to evaluate neurocognitive performance. The three-class AdaBoost tree-based ensemble algorithm was employed to identify neurocognitive endophenotypes that may distinguish between subjects in the categories of schizophrenia, depression and HC. Hierarchical cluster analysis was applied to further explore the neurocognitive patterns in each group. The AdaBoost algorithm identified each individual's diagnostic class with an average accuracy of 77.73% (80.81% for schizophrenia, 53.49% for depression and 86.21% for HC). The average area under ROC curve was 0.92 (0.96 in schizophrenia, 0.86 in depression and 0.92 in HC). Hierarchical cluster analysis revealed convergent altered neurocognition patterns in MDD and schizophrenia related to shifting, sustained attention, planning, working memory and visual memory. Divergent neurocognition patterns for MDD and schizophrenia, related to motor speed, general intelligence, perceptual sensitivity and reversal learning, were also identified. Neurocognitive abnormalities could predict whether the individual has schizophrenia, depression or neither with relatively high accuracy. Additionally, the neurocognitive features showed promise as endophenotypes for discriminating between schizophrenia and depression. Copyright © 2017 Elsevier B.V. All rights reserved.
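
    The accuracy figures above combine per-class accuracy (the fraction of each diagnostic group correctly identified) with an overall accuracy that is, in effect, those per-class figures weighted by group size. A small generic sketch of how such figures are computed from predictions (not the study's code):

```python
import numpy as np

def classwise_and_overall_accuracy(y_true, y_pred):
    """Per-class recall (accuracy within each diagnostic group) and overall
    accuracy; the latter equals the class recalls weighted by class sizes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = {c: float((y_pred[y_true == c] == c).mean())
                 for c in np.unique(y_true)}
    overall = float((y_true == y_pred).mean())
    return per_class, overall
```

    Reporting both views matters here: the overall 77.73% masks the much weaker 53.49% recall for the (smaller) depression group.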

  12. Decreasing triage time: effects of implementing a step-wise ESI algorithm in an EHR.

    PubMed

    Villa, Stephen; Weber, Ellen J; Polevoi, Steven; Fee, Christopher; Maruoka, Andrew; Quon, Tina

    2018-06-01

    To determine if adapting a widely-used triage scale into a computerized algorithm in an electronic health record (EHR) shortens emergency department (ED) triage time. Before-and-after quasi-experimental study. Urban, tertiary care hospital ED. Consecutive adult patient visits between July 2011 and June 2013. A step-wise algorithm, based on the Emergency Severity Index (ESI-5) was programmed into the triage module of a commercial EHR. Duration of triage (triage interval) for all patients and change in percentage of high acuity patients (ESI 1 and 2) completing triage within 15 min, 12 months before-and-after implementation of the algorithm. Multivariable analysis adjusted for confounders; interrupted time series demonstrated effects over time. Secondary outcomes examined quality metrics and patient flow. A total of 32,546 patient visits before and 33,032 after the intervention were included. Post-intervention patients were slightly older, census was higher and the admission rate slightly increased. Median triage interval was 5.92 min (interquartile range [IQR] 4.2-8.73) before and 2.8 min (IQR 1.88-4.23) after the intervention (P < 0.001). Adjusted mean triage interval decreased 3.4 min (95% CI: -3.6, -3.2). The proportion of high acuity patients completing triage within 15 min increased from 63.9% (95% CI 62.5, 65.2%) to 75.0% (95% CI 73.8, 76.1). Monthly time series demonstrated immediate and sustained improvement following the intervention. Return visits within 72 h and door-to-balloon time were unchanged. Total length of stay was similar. The computerized triage scale improved speed of triage, allowing more high acuity patients to be seen within recommended timeframes, without notable impact on quality.

  13. Efficiently measuring dimensions of the externalizing spectrum model: Development of the Externalizing Spectrum Inventory-Computerized Adaptive Test (ESI-CAT).

    PubMed

    Sunderland, Matthew; Slade, Tim; Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Kramer, Mark D

    2017-07-01

    The development of the Externalizing Spectrum Inventory (ESI) was motivated by the need to comprehensively assess the interrelated nature of externalizing psychopathology and personality using an empirically driven framework. The ESI measures 23 theoretically distinct yet related unidimensional facets of externalizing, which are structured under 3 superordinate factors representing general externalizing, callous aggression, and substance abuse. One limitation of the ESI is its length at 415 items. To facilitate the use of the ESI in busy clinical and research settings, the current study sought to examine the efficiency and accuracy of a computerized adaptive version of the ESI. Data were collected over 3 waves and totaled 1,787 participants recruited from undergraduate psychology courses as well as male and female state prisons. A series of 6 algorithms with different termination rules was simulated to determine the efficiency and accuracy of each test under 3 different assumed distributions. Scores generated using an optimal adaptive algorithm evidenced high correlations (r > .9) with scores generated using the full ESI, brief ESI item-based factor scales, and the 23 facet scales. The adaptive algorithms for each facet administered a combined average of 115 items, a 72% decrease in comparison to the full ESI. Similarly, scores on the item-based factor scales of the ESI-brief form (57 items) were generated using an average of 17 items, a 70% decrease. The current study successfully demonstrates that an adaptive algorithm can generate similar scores for the ESI and the 3 item-based factor scales using a fraction of the total item pool. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
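
    An adaptive algorithm of this kind selects, at each step, the unadministered item that is most informative at the current trait estimate, and stops once the estimate is precise enough or an item budget is exhausted. A toy sketch under a two-parameter logistic (2PL) IRT model with expected a posteriori (EAP) updates (illustrative only; the ESI-CAT's actual models and termination rules differ):

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of endorsing an item under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def adaptive_test(responder, a, b, max_items=10, se_stop=0.35):
    """Maximum-information item selection with EAP trait updates on a grid.
    `responder(i)` returns the (simulated) response to item i."""
    grid = np.linspace(-4.0, 4.0, 161)
    post = np.exp(-0.5 * grid ** 2)          # standard-normal prior, unnormalized
    administered, theta = [], 0.0
    for _ in range(max_items):
        remaining = [i for i in range(len(a)) if i not in administered]
        best = max(remaining, key=lambda i: item_information(theta, a[i], b[i]))
        administered.append(best)
        resp = responder(best)
        p = p_2pl(grid, a[best], b[best])
        post = post * (p if resp else (1.0 - p))   # Bayesian update
        post = post / post.sum()
        theta = float((grid * post).sum())         # EAP estimate
        se = float(np.sqrt(((grid - theta) ** 2 * post).sum()))
        if se < se_stop:                           # stop once precise enough
            break
    return theta, administered
```

    The item savings reported above arise because each administered item is chosen to reduce posterior uncertainty as fast as possible.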

  14. Computerized Respiratory Sounds: Novel Outcomes for Pulmonary Rehabilitation in COPD.

    PubMed

    Jácome, Cristina; Marques, Alda

    2017-02-01

    Computerized respiratory sounds are a simple and noninvasive measure to assess lung function. Nevertheless, their potential to detect changes after pulmonary rehabilitation (PR) is unknown and needs clarification if respiratory acoustics are to be used in clinical practice. Thus, this study investigated the short- and mid-term effects of PR on computerized respiratory sounds in subjects with COPD. Forty-one subjects with COPD completed a 12-week PR program and a 3-month follow-up. Secondary outcome measures included dyspnea, self-reported sputum, FEV1, exercise tolerance, self-reported physical activity, health-related quality of life, and peripheral muscle strength. Computerized respiratory sounds, the primary outcomes, were recorded at right/left posterior chest using 2 stethoscopes. Air flow was recorded with a pneumotachograph. Normal respiratory sounds, crackles, and wheezes were analyzed with validated algorithms. There was a significant effect over time in all secondary outcomes, with the exception of FEV1 and of the impact domain of the St George Respiratory Questionnaire. Inspiratory and expiratory median frequencies of normal respiratory sounds in the 100-300 Hz band were significantly lower immediately (-2.3 Hz [95% CI -4 to -0.7] and -1.9 Hz [95% CI -3.3 to -0.5]) and at 3 months (-2.1 Hz [95% CI -3.6 to -0.7] and -2 Hz [95% CI -3.6 to -0.5]) post-PR. The mean number of expiratory crackles (-0.8, 95% CI -1.3 to -0.3) and inspiratory wheeze occupation rate (median 5.9 vs 0) were significantly lower immediately post-PR. Computerized respiratory sounds were sensitive to short- and mid-term effects of PR in subjects with COPD. These findings are encouraging for the clinical use of respiratory acoustics. Future research is needed to strengthen these findings and explore the potential of computerized respiratory sounds to assess the effectiveness of other clinical interventions in COPD. Copyright © 2017 by Daedalus Enterprises.

  15. The Diagnostic Challenge Competition: Probabilistic Techniques for Fault Diagnosis in Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.
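
    At its core, probabilistic diagnosis updates belief in a fault from observed evidence via Bayes' rule. A deliberately tiny sketch with a single fault node and one observation (ProDiagnose's actual Bayesian networks, compiled to arithmetic circuits, handle many interacting components; the numbers below are invented for illustration):

```python
def fault_posterior(prior_fault, p_evidence_given_fault, p_evidence_given_ok):
    """Posterior probability of a fault given one piece of evidence:
    P(F|E) = P(E|F)P(F) / [P(E|F)P(F) + P(E|ok)P(ok)]."""
    numerator = prior_fault * p_evidence_given_fault
    denominator = numerator + (1.0 - prior_fault) * p_evidence_given_ok
    return numerator / denominator
```

    Even with a sensitive detector, a rare fault (prior 1%) observed through a sensor with a 5% false-alarm rate yields only a modest posterior, which is why combining many evidence sources in a network is essential for high diagnostic accuracy.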

  16. Clinical study of quantitative diagnosis of early cervical cancer based on the classification of acetowhitening kinetics

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.

    2010-03-01

    A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.
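
    The sensitivity (SE) and specificity (SP) figures above come directly from counts of true/false positives and negatives against the reference diagnosis. A generic sketch of the computation (not the study's classification code), with 1 denoting disease and 0 denoting normal:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```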

  17. Development and validation of an automated ventilator-associated event electronic surveillance system: A report of a successful implementation.

    PubMed

    Hebert, Courtney; Flaherty, Jennifer; Smyer, Justin; Ding, Jing; Mangino, Julie E

    2018-03-01

    Surveillance is an important tool for infection control; however, this task can often be time-consuming and take away from infection prevention activities. With the increasing availability of comprehensive electronic health records, there is an opportunity to automate these surveillance activities. The objective of this article is to describe the implementation of an electronic algorithm for ventilator-associated events (VAEs) at a large academic medical center. This article reports on a 6-month manual validation of a dashboard for VAEs. We developed a computerized algorithm for automatically detecting VAEs and compared the output of this algorithm to the traditional, manual method of VAE surveillance. Manual surveillance by the infection preventionists identified 13 possible and 11 probable ventilator-associated pneumonias (VAPs), and the VAE dashboard identified 16 possible and 13 probable VAPs. The dashboard had 100% sensitivity and 100% accuracy when compared with manual surveillance for possible and probable VAP. We report on the successfully implemented VAE dashboard. Workflow of the infection preventionists was simplified after implementation of the dashboard, with subjective time-savings reported. Implementing a computerized dashboard for VAE surveillance at a medical center with a comprehensive electronic health record is feasible; however, this required significant initial and ongoing work on the part of data analysts and infection preventionists. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  18. Computer simulation of a pilot in V/STOL aircraft control loops

    NASA Technical Reports Server (NTRS)

    Vogt, William G.; Mickle, Marlin H.; Zipf, Mark E.; Kucuk, Senol

    1989-01-01

    The objective was to develop a computerized adaptive pilot model for the computer model of the research aircraft, the Harrier II AV-8B V/STOL, with special emphasis on propulsion control. In fact, two versions of the adaptive pilot are given. The first, simply called the Adaptive Control Model (ACM) of a pilot, includes a parameter estimation algorithm for the parameters of the aircraft and an adaptation scheme based on the root locus of the poles of the pilot-controlled aircraft. The second, called the Optimal Control Model of the pilot (OCM), includes an adaptation algorithm and an optimal control algorithm. These computer simulations were developed as a part of the ongoing research program in pilot model simulation supported by NASA Lewis from April 1, 1985 to August 30, 1986 under NASA Grant NAG 3-606 and from September 1, 1986 through November 30, 1988 under NASA Grant NAG 3-729. Once installed, these pilot models permitted the computer simulation of the pilot model to close all of the control loops normally closed by a pilot actually manipulating the control variables. The current version has permitted a baseline comparison of various qualitative and quantitative performance indices for propulsion control, the control loops and the work load on the pilot. Actual data for an aircraft flown by a human pilot furnished by NASA was compared to the outputs furnished by the computerized pilot, and the comparison was found to be favorable.

  19. FPGA based charge acquisition algorithm for soft x-ray diagnostics system

    NASA Astrophysics Data System (ADS)

    Wojenski, A.; Kasprowicz, G.; Pozniak, K. T.; Zabolotny, W.; Byszuk, A.; Juszczyk, B.; Kolasinski, P.; Krawczyk, R. D.; Zienkiewicz, P.; Chernyshova, M.; Czarski, T.

    2015-09-01

    Soft X-ray (SXR) measurement systems working in tokamaks or with laser-generated plasma can expect high photon fluxes. It is therefore necessary to focus on data processing algorithms to achieve the best possible efficiency in terms of processed photon events per second. This paper describes a recently designed algorithm and data flow for implementing charge data acquisition in an FPGA. The algorithms are currently at the implementation stage for the soft X-ray diagnostics system. In addition to the charge processing algorithm, the paper also gives a general firmware overview and describes the data storage methods and other key components of the measurement system. The simulation section presents algorithm performance and the expected maximum photon rate.
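
    A simplified software model of such charge acquisition (hypothetical; the actual FPGA pipeline operates on streaming fixed-point ADC samples in parallel): samples exceeding a baseline-plus-threshold level are grouped into pulse events, and the baseline-subtracted sum over each pulse is taken as the event charge.

```python
def extract_charges(samples, baseline, threshold):
    """Integrate the baseline-subtracted charge of each contiguous run of
    samples exceeding baseline + threshold; one charge value per photon event."""
    charges, acc, in_pulse = [], 0.0, False
    for s in samples:
        if s > baseline + threshold:
            acc += s - baseline
            in_pulse = True
        elif in_pulse:                 # pulse just ended: record its charge
            charges.append(acc)
            acc, in_pulse = 0.0, False
    if in_pulse:                       # pulse still open at end of record
        charges.append(acc)
    return charges
```

    The processed-events-per-second figure of merit corresponds to how fast this loop (pipelined in hardware) can consume samples.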

  20. Computerized provider order entry in the clinical laboratory

    PubMed Central

    Baron, Jason M.; Dighe, Anand S.

    2011-01-01

    Clinicians have traditionally ordered laboratory tests using paper-based orders and requisitions. However, paper orders are becoming increasingly incompatible with the complexities, challenges, and resource constraints of our modern healthcare systems and are being replaced by electronic order entry systems. Electronic systems that allow direct provider input of diagnostic testing or medication orders into a computer system are known as Computerized Provider Order Entry (CPOE) systems. Adoption of laboratory CPOE systems may offer institutions many benefits, including reduced test turnaround time, improved test utilization, and better adherence to practice guidelines. In this review, we outline the functionality of various CPOE implementations, review the reported benefits, and discuss strategies for using CPOE to improve the test ordering process. Further, we discuss barriers to the implementation of CPOE systems that have prevented their more widespread adoption. PMID:21886891

  1. [Assessment of gestures and their psychiatric relevance].

    PubMed

    Bulucz, Judit; Simon, Lajos

    2008-01-01

    Analyzing and investigating non-verbal behavior and gestures has received much attention since the last century. Thanks to the pioneering work of Ekman and Friesen, we have a number of descriptive-analytic, categorizing, and semantic-content-related scales and scoring systems. The generation of gestures, their integration with speech, and inter-cultural differences are in the focus of interest. Furthermore, analysis of the gestural changes caused by lesions of distinct neurological areas points toward the formation of new diagnostic approaches. The more widespread application of computerized methods has resulted in an increasing number of experiments that study gesture generation and reproduction in mechanical and virtual reality. Increasing efforts are directed toward understanding human and computerized recognition of human gestures. In this review we describe these results, emphasizing their relations with psychiatric and neuropsychiatric disorders, specifically schizophrenia and the affective spectrum.

  2. Accurate Identification of Fatty Liver Disease in Data Warehouse Utilizing Natural Language Processing.

    PubMed

    Redman, Joseph S; Natarajan, Yamini; Hou, Jason K; Wang, Jingqi; Hanif, Muzammil; Feng, Hua; Kramer, Jennifer R; Desiderio, Roxanne; Xu, Hua; El-Serag, Hashem B; Kanwal, Fasiha

    2017-10-01

    Natural language processing is a powerful machine learning technique capable of maximizing data extraction from complex electronic medical records. We utilized this technique to develop algorithms capable of "reading" full-text radiology reports to accurately identify the presence of fatty liver disease. Abdominal ultrasound, computerized tomography, and magnetic resonance imaging reports were retrieved from the Veterans Affairs Corporate Data Warehouse from a random national sample of 652 patients. Radiographic fatty liver disease was determined by manual review by two physicians and verified with an expert radiologist. A split validation method was utilized for algorithm development. For all three imaging modalities, the algorithms could identify fatty liver disease with >90% recall and precision, with F-measures >90%. These algorithms could be used to rapidly screen patient records to establish a large cohort to facilitate epidemiological and clinical studies and examine the clinical course and outcomes of patients with radiographic hepatic steatosis.
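    The recall, precision, and F-measure figures quoted above can be computed from any labeled validation set; a minimal sketch (the helper name and list-of-labels layout are our own, not from the paper):

```python
def precision_recall_f1(true_labels, predicted_labels):
    """Compute precision, recall, and balanced F1 from parallel
    lists of binary labels (1 = fatty liver present)."""
    tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t and p)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels) if not t and p)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

    The F-measure here is the balanced F1, the harmonic mean of precision and recall, which is what ">90% recall and precision, with F-measures >90%" implies.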

  3. [X-ray endoscopic semiotics and diagnostic algorithm of radiation studies of preneoplastic gastric mucosa changes].

    PubMed

    Akberov, R F; Gorshkov, A N

    1997-01-01

    The X-ray endoscopic semiotics of precancerous gastric mucosal changes (epithelial dysplasia, intestinal epithelial rearrangement) was examined based on the results of 1,574 gastric examinations. A diagnostic algorithm was developed for radiation studies in the diagnosis of the above pathology.

  4. Computerized tomography with total variation and with shearlets

    NASA Astrophysics Data System (ADS)

    Garduño, Edgar; Herman, Gabor T.

    2017-04-01

    To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function, a particular recent choice for this being the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently-developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion, are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and is quite general: it can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix.
Because the split Bregman algorithm has been used for similar purposes in the previous literature, a section is included comparing the results of the superiorization algorithm with those of the split Bregman algorithm.
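    As a concrete illustration of the TV regularizer discussed above, the (anisotropic) total variation of a discrete image is the sum of absolute differences between horizontally and vertically adjacent pixels; a minimal NumPy sketch (the CT literature often uses the slightly different isotropic variant):

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2D image: the sum of
    absolute differences between neighboring pixel values."""
    img = np.asarray(img, dtype=float)
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbors
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbors
    return dx + dy
```

    A piecewise-constant image has small TV, which is why minimizing TV suppresses noise while preserving sharp edges.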

  5. Computerized Adaptive Assessment of Personality Disorder: Introducing the CAT-PD Project

    PubMed Central

    Simms, Leonard J.; Goldberg, Lewis R.; Roberts, John E.; Watson, David; Welte, John; Rotterman, Jane H.

    2011-01-01

    Assessment of personality disorders (PD) has been hindered by reliance on the problematic categorical model embodied in the most recent Diagnostic and Statistical Model of Mental Disorders (DSM), lack of consensus among alternative dimensional models, and inefficient measurement methods. This article describes the rationale for and early results from an NIMH-funded, multi-year study designed to develop an integrative and comprehensive model and efficient measure of PD trait dimensions. To accomplish these goals, we are in the midst of a five-phase project to develop and validate the model and measure. The results of Phase 1 of the project—which was focused on developing the PD traits to be assessed and the initial item pool—resulted in a candidate list of 59 PD traits and an initial item pool of 2,589 items. Data collection and structural analyses in community and patient samples will inform the ultimate structure of the measure, and computerized adaptive testing (CAT) will permit efficient measurement of the resultant traits. The resultant Computerized Adaptive Test of Personality Disorder (CAT-PD) will be well positioned as a measure of the proposed DSM-5 PD traits. Implications for both applied and basic personality research are discussed. PMID:22804677

  6. 2012 HIV Diagnostics Conference: the molecular diagnostics perspective.

    PubMed

    Branson, Bernard M; Pandori, Mark

    2013-04-01

    2012 HIV Diagnostic Conference Atlanta, GA, USA, 12-14 December 2012. This report highlights the presentations and discussions from the 2012 National HIV Diagnostic Conference held in Atlanta (GA, USA), on 12-14 December 2012. Reflecting changes in the evolving field of HIV diagnostics, the conference provided a forum for evaluating developments in molecular diagnostics and their role in HIV diagnosis. In 2010, the HIV Diagnostics Conference concluded with the proposal of a new diagnostic algorithm which included nucleic acid testing to resolve discordant screening and supplemental antibody test results. The 2012 meeting, picking up where the 2010 meeting left off, focused on scientific presentations that assessed this new algorithm and the role played by RNA testing and new developments in molecular diagnostics, including detection of total and integrated HIV-1 DNA, detection and quantification of HIV-2 RNA, and rapid formats for detection of HIV-1 RNA.

  7. Combination of culture, antigen and toxin detection, and cytotoxin neutralization assay for optimal Clostridium difficile diagnostic testing

    PubMed Central

    Alfa, Michelle J; Sepehri, Shadi

    2013-01-01

    BACKGROUND: There has been a growing interest in developing an appropriate laboratory diagnostic algorithm for Clostridium difficile, mainly as a result of increases in both the number and severity of cases of C difficile infection in the past decade. A C difficile diagnostic algorithm is necessary because diagnostic kits, mostly for the detection of toxins A and B or glutamate dehydrogenase (GDH) antigen, are not sufficient as stand-alone assays for optimal diagnosis of C difficile infection. In addition, conventional reference methods for C difficile detection (eg, toxigenic culture and cytotoxin neutralization [CTN] assays) are not routinely practiced in diagnostic laboratory settings. OBJECTIVE: To review the four-step algorithm used at Diagnostic Services of Manitoba sites for the laboratory diagnosis of toxigenic C difficile. RESULTS: One year of retrospective C difficile data using the proposed algorithm was reported. Of 5695 stool samples tested, 9.1% (n=517) had toxigenic C difficile. Sixty per cent (310 of 517) of toxigenic C difficile stools were detected following the first two steps of the algorithm. CTN confirmation of GDH-positive, toxin A- and B-negative assays resulted in detection of an additional 37.7% (198 of 517) of toxigenic C difficile. Culture of the third specimen, from patients who had two previous negative specimens, detected an additional 2.32% (12 of 517) of toxigenic C difficile samples. DISCUSSION: Using GDH antigen as the screening test and toxin A and B as the confirmatory test for C difficile, 85% of specimens were reported negative or positive within 4 h. Without CTN confirmation for GDH antigen and toxin A and B discordant results, 37% (195 of 517) of toxigenic C difficile stools would have been missed. Following the algorithm, culture was needed for only 2.72% of all specimens submitted for C difficile testing. 
CONCLUSION: The overview of the data illustrated the significance of each stage of this four-step C difficile algorithm and emphasized the value of using CTN assay and culture as parts of an algorithm that ensures accurate diagnosis of toxigenic C difficile. PMID:24421808
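    The four-step flow described above (GDH screen, toxin A/B confirmation, CTN arbitration of discordant results, and culture of a third specimen) can be sketched as a decision function. This is an illustrative simplification: the function name, argument layout, and return labels are ours, and the actual laboratory protocol contains additional rules.

```python
def classify_stool(gdh_positive, toxin_ab_positive,
                   ctn_positive=None, culture_positive=None):
    """Hypothetical decision flow modeled on a GDH-screen /
    toxin-confirm / CTN-arbitrate / culture algorithm."""
    if not gdh_positive:
        return "negative"            # step 1: screen negative
    if toxin_ab_positive:
        return "positive"            # step 2: screen and confirm agree
    # GDH-positive but toxin-negative: discordant, arbitrate with CTN
    if ctn_positive is not None:
        return "positive" if ctn_positive else "negative"
    # a third repeat specimen may go to toxigenic culture
    if culture_positive is not None:
        return "positive" if culture_positive else "negative"
    return "indeterminate"
```

    The structure also shows why 85% of specimens can be resolved quickly: most fall through the first two branches without needing the slower CTN or culture steps.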

  8. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
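    The standard discrete form mentioned above restricts cluster sizes to integer multiples of the monomer. A minimal explicit-Euler step for that form, truncated at a maximum cluster size (so mass leaks once large clusters form, which is exactly the inefficiency that motivates geometric binning), might look like this sketch; the kernel and time step are illustrative:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation.
    n[i] is the concentration of (i+1)-mer clusters; K is a symmetric
    coagulation-kernel matrix, K[i, j] for (i+1)-mer + (j+1)-mer."""
    m = len(n)
    dn = np.zeros(m)
    for k in range(m):
        # gain: i-mer + j-mer -> (k+1)-mer, with (i+1) + (j+1) = k+1
        gain = 0.5 * sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i]
                         for i in range(k))
        # loss: (k+1)-mer coagulates with any other tracked cluster
        loss = n[k] * sum(K[k, j] * n[j] for j in range(m))
        dn[k] = gain - loss
    return n + dt * dn
```

    The cost per step is quadratic in the number of tracked sizes, which is why resolving a large size range with monomer-multiple bins becomes expensive and geometric bins are attractive.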

  9. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
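    A percentage-of-matches ranking of the kind described can be sketched as follows. The record layout (a dict of tooth number to code) and the scoring details are our simplification for illustration, not the WinID3 or study implementation:

```python
def rank_candidates(postmortem, antemortem_records):
    """Rank antemortem records by the percentage of tooth codes
    that match the postmortem record (hypothetical scoring).
    Each record maps tooth number -> one of a few simple codes."""
    scored = []
    for record_id, codes in antemortem_records.items():
        comparable = [t for t in postmortem if t in codes]
        if not comparable:
            scored.append((record_id, 0.0))
            continue
        matches = sum(1 for t in comparable if postmortem[t] == codes[t])
        scored.append((record_id, 100.0 * matches / len(comparable)))
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

    Ranking by match percentage rather than raw match count keeps fragmentary remains, where only a few teeth are comparable, from being systematically pushed down the list.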

  10. A longitudinal study of adult-onset asthma incidence among HMO members

    PubMed Central

    Sama, Susan R; Hunt, Phillip R; Cirillo, CIH Priscilla; Marx, Arminda; Rosiello, Richard A; Henneberger, Paul K; Milton, Donald K

    2003-01-01

    Background HMO databases offer an opportunity for community based epidemiologic studies of asthma incidence, etiology and treatment. The incidence of asthma in HMO populations and the utility of HMO data, including use of computerized algorithms and manual review of medical charts for determining etiologic factors has not been fully explored. Methods We identified adult-onset asthma, using computerized record searches in a New England HMO. Monthly, our software applied exclusion and inclusion criteria to identify an "at-risk" population and "potential cases". Electronic and paper medical records from the past year were then reviewed for each potential case. Persons with other respiratory diseases or insignificant treatment for asthma were excluded. Confirmed adult-onset asthma (AOA) cases were defined as those potential cases with either new-onset asthma or reactivated mild intermittent asthma that had been quiescent for at least one year. We validated the methods by reviewing charts of selected subjects rejected by the algorithm. Results The algorithm was 93 to 99.3% sensitive and 99.6% specific. Sixty-three percent (n = 469) of potential cases were confirmed as AOA. Two thirds of confirmed cases were women with an average age of 34.8 (SD 11.8), and 45% had no evidence of previous asthma diagnosis. The annualized monthly rate of AOA ranged from 4.1 to 11.4 per 1000 at-risk members. Physicians most commonly attribute asthma to infection (59%) and allergy (14%). New-onset cases were more likely attributed to infection, while reactivated cases were more associated with allergies. Medical charts included a discussion of work exposures in relation to asthma in only 32 (7%) cases. Twenty-three of these (72%) indicated there was an association between asthma and workplace exposures for an overall rate of work-related asthma of 4.9%. Conclusion Computerized HMO records can be successfully used to identify AOA. 
Manual review of these records is important to confirm case status and is useful in evaluation of provider consideration of etiologies. We demonstrated that clinicians attribute most AOA to infection and tend to ignore the contribution of environmental and occupational exposures. PMID:12952547
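    The "annualized monthly rate per 1000 at-risk members" quoted above follows from a simple scaling of the monthly case count; a hypothetical helper (the study's exact denominator handling may differ):

```python
def annualized_rate_per_1000(monthly_cases, at_risk_members):
    """Annualize a one-month incidence count and express it as
    cases per 1000 at-risk members (illustrative formula)."""
    return monthly_cases / at_risk_members * 12 * 1000
```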

  11. A longitudinal study of adult-onset asthma incidence among HMO members.

    PubMed

    Sama, Susan R; Hunt, Phillip R; Cirillo, C I H Priscilla; Marx, Arminda; Rosiello, Richard A; Henneberger, Paul K; Milton, Donald K

    2003-08-07

    HMO databases offer an opportunity for community based epidemiologic studies of asthma incidence, etiology and treatment. The incidence of asthma in HMO populations and the utility of HMO data, including use of computerized algorithms and manual review of medical charts for determining etiologic factors has not been fully explored. We identified adult-onset asthma, using computerized record searches in a New England HMO. Monthly, our software applied exclusion and inclusion criteria to identify an "at-risk" population and "potential cases". Electronic and paper medical records from the past year were then reviewed for each potential case. Persons with other respiratory diseases or insignificant treatment for asthma were excluded. Confirmed adult-onset asthma (AOA) cases were defined as those potential cases with either new-onset asthma or reactivated mild intermittent asthma that had been quiescent for at least one year. We validated the methods by reviewing charts of selected subjects rejected by the algorithm. The algorithm was 93 to 99.3% sensitive and 99.6% specific. Sixty-three percent (n = 469) of potential cases were confirmed as AOA. Two thirds of confirmed cases were women with an average age of 34.8 (SD 11.8), and 45% had no evidence of previous asthma diagnosis. The annualized monthly rate of AOA ranged from 4.1 to 11.4 per 1000 at-risk members. Physicians most commonly attribute asthma to infection (59%) and allergy (14%). New-onset cases were more likely attributed to infection, while reactivated cases were more associated with allergies. Medical charts included a discussion of work exposures in relation to asthma in only 32 (7%) cases. Twenty-three of these (72%) indicated there was an association between asthma and workplace exposures for an overall rate of work-related asthma of 4.9%. Computerized HMO records can be successfully used to identify AOA. 
Manual review of these records is important to confirm case status and is useful in evaluation of provider consideration of etiologies. We demonstrated that clinicians attribute most AOA to infection and tend to ignore the contribution of environmental and occupational exposures.

  12. Autism Diagnostic Interview-Revised (ADI-R) Algorithms for Toddlers and Young Preschoolers: Application in a Non-US Sample of 1,104 Children

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; Zander, Eric; Bölte, Sven; Sturm, Harald; Yirmiya, Nurit; Yaari, Maya; Charman, Tony; Salomone, Erica; LeCouteur, Ann; Green, Jonathan; Bedia, Ricardo Canal; Primo, Patricia García; van Daalen, Emma; de Jonge, Maretha V.; Guðmundsdóttir, Emilía; Jóhannsdóttir, Sigurrós; Raleva, Marija; Boskovska, Meri; Rogé, Bernadette; Baduel, Sophie; Moilanen, Irma; Yliherva, Anneli; Buitelaar, Jan; Oosterling, Iris J.

    2015-01-01

    The current study aimed to investigate the Autism Diagnostic Interview-Revised (ADI-R) algorithms for toddlers and young preschoolers (Kim and Lord, "J Autism Dev Disord" 42(1):82-93, 2012) in a non-US sample from ten sites in nine countries (n = 1,104). The construct validity indicated a good fit of the algorithms. The diagnostic…

  13. Systematic review of dermoscopy and digital dermoscopy/ artificial intelligence for the diagnosis of melanoma.

    PubMed

    Rajpara, S M; Botello, A P; Townend, J; Ormerod, A D

    2009-09-01

    Dermoscopy improves diagnostic accuracy of the unaided eye for melanoma, and digital dermoscopy with artificial intelligence or computer diagnosis has also been shown to be useful for the diagnosis of melanoma. At present there is no clear evidence regarding the diagnostic accuracy of dermoscopy compared with artificial intelligence. To evaluate the diagnostic accuracy of dermoscopy and digital dermoscopy/artificial intelligence for melanoma diagnosis and to compare the diagnostic accuracy of the different dermoscopic algorithms with each other and with digital dermoscopy/artificial intelligence for the detection of melanoma. A literature search on dermoscopy and digital dermoscopy/artificial intelligence for melanoma diagnosis was performed using several databases. Titles and abstracts of the retrieved articles were screened using a literature evaluation form. A quality assessment form was developed to assess the quality of the included studies. Heterogeneity among the studies was assessed. Pooled data were analysed using meta-analytical methods and comparisons between different algorithms were performed. Of 765 articles retrieved, 30 studies were eligible for meta-analysis. Pooled sensitivity for artificial intelligence was slightly higher than for dermoscopy (91% vs. 88%; P = 0.076). Pooled specificity for dermoscopy was significantly better than artificial intelligence (86% vs. 79%; P < 0.001). Pooled diagnostic odds ratio was 51.5 for dermoscopy and 57.8 for artificial intelligence, which were not significantly different (P = 0.783). There were no significant differences in diagnostic odds ratio among the different dermoscopic diagnostic algorithms. Dermoscopy and artificial intelligence performed equally well for diagnosis of melanocytic skin lesions. There was no significant difference in the diagnostic performance of various dermoscopy algorithms. 
The three-point checklist, the seven-point checklist and Menzies score had better diagnostic odds ratios than the others; however, these results need to be confirmed by a large-scale high-quality population-based study.
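    The diagnostic odds ratio compared above combines sensitivity and specificity into one figure: the odds of a positive test in disease divided by the odds of a positive test without disease. A minimal helper, for illustration only (the paper's pooled values come from meta-analytic models, not this direct formula):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (TPR/FNR) / (FPR/TNR); larger means better discrimination."""
    positive_odds = sensitivity / (1.0 - sensitivity)  # odds of + test in disease
    negative_odds = (1.0 - specificity) / specificity  # odds of + test, no disease
    return positive_odds / negative_odds
```

    A DOR of 1 means the test carries no diagnostic information; the pooled values near 50 reported above indicate strong discrimination for both dermoscopy and artificial intelligence.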

  14. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer.

    PubMed

    Ehteshami Bejnordi, Babak; Veta, Mitko; Johannes van Diest, Paul; van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A W M; Hermsen, Meyke; Manson, Quirine F; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory Crf; Bult, Peter; Beca, Francisco; Beck, Andrew H; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang-Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee-Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Ahmady Phoulady, Hady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui

    2017-12-12

    Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. 
For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
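    The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen metastasis-positive slide receives a higher score than a randomly chosen negative one. A small sketch of that rank-based (Mann-Whitney) computation, with variable names of our choosing:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Empirical ROC AUC as the normalized Mann-Whitney U statistic:
    the probability that a positive case scores above a negative
    case, with ties counted as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    This quadratic pairwise form is fine for a test set of 129 slides; for large score sets one would sort once and use ranks instead.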

  15. [Clinical management of adrenal incidentalomas: results of a survey].

    PubMed

    Moreno-Fernández, Jesús; García-Manzanares, Alvaro; Sánchez-Covisa, Miguel Aguirre; García, E Inés Rosa Gómez

    2009-12-01

    Incidentalomas are clinically silent adrenal masses that are discovered incidentally during diagnostic testing for clinical conditions unrelated to suspicion of adrenal disease. Several decision algorithms are used in the management of adrenal masses. We evaluated the routine use of these algorithms through a clinical activity questionnaire. The questionnaire included data on the work center, initial hormonal and radiological study, imaging and hormonal tests performed to complete the study, surgical indications and clinical follow-up. Thirty-three endocrinologists (79%) attending the annual congress of the Castilla-La Mancha Society of Endocrinology, Nutrition and Diabetes completed the questionnaire. Forty-six percent considered tumoral size to be the most important factor suggesting malignancy in the initial evaluation of adrenal incidentalomas, the limit being 4 cm for 78% of the endocrinologists. Imaging study was completed by magnetic resonance imaging by 39%. All the physicians always performed screening for hypercortisolism and pheochromocytoma. Other assessments always conducted in all incidentalomas included hyperaldosteronism (76%), sex hormone-producing tumor (51%) and congenital adrenal hyperplasia (30%). Seventy-nine percent of respondents began to refer incidentalomas larger than 4 cm for surgical treatment, and 46% referred all tumors larger than 6 cm for surgical treatment. With regard to hormonal function, patients with pheochromocytoma, Cushing's syndrome, hyperaldosteronism with poorly controlled blood pressure or sex hormone-producing tumors were more frequently referred for surgery. Seventy-six percent of endocrinologists performed clinical follow-up in adrenal incidentalomas larger than 4 cm, preferably through computerized tomography (81%), and repeated studies for hormonal hypercortisolism (97%), primary hyperaldosteronism (42%) and pheochromocytoma (76%) over a 4-5 year period (67%). 
Clinical practice varied among the endocrinologists surveyed, although a certain uniformity in relation to the main guidelines was observed. A tendency to request a greater number of diagnostic tests for initial hormone assessment and clinical follow-up was detected. Assessment, decision-making and medical monitoring in adrenal incidentalomas remain unclear and consequently further studies are required. Copyright 2009 Sociedad Española de Endocrinología y Nutrición. Published by Elsevier Espana. All rights reserved.

  16. Recommendations for the inclusion of Fabry disease as a rare febrile condition in existing algorithms for fever of unknown origin.

    PubMed

    Manna, Raffaele; Cauda, Roberto; Feriozzi, Sandro; Gambaro, Giovanni; Gasbarrini, Antonio; Lacombe, Didier; Livneh, Avi; Martini, Alberto; Ozdogan, Huri; Pisani, Antonio; Riccio, Eleonora; Verrecchia, Elena; Dagna, Lorenzo

    2017-10-01

    Fever of unknown origin (FUO) is a rather rare clinical syndrome representing a major diagnostic challenge. The occurrence of more than three febrile attacks with fever-free intervals of variable duration during 6 months of observation has recently been proposed as a subcategory of FUO, recurrent FUO (RFUO). A substantial number of patients with RFUO have auto-inflammatory genetic fevers, but many patients remain undiagnosed. We hypothesize that this undiagnosed subgroup may comprise, at least in part, a number of rare genetic febrile diseases such as Fabry disease. We aimed to identify key features or potential diagnostic clues for Fabry disease as a model of rare genetic febrile diseases causing RFUO, and to develop diagnostic guidelines for RFUO, using Fabry disease as an example of inserting other rare diseases in the existing FUO algorithms. An international panel of specialists in recurrent fevers and rare diseases, including internists, infectious disease specialists, rheumatologists, gastroenterologists, nephrologists, and medical geneticists convened to review the existing diagnostic algorithms, and to suggest recommendations for arriving at accurate diagnoses on the basis of available literature and clinical experience. By combining specific features of rare diseases with other diagnostic considerations, guidelines have been designed to raise awareness and identify rare diseases among other causes of FUO. The proposed guidelines may be useful for the inclusion of rare diseases in the diagnostic algorithms for FUO. A wide spectrum of patients will be needed to validate the algorithm in different clinical settings.

  17. Computerized study with 201Tl of the cold thyroid node.

    PubMed

    Palermo, F; Saitta, B; Coghetto, F; Tiberio, M; Caldato, L

    1982-02-01

    Because of its physical and potassium-metabolic characteristics 201Tl is more suitable than 131Cs for radioisotopic studies of the cold thyroid nodule, with the further diagnostic possibility of quantitatively assessing intranodular behavior for a specific differentiation among different kinds of neoformations. Using a gamma-camera on line with a computer data processing device, sequential scintiscans were recorded for the first 20-30 min after i.v. administration of 15-20 microCi/kg of radiothallium; delayed sequences were taken at 40-60 min if intranodular uptake appeared. A quantitative appraisal was made of the differential 201Tl uptake-ratio between nodule and healthy thyroid tissue (density-index) and the multiparameter analysis of thyroid time/activity curves generated on the relative regions of interest (ROIs). This computerized study, in 120 out of 293 patients submitted to this radiothallium test, has shown a) diagnostic agreement between clinical-histological and radioisotopic findings in 76 out of 79 colloid-cystic or degenerative neoformations, in all 16 malignant and in 23 out of 25 hyperplastic benign nodules; b) significant statistical difference of the density-index in solid versus cystic but not between benign and malignant nodules; c) different 201Tl kinetic behaviour in different kinds of solid thyroid lesions with a satisfactory statistical difference of the radiothallium nodular disappearance-index.

  18. Evaluation of Internet-Based Clinical Decision Support Systems

    PubMed Central

    Thomas, Karl W; Dayton, Charles S

    1999-01-01

    Background Scientifically based clinical guidelines have become increasingly used to educate physicians and improve quality of care. While individual guidelines are potentially useful, repeated studies have shown that guidelines are ineffective in changing physician behavior. The Internet has evolved as a potentially useful tool for guideline education, dissemination, and implementation because of its open standards and its ability to provide concise, relevant clinical information at the location and time of need. Objective Our objective was to develop and test decision support systems (DSS) based on clinical guidelines which could be delivered over the Internet for two disease models: asthma and tuberculosis (TB) preventive therapy. Methods Using the open standards of HTML and CGI, we developed an acute asthma severity assessment DSS and a preventive tuberculosis treatment DSS based on content from national guidelines that are recognized as standards of care. Both DSSs are published on the Internet and operate through a decision algorithm developed from the parent guidelines, with clinical information provided by the user at the point of clinical care. We tested the effectiveness of each DSS in influencing physician decisions using clinical scenario testing. Results We first validated the asthma algorithm by comparing asthma experts' decisions with the decisions reached by nonpulmonary nurses using the computerized DSS. Using the DSS, nurses scored the same as experts (89% vs. 88%; p = NS). Using the same scenario test instrument, we next compared internal medicine residents using the DSS with residents using a printed version of the National Asthma Education Program-2 guidelines. Residents using the computerized DSS scored significantly better than residents using the paper-based guidelines (92% vs. 84%; p < 0.002). We similarly compared residents using the computerized TB DSS to residents using a printed reference card; the residents using the computerized DSS scored significantly better (95.8% vs. 56.6% correct; p < 0.001). Conclusions Previous work has shown that guidelines disseminated through traditional educational interventions have minimal impact on physician behavior. Although computerized DSSs have been effective in altering physician behavior, many of these systems are not widely available. We have developed two clinical DSSs based on national guidelines and published them on the Internet. Both systems improved physician compliance with national guidelines when tested in clinical scenarios. By providing information that is coupled to relevant activity, we expect that these widely available DSSs will serve as effective educational tools to positively impact physician behavior. PMID:11720915

  19. Research diagnostic criteria for temporomandibular disorders (RDC/TMD): development of image analysis criteria and examiner reliability for image analysis.

    PubMed

    Ahmad, Mansur; Hollender, Lars; Anderson, Quentin; Kartha, Krishnan; Ohrbach, Richard; Truelove, Edmond L; John, Mike T; Schiffman, Eric L

    2009-06-01

    As part of the Multisite Research Diagnostic Criteria For Temporomandibular Disorders (RDC/TMD) Validation Project, comprehensive temporomandibular joint diagnostic criteria were developed for image analysis using panoramic radiography, magnetic resonance imaging (MRI), and computerized tomography (CT). Interexaminer reliability was estimated using the kappa (κ) statistic, and agreement between rater pairs was characterized by overall, positive, and negative percent agreement. Computerized tomography was the reference standard for assessing validity of other imaging modalities for detecting osteoarthritis (OA). For the radiologic diagnosis of OA, reliability of the 3 examiners was poor for panoramic radiography (κ = 0.16), fair for MRI (κ = 0.46), and close to the threshold for excellent for CT (κ = 0.71). Using MRI, reliability was excellent for diagnosing disc displacements (DD) with reduction (κ = 0.78) and for DD without reduction (κ = 0.94), and good for effusion (κ = 0.64). Overall percent agreement for pairwise ratings was ≥82% for all conditions. Positive percent agreement for diagnosing OA was 19% for panoramic radiography, 59% for MRI, and 84% for CT. Using MRI, positive percent agreement for diagnoses of any DD was 95% and of effusion was 81%. Negative percent agreement was ≥88% for all conditions. Compared with CT, panoramic radiography and MRI had poor and marginal sensitivity, respectively, but excellent specificity in detecting OA. Comprehensive image analysis criteria for the RDC/TMD Validation Project were developed, which can reliably be used for assessing OA using CT and for disc position and effusion using MRI.

  20. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned in implementing the approach.
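    The isolation step described above amounts to matching the observed sequence of measurement deviations against fault-predicted deviation signatures. The sketch below is a minimal illustration of that idea only; the fault names, sensors, and signatures are invented and are not taken from the competition testbed.

```python
# Hypothetical fault signatures: for each fault, the predicted sequence of
# qualitative deviations (sensor, direction), '+' meaning above model-predicted.
fault_signatures = {
    "battery_degraded": [("voltage", "-"), ("current", "-")],
    "sensor_bias_v":    [("voltage", "+")],
    "load_short":       [("current", "+"), ("voltage", "-")],
}

def isolate(observed):
    """Keep the faults whose predicted deviation sequence begins with the
    observed sequence of (sensor, direction) events."""
    return [fault for fault, signature in fault_signatures.items()
            if signature[:len(observed)] == observed]

# A low voltage deviation as the first event rules out two of three candidates.
candidates = isolate([("voltage", "-")])
```

As more deviation events arrive, the candidate set shrinks monotonically, which is what makes the event-based scheme incremental.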

  1. A Framework to Debug Diagnostic Matrices

    NASA Technical Reports Server (NTRS)

    Kodal, Anuradha; Robinson, Peter; Patterson-Hine, Ann

    2013-01-01

    Diagnostics is an important concept in system health and monitoring of space operations. Many existing diagnostic algorithms utilize system knowledge in the form of a diagnostic matrix (D-matrix, also known as a diagnostic dictionary, fault signature matrix, or reachability matrix) gleaned from physical models. Sometimes, however, this matrix is not sufficient to obtain high diagnostic performance. In such a case, it is important to modify the D-matrix based on knowledge obtained from other sources, such as time-series data streams (simulated or maintenance data), within the context of a framework that includes the diagnostic/inference algorithm. A systematic and sequential update procedure, the diagnostic modeling evaluator (DME), is proposed to modify the D-matrix and wrapper logic, considering the least expensive solution first. This iterative procedure includes conditions ranging from modifying 0s and 1s in the matrix to adding/removing rows (failure sources) and columns (tests). We evaluate this framework on datasets from the DX Challenge 2009.
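    To make the D-matrix concrete: rows are failure sources, columns are tests, and entry (i, j) is 1 when test j detects source i. A minimal single-fault diagnosis over such a matrix can be sketched as below; the sources, tests, and entries are hypothetical, not from the DX Challenge data.

```python
# Toy D-matrix: D[source][j] = 1 if test j detects that failure source.
D = {
    "pump":  [1, 1, 0],
    "valve": [0, 1, 1],
    "line":  [1, 0, 1],
}

def diagnose(outcomes):
    """outcomes[j] is True if test j failed. Under a single-fault assumption,
    a source is a candidate iff it explains every failed test and is
    exonerated by every passed test."""
    return [source for source, row in D.items()
            if all((row[j] == 1) == failed for j, failed in enumerate(outcomes))]
```

An empty candidate list from real test outcomes is exactly the kind of incoherence the DME procedure targets: some 0 or 1 in the matrix (or a missing row or column) must be revised.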

  2. Eosinophilic pustular folliculitis: A proposal of diagnostic and therapeutic algorithms.

    PubMed

    Nomura, Takashi; Katoh, Mayumi; Yamamoto, Yosuke; Miyachi, Yoshiki; Kabashima, Kenji

    2016-11-01

    Eosinophilic pustular folliculitis (EPF) is a sterile inflammatory dermatosis of unknown etiology. In addition to classic EPF, which affects otherwise healthy individuals, an immunocompromised state can cause immunosuppression-associated EPF (IS-EPF), which may be referred to dermatologists in inpatient services for assessment. Infancy-associated EPF (I-EPF) is the least characterized subtype, being observed mainly in non-Japanese infants. Diagnosis of EPF is challenging because its lesions mimic those of other common diseases, such as acne and dermatomycosis. Furthermore, there is no consensus regarding the treatment of each subtype of EPF. Here, we created algorithms that facilitate the diagnosis of EPF and the selection of therapeutic options on the basis of the published literature. Our diagnostic algorithm comprises a simple flowchart to direct physicians toward the proper diagnosis. Recommended regimens are summarized in an easy-to-comprehend therapeutic algorithm for each subtype of EPF. These algorithms should facilitate the diagnosis and treatment of EPF. © 2016 Japanese Dermatological Association.

  3. Computerized cytometry and wavelet analysis of follicular lesions for detecting malignancy: A pilot study in thyroid cytology.

    PubMed

    Gilshtein, Hayim; Mekel, Michal; Malkin, Leonid; Ben-Izhak, Ofer; Sabo, Edmond

    2017-01-01

    The cytologic diagnosis of indeterminate lesions of the thyroid involves much uncertainty, and the final diagnosis often requires operative resection. Computerized cytomorphometry and wavelet analysis were examined to evaluate their ability to better discriminate between benign and malignant lesions based on cytology slides. Cytologic reports from patients who underwent thyroid operation in a single, tertiary referral center were retrieved. Patients with Bethesda III and IV lesions were divided according to their final histopathology. Cytomorphometry and wavelet analysis were performed on the digitized images of the cytology slides. Cytology slides of 40 patients were analyzed. Seven patients had a histologic diagnosis of follicular malignancy, 13 had follicular adenomas, and 20 had a benign goiter. Computerized cytomorphometry with a combination of descriptors of nuclear size, shape, and texture was able to quantitatively distinguish adenoma from malignancy within the indeterminate group with 95% accuracy. An automated wavelet analysis with a neural network algorithm reached an accuracy of 96% in correctly identifying malignant vs. benign lesions based on cytology. Computerized analysis of cytology slides seems to be more accurate in defining indeterminate thyroid lesions than conventional cytologic analysis, which is based on visual characteristics on cytology as well as the expertise of the cytologist. This pilot study needs to be validated with a greater number of samples. Provided validation is successful, we believe that such methods carry promise for better patient treatment. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2015-06-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2016-03-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Clinically expedient reporting of rapid diagnostic test information.

    PubMed

    Doern, G V

    1986-03-01

    With the development of rapid diagnostic tests in the clinical microbiology laboratory has come an awareness of the importance of rapid results reporting. Clearly, the potential clinical impact of rapid diagnostic tests is dependent on expeditious reporting. Traditional manual reporting systems are encumbered by the necessity of transcribing test information onto hard-copy reports and then distributing such reports into the hands of the user. Laboratory computers, when linked directly to CRTs located in nursing stations, ambulatory clinics, or physicians' offices, both inside and outside of the hospital, permit essentially instantaneous transfer of test results from the laboratory to the clinician. Computer-assisted results reporting, while representing a significant advance over manual reporting systems, is not, however, without problems. Concerns include validation of test information, authorization of users with access to test information, mechanical integrity, and cost. These issues notwithstanding, computerized results reporting will undoubtedly play a central role in optimizing the clinical impact of rapid diagnostic tests.

  7. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination.

    PubMed

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. 
The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.

  8. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination

    NASA Astrophysics Data System (ADS)

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. 
The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.
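    The spectral verification step described in the two records above screens each incoming spectrum with PCA statistics: Hotelling's T² (distance within the model plane) and the Q-residual (distance off the plane). A minimal numpy sketch of those two statistics follows; the data are synthetic and the retained-component count is an arbitrary choice, not the authors' setting.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 synthetic "spectra", 50 bins
Xc = X - X.mean(axis=0)               # mean-center before PCA

k = 5                                 # retained principal components (arbitrary)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = (s[:k] ** 2) / (len(X) - 1)     # variance captured by each retained PC

def t2_q(x):
    """Hotelling's T^2 and Q-residual for one centered spectrum x."""
    t = Vt[:k] @ x                    # scores in the PC subspace
    t2 = float(np.sum(t ** 2 / var))  # variance-weighted in-plane distance
    resid = x - Vt[:k].T @ t          # part of x the PCA model cannot explain
    return t2, float(resid @ resid)

t2, q = t2_q(Xc[0])
```

In the on-line setting, a spectrum whose T² or Q exceeds a threshold fitted on the training database would be rejected before any probabilistic diagnosis is attempted.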

  9. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (n = 93).

    PubMed

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A C J

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in J Autism Dev Disord 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy, or controls. Discriminative ability of the revised algorithm ASD cut-off resembled that of the original algorithm ASD cut-off: highly specific for psychopathy and controls, with lower sensitivity than Hus and Lord (2014; i.e., ASD .61, AD .53). The revised algorithm AD cut-off improved sensitivity over the original algorithm. Discriminating ASD from schizophrenia was still challenging, but the better-balanced sensitivity (.53) and specificity (.78) of the revised algorithm AD cut-off may aid clinicians' differential diagnosis. Findings support using the revised algorithm, which is conceptually consistent with the other modules, thus improving comparability across the lifespan.

  10. Use of electronic data and existing screening tools to identify clinically significant obstructive sleep apnea.

    PubMed

    Severson, Carl A; Pendharkar, Sachin R; Ronksley, Paul E; Tsai, Willis H

    2015-01-01

    To assess the ability of electronic health data and existing screening tools to identify clinically significant obstructive sleep apnea (OSA), as defined by symptomatic or severe OSA. The present retrospective cohort study of 1041 patients referred for sleep diagnostic testing was undertaken at a tertiary sleep centre in Calgary, Alberta. A diagnosis of clinically significant OSA or an alternative sleep diagnosis was assigned to each patient through blinded independent chart review by two sleep physicians. Predictive variables were identified from online questionnaire data, and diagnostic algorithms were developed. The performance of electronically derived algorithms for identifying patients with clinically significant OSA was determined. Diagnostic performance of these algorithms was compared with versions of the STOP-Bang questionnaire and adjusted neck circumference score (ANC) derived from electronic data. Electronic questionnaire data were highly sensitive (>95%) at identifying clinically significant OSA, but not specific. The respiratory disturbance index determined by sleep diagnostic testing was very specific (specificity ≥95%) for clinically relevant disease, but not sensitive (<35%). Derived algorithms had similar accuracy to the STOP-Bang or ANC, but required fewer questions and calculations. These data suggest that a two-step process using a small number of clinical variables (maximizing sensitivity) and objective diagnostic testing (maximizing specificity) is required to identify clinically significant OSA. When used in an online setting, simple algorithms can identify clinically relevant OSA with similar performance to existing decision rules such as the STOP-Bang or ANC.
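    The two-step structure the authors argue for, a deliberately over-inclusive questionnaire stage followed by a specific objective test, can be sketched as below. Every variable name and threshold here is invented for illustration; the study's actual predictors and cut-offs are not reproduced.

```python
def questionnaire_flag(snoring, tiredness, bmi, age):
    # Stage 1: maximize sensitivity, so flag on any plausible indicator.
    # All thresholds are hypothetical.
    return bool(snoring or tiredness or bmi >= 35 or age >= 50)

def clinically_significant_osa(flagged, rdi):
    # Stage 2: objective confirmation (high specificity) via the
    # respiratory disturbance index; the cut-off of 15 is hypothetical.
    return bool(flagged and rdi >= 15)

patient = dict(snoring=True, tiredness=False, bmi=31, age=44)
flag = questionnaire_flag(**patient)
result = clinically_significant_osa(flag, rdi=22)
```

The point of the composition is that neither stage alone suffices: the questionnaire over-calls, and the objective test under-calls unless it is only applied to screened-in patients.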

  11. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
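    The division of labor described above, an estimator for engine health plus a genetic algorithm for sensor-fault quantification, can be illustrated in miniature with a GA estimating a single sensor bias. The model and all numbers below are invented; this is only a sketch of the search idea, not the NASA method.

```python
import random

random.seed(1)
true_bias = 3.2
model_prediction = 100.0                    # nominal model output, no fault
measurement = model_prediction + true_bias  # what the biased sensor reports

def fitness(bias):
    # Higher is better: penalize mismatch between model-plus-hypothesized-bias
    # and the actual measurement.
    return -(measurement - (model_prediction + bias)) ** 2

# Simple elitist GA: keep the best candidates, breed mutated offspring.
pop = [random.uniform(-10, 10) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [p + random.gauss(0, 0.5)
                     for p in parents for _ in range(2)]

best = max(pop, key=fitness)                # GA estimate of the sensor bias
```

In the full scheme the fitness function would compare a suite of sensor signatures against a nonlinear engine model, which is what lets the GA rank the likelihood of competing fault hypotheses.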

  12. Dual scan CT image recovery from truncated projections

    NASA Astrophysics Data System (ADS)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects, often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work that enables such machines to scan large objects while maintaining the quality of the recovered image.

  13. The PHQ-8 as a measure of current depression in the general population.

    PubMed

    Kroenke, Kurt; Strine, Tara W; Spitzer, Robert L; Williams, Janet B W; Berry, Joyce T; Mokdad, Ali H

    2009-04-01

    The eight-item Patient Health Questionnaire depression scale (PHQ-8) is established as a valid diagnostic and severity measure for depressive disorders in large clinical studies. Our objectives were to assess the PHQ-8 as a depression measure in a large, epidemiological population-based study, and to determine the comparability of depression as defined by the PHQ-8 diagnostic algorithm vs. a PHQ-8 cutpoint ≥ 10. Random-digit-dialed telephone survey of 198,678 participants in the 2006 Behavioral Risk Factor Surveillance Survey (BRFSS), a population-based survey in the United States. Current depression as defined by either the DSM-IV based diagnostic algorithm (i.e., major depressive or other depressive disorder) of the PHQ-8 or a PHQ-8 score ≥ 10; respondent sociodemographic characteristics; number of days of impairment in the past 30 days in multiple domains of health-related quality of life (HRQoL). The prevalence of current depression was similar whether defined by the diagnostic algorithm or a PHQ-8 score ≥ 10 (9.1% vs. 8.6%). Depressed patients had substantially more days of impairment across multiple domains of HRQoL, and the impairment was nearly identical in depressed groups defined by either method. Of the 17,040 respondents with a PHQ-8 score ≥ 10, major depressive disorder was present in 49.7%, other depressive disorder in 23.9%, depressed mood or anhedonia in another 22.8%, and no evidence of depressive disorder or depressive symptoms in only 3.5%. The PHQ-8 diagnostic algorithm rather than an independent structured psychiatric interview was used as the criterion standard. The PHQ-8 is a useful depression measure for population-based studies, and either its diagnostic algorithm or a cutpoint ≥ 10 can be used for defining current depression.
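    The two case definitions compared in this record, the DSM-IV-based diagnostic algorithm versus a total-score cutpoint of 10 or more, can be sketched as follows. The algorithm branch reflects the commonly described PHQ-8 scoring (a symptom counts as present at a response of 2, "more than half the days", and a core item of anhedonia or depressed mood is required) and should be treated as an approximation, not the study's exact implementation.

```python
def phq8_total(items):
    """items: eight responses scored 0-3; total severity score 0-24."""
    assert len(items) == 8 and all(0 <= i <= 3 for i in items)
    return sum(items)

def depressed_by_cutpoint(items):
    # Definition 1: total score of 10 or more.
    return phq8_total(items) >= 10

def depressed_by_algorithm(items):
    # Definition 2 (approximate): current depression = major depressive
    # disorder (>= 5 symptoms present, incl. a core item) or other depressive
    # disorder (2-4 symptoms present, incl. a core item). "Present" means a
    # response of 2 or 3. Items 1 and 2 are anhedonia and depressed mood.
    present = [i >= 2 for i in items]
    core = present[0] or present[1]
    return core and sum(present) >= 2
```

As the abstract reports, the two definitions classify nearly the same population as depressed, which is why either can serve in surveillance settings.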

  14. An Efficient Reachability Analysis Algorithm

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2008-01-01

    A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.

  15. Prosthetic joint infection: development of an evidence-based diagnostic algorithm.

    PubMed

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons, and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA), and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed, transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  16. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service

    PubMed Central

    Grude, Nils; Lindbaek, Morten

    2015-01-01

    Objective. To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Design. Randomized controlled trial. Setting. Out-of-hours service, Oslo, Norway. Intervention. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010–November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Subjects. Women (n = 441) aged 16–55 years. Mean age in both groups 27 years. Main outcome measures. Number of days until symptomatic resolution. Results. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was insignificantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Conclusion. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder. PMID:25961367

  17. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service.

    PubMed

    Bollestad, Marianne; Grude, Nils; Lindbaek, Morten

    2015-06-01

    To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Randomized controlled trial. Out-of-hours service, Oslo, Norway. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010-November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Women (n = 441) aged 16-55 years. Mean age in both groups 27 years. Number of days until symptomatic resolution. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was insignificantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder.

  18. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer

    PubMed Central

    Veta, Mitko; Johannes van Diest, Paul; van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A. W. M.; Hermsen, Meyke; Manson, Quirine F; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory CRF; Bult, Peter; Beca, Francisco; Beck, Andrew H; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang-Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee-Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Ahmady Phoulady, Hady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M. Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui

    2017-01-01

    Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands, who rated the likelihood of nodal metastases for each slide in a flexible 2-hour session simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, using receiver operating characteristic (ROC) curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the ROC curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC comparable with that of the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting. PMID:29234806
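
    The ranking metric throughout this record is the area under the ROC curve. As a reminder of what that number means, here is a minimal pure-Python sketch (the slide scores below are hypothetical, not CAMELYON16 data) computing AUC via its rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one.

```python
def auc(scores, labels):
    """AUC from per-slide scores and binary labels (1 = metastasis).

    Equivalent to the normalized Mann-Whitney U statistic: each
    positive/negative pair contributes 1 if the positive scores higher,
    0.5 on a tie, 0 otherwise.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for 4 positive and 4 negative slides:
scores = [0.95, 0.80, 0.60, 0.40, 0.55, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
print(auc(scores, labels))  # 0.9375
```

    An AUC of 1.0 corresponds to perfect separation of positives from negatives; 0.5 is chance level, which is why the range 0.556-0.994 spans near-useless to near-perfect algorithms.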

  19. Wide-field synovial fluid imaging using polarized lens-free on-chip microscopy for point-of-care diagnostics of gout (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Lee, Seung Yoon; Zhang, Yun; Furst, Daniel; Fitzgerald, John; Ozcan, Aydogan

    2016-03-01

    Gout and pseudogout are forms of crystal arthropathy caused by monosodium urate (MSU) and calcium pyrophosphate dihydrate (CPPD) crystals in the joint, respectively, which can result in painful joints. Detecting the uniquely shaped, birefringent MSU/CPPD crystals in a synovial fluid sample using a compensated polarizing microscope has been the gold standard for diagnosis since the 1960s. However, this can be time-consuming and inaccurate, especially if there are only a few crystals in the fluid. The high cost and bulkiness of conventional microscopes can also be limiting for point-of-care diagnosis. Lens-free on-chip microscopy based on digital holography routinely achieves high-throughput and high-resolution imaging in a cost-effective and field-portable design. Here we demonstrate, for the first time, polarized lens-free on-chip imaging of MSU and CPPD crystals over a wide field-of-view (FOV ~ 20.5 mm2, i.e., ~20-fold larger than a typical 20X objective-lens FOV) for point-of-care diagnostics of gout and pseudogout. Circularly polarized partially-coherent light is used to illuminate the synovial fluid sample on a glass slide, after which a quarter-wave plate and an angle-mismatched linear polarizer are used to analyze the transmitted light. Two lens-free holograms of the MSU/CPPD sample are taken, with the sample rotated by 90°, to rule out any non-birefringent objects within the specimen. A phase-recovery algorithm is also used to improve the reconstruction quality, and digital pseudo-coloring is utilized to match the color and contrast of the lens-free image to that of a gold-standard microscope image, easing examination by a rheumatologist or laboratory technician and facilitating computerized analysis.
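
    The contrast mechanism that makes birefringent crystals stand out in polarized imaging can be sketched with Jones calculus. This is an illustration of the general principle, not the authors' exact optical train: a retarding (birefringent) sample between crossed linear polarizers transmits light proportional to sin²(δ/2), while a non-birefringent object is extinguished.

```python
import cmath
import math

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]

def retarder(delta, theta):
    """Jones matrix of a wave plate: retardance delta, fast axis at theta."""
    D = [[1, 0], [0, cmath.exp(1j * delta)]]
    return matmul(rot(-theta), matmul(D, rot(theta)))

def transmitted_intensity(delta):
    E = [1.0, 0.0]                               # after a 0-deg polarizer
    E = matvec(retarder(delta, math.pi / 4), E)  # sample, fast axis at 45 deg
    return abs(E[1]) ** 2                        # analyzer at 90 deg keeps E_y

print(transmitted_intensity(0.0))          # non-birefringent object: ~0
print(transmitted_intensity(math.pi / 2))  # quarter-wave retardance: ~0.5
```

    In the paper's scheme the two holograms with the sample rotated by 90° exploit exactly this orientation dependence to reject non-birefringent debris.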

  20. Classifying syndromes in Chinese medicine using multi-label learning algorithm with relevant features for each label.

    PubMed

    Xu, Jin; Xu, Zhao-Xia; Lu, Ping; Guo, Rui; Yan, Hai-Xia; Xu, Wen-Jie; Wang, Yi-Qin; Xia, Chun-Ming

    2016-11-01

    To develop an effective Chinese medicine (CM) diagnostic model of coronary heart disease (CHD) and to confirm the scientific validity of the CM theoretical basis from an algorithmic viewpoint. Four types of objective diagnostic data were collected from 835 CHD patients by using a self-developed CM inquiry scale for the diagnosis of heart problems, a tongue diagnosis instrument, a ZBOX-I pulse digital collection instrument, and a sound acquisition system. These diagnostic data were analyzed and a CM diagnostic model was established using a multi-label learning algorithm (REAL). REAL was employed to establish a five-pattern CM diagnostic model covering Xin (Heart) qi deficiency, Xin yang deficiency, Xin yin deficiency, blood stasis, and phlegm, which had recognition rates of 80.32%, 89.77%, 84.93%, 85.37%, and 69.90%, respectively. The multi-label learning method established using four diagnostic models based on mutual information feature selection yielded good recognition results. The characteristic model parameters were selected by maximizing the mutual information for each pattern type. The four diagnostic methods used to obtain information in CM, i.e., observation, auscultation and olfaction, inquiry, and pulse diagnosis, can be characterized by these parameters, which is consistent with CM theory.
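
    The selection principle the abstract describes scores each feature by its mutual information with a label and keeps the highest-scoring ones. A minimal sketch of that scoring step (REAL itself is not public here; the binary symptom indicators below are hypothetical):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two discrete sequences of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

label    = [1, 1, 1, 1, 0, 0, 0, 0]   # syndrome present / absent
feature1 = [1, 1, 1, 1, 0, 0, 0, 0]   # perfectly informative symptom
feature2 = [1, 0, 1, 0, 1, 0, 1, 0]   # uninformative symptom

print(mutual_information(feature1, label))  # 1.0 bit
print(mutual_information(feature2, label))  # 0.0 bits
```

    Ranking all candidate features this way, separately for each of the five labels, is what gives each syndrome its own relevant feature subset.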

  1. Early Diagnosis of Breast Cancer.

    PubMed

    Wang, Lulu

    2017-07-05

    Early-stage cancer detection could significantly reduce breast cancer death rates in the long term. The most critical point for best prognosis is to identify early-stage cancer cells. Investigators have studied many breast diagnostic approaches, including mammography, magnetic resonance imaging, ultrasound, computerized tomography, positron emission tomography, and biopsy. However, these techniques have limitations: they can be expensive, time-consuming, or unsuitable for young women. Developing a highly sensitive and rapid diagnostic method for early-stage breast cancer is therefore an urgent need. In recent years, investigators have turned their attention to the development of biosensors that detect breast cancer using different biomarkers. Apart from biosensors and biomarkers, microwave imaging techniques have also been intensely studied as a promising diagnostic tool for rapid and cost-effective early-stage breast cancer detection. This paper provides an overview of recent important achievements in breast screening methods (particularly microwave imaging) and breast biomarkers along with biosensors for rapidly diagnosing breast cancer.

  2. [Possibilities of the TruScreen for screening of precancer and cancer of the uterine cervix].

    PubMed

    Zlatkov, V

    2009-01-01

    The classic approach to detection of pre-cancer and cancer of the uterine cervix includes cytological examination, followed by colposcopic assessment of the detected cytological abnormalities. Real-time devices use in-vivo techniques for the measurement, computerized analysis, and classification of different types of cervical tissue. The aim of the present review is to present the technical characteristics and to discuss the diagnostic possibilities of TruScreen, an automated opto-electronic system for cervical screening. Analysis of the diagnostic value of the method reported in the literature for different grades of intraepithelial lesions shows that it has higher sensitivity (67-70%) and lower specificity (81%) in comparison with the Pap test (45-69% sensitivity and 95% specificity). This makes the method suitable for independent primary screening, as well as for supplementing the diagnostic assurance of the cytological method.

  3. Seizures in the elderly: development and validation of a diagnostic algorithm.

    PubMed

    Dupont, Sophie; Verny, Marc; Harston, Sandrine; Cartz-Piver, Leslie; Schück, Stéphane; Martin, Jennifer; Puisieux, François; Alecu, Cosmin; Vespignani, Hervé; Marchal, Cécile; Derambure, Philippe

    2010-05-01

    Seizures are frequent in the elderly, but their diagnosis can be challenging. The objective of this work was to develop and validate an expert-based algorithm for the diagnosis of seizures in elderly people. A multidisciplinary group of neurologists and geriatricians developed a diagnostic algorithm using a combination of selected clinical, electroencephalographical and radiological criteria. The algorithm was validated by multicentre retrospective analysis of data from patients referred for specific symptoms and classified by the experts as epileptic or not. The algorithm was applied to all the patients, and the diagnosis provided by the algorithm was compared with the clinical diagnosis of the experts. Twenty-nine clinical, electroencephalographical and radiological criteria were selected for the algorithm. According to the combination of criteria, seizures were classified into four levels of diagnosis: certain, highly probable, possible or improbable. To validate the algorithm, the medical records of 269 elderly patients were analyzed (138 with epileptic seizures, 131 with non-epileptic manifestations). Patients were mainly referred for a transient focal deficit (40%), confusion (38%), or unconsciousness (27%). The algorithm best classified certain and highly probable seizures versus possible and improbable seizures, with 86.2% sensitivity and 67.2% specificity. Using logistic regression, two simplified models were developed: the first with 13 criteria (Se 85.5%, Sp 90.1%) and the second with only 7 criteria (Se 84.8%, Sp 88.6%). In conclusion, the present study validated the use of a revised diagnostic algorithm to help diagnose epileptic seizures in the elderly. A prospective study is planned to further validate this algorithm. Copyright 2010 Elsevier B.V. All rights reserved.
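
    The reported sensitivity and specificity come from dichotomizing the four diagnostic levels (certain/highly probable counted as "seizure") and comparing against the expert diagnosis. A minimal sketch of that evaluation step, with hypothetical patient data rather than the study's records:

```python
def se_sp(predictions, truths):
    """Sensitivity and specificity from paired boolean predictions/truths."""
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)

# Dichotomize the algorithm's four output levels:
POSITIVE_LEVELS = {"certain", "highly probable"}
levels = ["certain", "possible", "highly probable", "improbable", "possible"]
truths = [True, False, True, False, True]   # expert diagnosis: seizure?
preds = [lvl in POSITIVE_LEVELS for lvl in levels]

se, sp = se_sp(preds, truths)
print(se, sp)
```

    With these toy data the third seizure patient (classified only "possible") costs sensitivity, mirroring how the cut-point between levels trades Se against Sp in the study.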

  4. Inaccuracy of Wolff-Parkinson-White accessory pathway localization algorithms in children and patients with congenital heart defects.

    PubMed

    Bar-Cohen, Yaniv; Khairy, Paul; Morwood, James; Alexander, Mark E; Cecchin, Frank; Berul, Charles I

    2006-07-01

    ECG algorithms used to localize accessory pathways (AP) in patients with Wolff-Parkinson-White (WPW) syndrome have been validated in adults, but less is known of their use in children, especially in patients with congenital heart disease (CHD). We hypothesize that these algorithms have low diagnostic accuracy in children and even lower in those with CHD. Pre-excited ECGs in 43 patients with WPW and CHD (median age 5.4 years [0.9-32 years]) were evaluated and compared to 43 consecutive WPW control patients without CHD (median age 14.5 years [1.8-18 years]). Two blinded observers predicted AP location using 2 adult and 1 pediatric WPW algorithms, and a third blinded observer served as a tiebreaker. Predicted locations were compared with ablation-verified AP location to identify (a) exact match for AP location and (b) match for laterality (left-sided vs right-sided AP). In control children, adult algorithms were accurate in only 56% and 60%, while the pediatric algorithm was correct in 77%. In 19 patients with Ebstein's anomaly, diagnostic accuracy was similar to controls with at times an even better ability to predict laterality. In non-Ebstein's CHD, however, the algorithms were markedly worse (29% for the adult algorithms and 42% for the pediatric algorithms). A relatively large degree of interobserver variability was seen (kappa values from 0.30 to 0.58). Adult localization algorithms have poor diagnostic accuracy in young patients with and without CHD. Both adult and pediatric algorithms are particularly misleading in non-Ebstein's CHD patients and should be interpreted with caution.
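
    Interobserver variability is reported here as Cohen's kappa (0.30 to 0.58), which corrects raw agreement for agreement expected by chance. A minimal implementation on hypothetical paired laterality calls from two observers:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels of equal length."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

obs1 = ["left", "left", "right", "right", "left", "right"]
obs2 = ["left", "right", "right", "right", "left", "left"]
print(round(cohens_kappa(obs1, obs2), 3))  # 0.333
```

    A kappa of 0.333, as here, is conventionally read as "fair" agreement, which is roughly the lower end of the range the study observed.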

  5. Comparison of Diagnostic Algorithms for Detecting Toxigenic Clostridium difficile in Routine Practice at a Tertiary Referral Hospital in Korea.

    PubMed

    Moon, Hee-Won; Kim, Hyeong Nyeon; Hur, Mina; Shim, Hee Sook; Kim, Heejung; Yun, Yeo-Min

    2016-01-01

    Since every single test has some limitations for detecting toxigenic Clostridium difficile, multistep algorithms are recommended. This study aimed to compare the current, representative diagnostic algorithms for detecting toxigenic C. difficile, using VIDAS C. difficile toxin A&B (toxin ELFA), VIDAS C. difficile GDH (GDH ELFA, bioMérieux, Marcy-l'Etoile, France), and Xpert C. difficile (Cepheid, Sunnyvale, California, USA). In 271 consecutive stool samples, toxigenic culture, toxin ELFA, GDH ELFA, and Xpert C. difficile were performed. We simulated two algorithms: screening by GDH ELFA and confirmation by Xpert C. difficile (GDH + Xpert) and combined algorithm of GDH ELFA, toxin ELFA, and Xpert C. difficile (GDH + Toxin + Xpert). The performance of each assay and algorithm was assessed. The agreement of Xpert C. difficile and two algorithms (GDH + Xpert and GDH + Toxin + Xpert) with toxigenic culture were strong (Kappa, 0.848, 0.857, and 0.868, respectively). The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of algorithms (GDH + Xpert and GDH + Toxin + Xpert) were 96.7%, 95.8%, 85.0%, 98.1%, and 94.5%, 95.8%, 82.3%, 98.5%, respectively. There were no significant differences between Xpert C. difficile and two algorithms in sensitivity, specificity, PPV and NPV. The performances of both algorithms for detecting toxigenic C. difficile were comparable to that of Xpert C. difficile. Either algorithm would be useful in clinical laboratories and can be optimized in the diagnostic workflow of C. difficile depending on costs, test volume, and clinical needs.
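
    PPV and NPV, unlike sensitivity and specificity, depend on prevalence via Bayes' rule, which is why they matter when choosing an algorithm for a particular laboratory. A sketch using the GDH + Xpert sensitivity and specificity from the abstract at an assumed 20% prevalence of toxigenic C. difficile (the prevalence is an illustrative assumption, not a figure from the study):

```python
def ppv_npv(se, sp, prev):
    """Positive and negative predictive values from Se, Sp, prevalence."""
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

# GDH + Xpert figures from the abstract, hypothetical 20% prevalence:
ppv, npv = ppv_npv(se=0.967, sp=0.958, prev=0.20)
print(round(ppv, 3), round(npv, 3))  # 0.852 0.991
```

    Rerunning with a lower assumed prevalence drives PPV down while NPV rises, illustrating why a confirmatory second step after the GDH screen pays off in low-prevalence settings.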

  6. Fully automated chest wall line segmentation in breast MRI by using context information

    NASA Astrophysics Data System (ADS)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Localio, A. Russell; Schnall, Mitchell D.; Kontos, Despina

    2012-03-01

    Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first step for automated breast MRI analysis is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, which makes them prohibitively impractical for processing large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust computerized segmentation method that intensively utilizes context information of breast MR imaging and the breast tissue's morphological characteristics to accurately delineate the breast and chest wall boundary. A critical component is the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used, through a dynamic time warping (DTW) algorithm, to filter out inferior candidates, leaving the optimal one. Our method is validated on a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist. The automated method runs time-efficiently at ~3 minutes for each breast MR image set (28 slices).
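
    The filtering step compares each CWL candidate to the representative contour with dynamic time warping, which tolerates slight shifts and stretches between slices. A minimal DTW distance between two 1-D sequences (the contour profiles below are hypothetical, not the paper's data):

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance with absolute-difference cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

representative = [1, 2, 3, 4, 3]
candidate_good = [1, 2, 2, 3, 4, 3]   # same shape, slightly warped
candidate_bad  = [5, 5, 5, 5, 5]      # dissimilar contour

print(dtw(representative, candidate_good))  # 0.0
print(dtw(representative, candidate_bad))   # 12.0
```

    Candidates whose DTW distance to the representative exceeds a threshold can then be discarded as inferior, which is the role DTW plays in the voting scheme.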

  7. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion

    PubMed Central

    Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.

    2016-01-01

    Background Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613

  8. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion.

    PubMed

    Bone, Daniel; Bishop, Somer L; Black, Matthew P; Goodwin, Matthew S; Lord, Catherine; Narayanan, Shrikanth S

    2016-08-01

    Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. © 2016 Association for Child and Adolescent Mental Health.

  9. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  10. A motion artefact study and locally deforming objects in computerized tomography

    NASA Astrophysics Data System (ADS)

    Hahn, Bernadette N.

    2017-11-01

    Movements of the object during the data collection in computerized tomography can introduce motion artefacts in the reconstructed image. They can be reduced by employing information about the dynamic behaviour within the reconstruction step. However, inaccuracies concerning the movement are inevitable in practice. In this article, we give an explicit characterization of what is visible in an image obtained by a reconstruction algorithm with incorrect motion information. Then, we use this result to study in detail the situation of locally deforming objects, i.e. individual parts of the object have a different dynamic behaviour. In this context, we prove that additional artefacts arise due to the global nature of the Radon transform, even if the motion is exactly known. Based on our analysis, we propose a numerical scheme to reduce these artefacts in the reconstructed image. All our results are illustrated by numerical examples.

  11. Computerized transrectal ultrasound (C-TRUS) of the prostate: detection of cancer in patients with multiple negative systematic random biopsies.

    PubMed

    Loch, Tillmann

    2007-08-01

    This study was designed to compare the diagnostic yield of computerized transrectal ultrasound (C-TRUS) guided biopsies in the detection of prostate cancer in a group of men with a history of multiple systematic random biopsies with no prior evidence of prostate cancer. The question was asked: can we detect cancer by C-TRUS that has been overlooked by multiple systematic biopsies? The entrance criterion for this study was prior negative systematic random biopsies, regardless of the number of biopsy sessions or the number of individual biopsy cores. Serial static TRUS images were evaluated by C-TRUS, which assessed signal information independent of the visual gray scale. Five C-TRUS algorithms were utilized to evaluate the information in the ultrasound signal. Interpretation of the results was documented, and the most suspicious regions marked by C-TRUS were biopsied by guiding the needle to the marked location. Five hundred and forty men were biopsied because of an elevated PSA or an abnormal digital rectal exam; 132 had a history of prior negative systematic random biopsies (1-7 sessions, median: 2, and between 6 and 72 individual prostate biopsies, median: 12 cores). Additionally, a diagnostic TUR-P of the prostate with a benign result had been performed in four patients. The PSA ranged from 3.1-36 ng/ml with a median of 9.01 ng/ml. The prostate volume ranged from 6-203 ml with a median of 42 ml. Of the 132 patients with prior negative systematic random biopsies, cancer was found in 66 (50%) by C-TRUS targeted biopsies. In this group the median number of negative biopsy sessions was two, and a median of 12 biopsy cores had been performed. From the literature we would expect a cancer detection rate with systematic biopsies in this group of approximately 7%. We found only five carcinomas with a Gleason score (GS) of 5, 25 with GS 6, 22 with GS 7, 8 with GS 8, and even 7 with GS 9. The results of this prospective clinical trial indicate that the additional use of C-TRUS identifies clinically significant cancerous lesions that could not be visualized or detected by systematic random biopsies in a very high percentage of cases. In addition, the results of the study support efforts to pursue strategies that utilize expertise and refinement of imaging modalities rather than elevating the number of random biopsies (e.g., 141 cores in one session) in the detection of prostate cancer.

  12. [The importance of neurological examinations in the age of the technological revolution].

    PubMed

    Berbel-García, A; González-Spínola, J; Martínez-Salio, A; Porta-Etessam, J; Pérez-Martínez, D A; de Toledo, M; Sáiz-Díaz, R A

    Neurologic practice and care have changed in many important ways during the past ten years to adapt to the explosion of new information and new technology. Training of students, residents, and practicing physicians has shifted toward a model that focuses almost exclusively on applying new knowledge from biomedical research to neurologic disorders. At the same time, the high demand for outpatient neurologic care prevents adequate patient evaluation. Case 1: a 65-year-old female with occipital headache diagnosed as tensional (normal computerized tomography); two months later she was re-evaluated for intractable pain and a hypoglossal lesion, and an extended computerized tomography scan revealed an occipital condyle metastasis. Case 2: a 21-year-old female with clinical suspicion of demyelinating disease due to repeated facial paresis and a sensory disorder; general examination and computerized tomography revealed temporo-mandibular joint pathology. Case 3: a 60-year-old female evaluated for anticoagulant therapy because of repeated transient ischemic attacks; she suffered from peripheral facial palsy related to an auditory cholesteatoma. Neurologic education is nowadays oriented toward new technologies, while excessive demand prevents adequate evaluation, and a thorough examination is replaced by complementary tests. These situations generate diagnostic mistakes or iatrogenic harm. It would be important to reconsider the profile of neurologic education and to comply with consultation-time recommendations for outpatient care.

  13. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104

  14. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
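
    The core idea of a dynamic-programming separation boundary can be sketched as finding a minimum-cost vertical path through a cost image, where high cost corresponds to lung tissue and low cost to the junction between the lungs. This is a generic illustration on a toy 2-D grid with hypothetical costs, not the authors' cost function or guidance scheme:

```python
def min_cost_path(cost):
    """Column index of the cheapest top-to-bottom path, one per row.

    Each step may move at most one column left or right (8-connected),
    as in classic seam-style dynamic programming.
    """
    rows, cols = len(cost), len(cost[0])
    dp = [row[:] for row in cost]
    for r in range(1, rows):
        for c in range(cols):
            dp[r][c] += min(dp[r - 1][max(c - 1, 0):min(c + 2, cols)])
    # Trace back from the cheapest cell in the bottom row.
    path = [min(range(cols), key=lambda c: dp[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = path[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        path.append(min(range(lo, hi), key=lambda c2: dp[r][c2]))
    return path[::-1]

cost = [
    [9, 1, 9, 9],
    [9, 9, 1, 9],
    [9, 1, 9, 9],
]
print(min_cost_path(cost))  # [1, 2, 1] -- follows the low-cost junction
```

    The "guided" variant in the paper additionally restricts the search using start/end points chosen from adjacent slices, which is what cuts the computation to ~4.6% of plain dynamic programming.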

  15. Using Response-Time Constraints in Item Selection To Control for Differential Speededness in Computerized Adaptive Testing. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L.

    This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computerized adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool, which is updated each time a new item has been administered. Predictions from the model are…

  16. Learning Game Evaluation Functions with a Compound Linear Machine.

    DTIC Science & Technology

    1980-03-01

    Comparison to Non-Learning Shannon Type Programs … Comparison to Samuel's Shannon Type Checker Program … Comparison to an Advice-Taking Shannon … examples of programs or algorithms that play games. The most significant of these is usually held to be A. Samuel's checker playing program because it is … his checker playing program (GRC, 1978:54-72). Another related study performed for the Air Force recommends researching computerized

  17. HIV misdiagnosis in sub-Saharan Africa: performance of diagnostic algorithms at six testing sites

    PubMed Central

    Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng’ang’a, Anne; Andre, Bita; Zahinda, Jean-Paul BN; Fransen, Katrien; Page, Anne-Laure

    2017-01-01

    Introduction: We evaluated the diagnostic accuracy of HIV testing algorithms at six programmes in five sub-Saharan African countries. Methods: In this prospective multisite diagnostic evaluation study (Conakry, Guinea; Kitgum, Uganda; Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; and Baraka, Democratic Republic of Congo), samples from clients (five years of age or older) testing for HIV were collected and compared to a state-of-the-art algorithm from the AIDS reference laboratory at the Institute of Tropical Medicine, Belgium. The reference algorithm consisted of an enzyme-linked immunosorbent assay, a line immunoassay, a single-antigen enzyme immunoassay and a DNA polymerase chain reaction test. Results: Between August 2011 and January 2015, over 14,000 clients were tested for HIV at the six HIV counselling and testing sites. Of those, 2786 (median age: 30; 38.1% males) were included in the study. Sensitivity of the testing algorithms ranged from 89.5% in Arua to 100% in Douala and Conakry, while specificity ranged from 98.3% in Douala to 100% in Conakry. Overall, 24 (0.9%) clients, and as many as 8 per site (1.7%), were misdiagnosed, with 16 false-positive and 8 false-negative results. Six false-negative specimens were retested with the on-site algorithm on the same sample and were found to be positive. Conversely, 13 false-positive specimens were retested: 8 remained false-positive with the on-site algorithm. Conclusions: The performance of algorithms at several sites failed to meet the expectations and thresholds set by the World Health Organization, with unacceptably high rates of false results. Alongside the careful selection of rapid diagnostic tests and the validation of algorithms, strictly observing correct procedures can reduce the risk of false results. In the meantime, to identify false-positive diagnoses at initial testing, patients should be retested upon initiating antiretroviral therapy. PMID:28691437

  18. HIV misdiagnosis in sub-Saharan Africa: performance of diagnostic algorithms at six testing sites.

    PubMed

    Kosack, Cara S; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng'ang'a, Anne; Andre, Bita; Zahinda, Jean-Paul Bn; Fransen, Katrien; Page, Anne-Laure

    2017-07-03

    We evaluated the diagnostic accuracy of HIV testing algorithms at six programmes in five sub-Saharan African countries. In this prospective multisite diagnostic evaluation study (Conakry, Guinea; Kitgum, Uganda; Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; and Baraka, Democratic Republic of Congo), samples from clients (five years of age or older) testing for HIV were collected and compared to a state-of-the-art algorithm from the AIDS reference laboratory at the Institute of Tropical Medicine, Belgium. The reference algorithm consisted of an enzyme-linked immunosorbent assay, a line immunoassay, a single-antigen enzyme immunoassay and a DNA polymerase chain reaction test. Between August 2011 and January 2015, over 14,000 clients were tested for HIV at the six HIV counselling and testing sites. Of those, 2786 (median age: 30; 38.1% males) were included in the study. Sensitivity of the testing algorithms ranged from 89.5% in Arua to 100% in Douala and Conakry, while specificity ranged from 98.3% in Douala to 100% in Conakry. Overall, 24 (0.9%) clients, and as many as 8 per site (1.7%), were misdiagnosed, with 16 false-positive and 8 false-negative results. Six false-negative specimens were retested with the on-site algorithm on the same sample and were found to be positive. Conversely, 13 false-positive specimens were retested: 8 remained false-positive with the on-site algorithm. The performance of algorithms at several sites failed to meet the expectations and thresholds set by the World Health Organization, with unacceptably high rates of false results. Alongside the careful selection of rapid diagnostic tests and the validation of algorithms, strictly observing correct procedures can reduce the risk of false results. In the meantime, to identify false-positive diagnoses at initial testing, patients should be retested upon initiating antiretroviral therapy.
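    The site-level figures quoted above reduce to sensitivity and specificity computed from a 2x2 confusion table against the reference algorithm. A minimal sketch, with made-up counts chosen only to echo the order of magnitude of the reported percentages:

```python
# Sensitivity, specificity, and overall misdiagnosis rate from a 2x2
# confusion table (tp/fn/tn/fp counts are illustrative, not study data).

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)            # true positives among infected
    specificity = tn / (tn + fp)            # true negatives among uninfected
    misdiagnosis_rate = (fp + fn) / (tp + fn + tn + fp)
    return sensitivity, specificity, misdiagnosis_rate

sens, spec, miss = sens_spec(tp=34, fn=4, tn=590, fp=10)
```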

  19. Reliability of implant surgical guides based on soft-tissue models.

    PubMed

    Maney, Pooja; Simmons, David E; Palaiologou, Archontia; Kee, Edwin

    2012-12-01

    The purpose of this study was to determine the accuracy of implant surgical guides fabricated on diagnostic casts. Guides were fabricated with radiopaque rods representing implant positions. Cone beam computerized tomograms were taken with guides in place. Accuracy was evaluated using software to simulate implant placement. Twenty-two sites (47%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). Soft-tissue models do not always provide sufficient accuracy for fabricating implant surgical guides.

  20. A survey of simulators for palpation training.

    PubMed

    Zhang, Yan; Phillips, Roger; Ward, James; Pisharody, Sandhya

    2009-01-01

    Palpation is a widely used diagnostic method in medical practice. The sensitivity of palpation is highly dependent upon the skill of clinicians, which is often difficult to master, so there is a need for simulators in palpation training. This paper summarizes important work and the latest achievements in simulation for palpation training. Three types of simulators are surveyed: physical models, virtual reality (VR)-based simulations, and hybrid (computerized and physical) simulators. Comparisons among the different kinds of simulators are presented.

  1. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.
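    With one index number per voxel and isotropic 1.5 mm voxels, the volume of any segmented structure is just a voxel count times the voxel volume. A minimal sketch (the tiny label array and index value are illustrative, not the phantom's actual segmentation):

```python
# Structure volume from a voxel-label array: count voxels carrying a given
# index number, multiply by the voxel volume (1.5 mm isotropic voxels).

VOXEL_MM = 1.5
labels = [                                  # a toy 2 x 2 x 3 "phantom"
    [[0, 7, 7], [7, 7, 0]],
    [[0, 0, 7], [7, 0, 0]],
]

def structure_volume_mm3(labels, index):
    count = sum(v == index for plane in labels for row in plane for v in row)
    return count * VOXEL_MM ** 3

vol = structure_volume_mm3(labels, 7)       # 6 voxels x 3.375 mm^3
```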

  2. The potential of positron emission tomography/computerized tomography (PET/CT) scanning as a detector of high-risk patients with oral infection during preoperative staging.

    PubMed

    Yamashiro, Keisuke; Nakano, Makoto; Sawaki, Koichi; Okazaki, Fumihiko; Hirata, Yasuhisa; Takashiba, Shogo

    2016-08-01

    It is sometimes difficult to determine during the preoperative period whether patients have oral infections; these patients need treatment to prevent oral infection-related complications from arising during medical therapies, such as cancer therapy and surgery. One of the reasons for this difficulty is that basic medical tests do not identify oral infections, including periodontitis and periapical periodontitis. In this report, we investigated the potential of positron emission tomography/computerized tomography (PET/CT) as a diagnostic tool in these patients. We evaluated eight patients during the preoperative period. All patients underwent PET/CT scanning and were identified as having the signs of oral infection, as evidenced by (18)F-fludeoxyglucose (FDG) localization in the oral regions. Periodontal examination and orthopantomogram evaluation showed severe infection or bone resorption in the oral regions. (18)F-FDG was localized in oral lesions, such as severe periodontitis, apical periodontitis, and pericoronitis of the third molar. The densities of (18)F-FDG were proportional to the degree of inflammation. PET/CT is a potential diagnostic tool for oral infections. It may be particularly useful in patients during preoperative staging, as they frequently undergo scanning at this time, and those identified as having oral infections at this time require treatment before cancer therapy or surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Sex-specific performance of pre-imaging diagnostic algorithms for pulmonary embolism.

    PubMed

    van Mens, T E; van der Pol, L M; van Es, N; Bistervels, I M; Mairuhu, A T A; van der Hulle, T; Klok, F A; Huisman, M V; Middeldorp, S

    2018-05-01

    Essentials: Decision rules for pulmonary embolism are used indiscriminately despite possible sex differences. Various pre-imaging diagnostic algorithms have been investigated in several prospective studies. When analysed at an individual patient data level, the algorithms perform similarly in both sexes. Estrogen use and male sex were associated with a higher prevalence in suspected pulmonary embolism. Background: In patients suspected of pulmonary embolism (PE), clinical decision rules are combined with D-dimer testing to rule out PE, avoiding the need for imaging in those at low risk. Despite sex differences in several aspects of the disease, including its diagnosis, these algorithms are used indiscriminately in women and men. Objectives: To compare the performance, defined as efficiency and failure rate, of three pre-imaging diagnostic algorithms for PE between women and men: the Wells rule with fixed or with age-adjusted D-dimer cut-off, and a recently validated algorithm (YEARS). A secondary aim was to determine the sex-specific prevalence of PE. Methods: Individual patient data were obtained from six studies using the Wells rule (fixed D-dimer, n = 5; age adjusted, n = 1) and from one study using the YEARS algorithm. All studies prospectively enrolled consecutive patients with suspected PE. Main outcomes were efficiency (proportion of patients in which the algorithm ruled out PE without imaging) and failure rate (proportion of patients with PE not detected by the algorithm). Outcomes were estimated using (multilevel) logistic regression models. Results: The main outcomes showed no sex differences in any of the separate algorithms. With all three, the prevalence of PE was lower in women (OR, 0.66, 0.68 and 0.74). In women, estrogen use, adjusted for age, was associated with lower efficiency and higher prevalence and D-dimer levels. Conclusions: The investigated pre-imaging diagnostic algorithms for patients suspected of PE show no sex differences in performance. 
Male sex and estrogen use are both associated with a higher probability of having the disease. © 2018 International Society on Thrombosis and Haemostasis.
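    A pre-imaging rule of the kind compared above can be sketched as follows. The age-adjusted D-dimer cut-off (age x 10 ug/L above age 50, else 500 ug/L) follows the published convention; the single score threshold is a simplified stand-in for the Wells rule, not its exact criteria.

```python
# Rule out pulmonary embolism without imaging when the clinical score is
# low and the D-dimer falls below a fixed or age-adjusted cut-off.
# The score threshold is a hypothetical simplification of the Wells rule.

def rule_out_pe(score, d_dimer_ugL, age, age_adjusted=True):
    if score > 4:                            # "PE likely": image regardless
        return False
    cutoff = 500.0
    if age_adjusted and age > 50:
        cutoff = age * 10.0                  # e.g. 720 ug/L at age 72
    return d_dimer_ugL < cutoff

fixed = rule_out_pe(3, 640, age=72, age_adjusted=False)
adjusted = rule_out_pe(3, 640, age=72, age_adjusted=True)
```

The efficiency gain studied in the paper comes precisely from cases like this one, where the age-adjusted cut-off rules out PE that the fixed cut-off would send to imaging.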

  4. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer's disease diagnosis

    PubMed Central

    Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato

    2014-01-01

    Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
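    The simplest form of automated artifact rejection drops any epoch whose peak amplitude exceeds a threshold, a common EEG heuristic. This toy sketch is a stand-in for the SAR/BSS-SOBI-CCA/wICA pipelines compared in the paper, with an invented 75 uV limit:

```python
# Toy amplitude-threshold epoch rejection: keep only epochs whose peak
# absolute amplitude stays below a fixed limit (hypothetical 75 uV).

def clean_epochs(epochs, limit_uv=75.0):
    return [e for e in epochs if max(abs(x) for x in e) <= limit_uv]

epochs = [[1, -2, 1], [2, 1, -1], [1, 1, 90], [1, -1, 2]]   # uV samples
kept = clean_epochs(epochs)                  # drops the 90 uV artifact epoch
```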

  5. Partial dependence of breast tumor malignancy on ultrasound image features derived from boosted trees

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Zhang, Su; Li, Wenying; Chen, Yaqing; Lu, Hongtao; Chen, Wufan; Chen, Yazhu

    2010-04-01

    Various computerized features extracted from breast ultrasound images are useful in assessing the malignancy of breast tumors. However, the underlying relationship between the computerized features and tumor malignancy may not be linear in nature. We use the decision tree ensemble trained by the cost-sensitive boosting algorithm to approximate the target function for malignancy assessment and to reflect this relationship qualitatively. Partial dependence plots are employed to explore and visualize the effect of features on the output of the decision tree ensemble. In the experiments, 31 image features are extracted to quantify the sonographic characteristics of breast tumors. Patient age is used as an external feature because of its high clinical importance. The area under the receiver-operating characteristic curve of the tree ensembles can reach 0.95 with sensitivity of 0.95 (61/64) at the associated specificity 0.74 (77/104). The partial dependence plots of the four most important features are demonstrated to show the influence of the features on malignancy, and they are in accord with the empirical observations. The results can provide visual and qualitative references on the computerized image features for physicians, and can be useful for enhancing the interpretability of computer-aided diagnosis systems for breast ultrasound.
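    The partial-dependence computation used above works for any trained model: clamp one feature to each grid value for every case in the data set, average the model's output, and plot the averages across the grid. A minimal sketch, with a linear stand-in scorer in place of the boosted tree ensemble:

```python
# Partial dependence of a model's output on one feature: average the
# prediction over the data while clamping that feature to each grid value.
# The "model" and data here are illustrative placeholders.

def partial_dependence(model, data, feature_idx, grid):
    curve = []
    for g in grid:
        total = 0.0
        for row in data:
            x = list(row)
            x[feature_idx] = g               # clamp the feature of interest
            total += model(x)
        curve.append(total / len(data))
    return curve

model = lambda x: 0.5 * x[0] + x[1]          # stand-in for the tree ensemble
data = [(1.0, 2.0), (3.0, 4.0)]
pd_curve = partial_dependence(model, data, feature_idx=0, grid=[0.0, 2.0])
```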

  6. Development of a computerized assessment of clinician adherence to a treatment guideline for patients with bipolar disorder.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; John Rush, A; Lynn Crismon, M; Witte, B; Webster, J

    2004-01-01

    The adoption of treatment guidelines for complex psychiatric illness is increasing. Treatment decisions in psychiatry depend on a number of variables, including severity of symptoms, past treatment history, patient preferences, medication tolerability, and clinical response. While patient outcomes may be improved by the use of treatment guidelines, there is no agreed upon standard by which to assess the degree to which clinician behavior corresponds to those recommendations. This report presents a method to assess clinician adherence to the complex multidimensional treatment guideline for bipolar disorder utilized in the Texas Medication Algorithm Project. The steps involved in the development of this system are presented, including the reliance on standardized documentation, defining core variables of interest, selecting criteria for operationalization of those variables, and computerization of the assessment of adherence. The computerized assessment represents an improvement over other assessment methods, which have relied on laborious and costly chart reviews to extract clinical information and to analyze provider behavior. However, it is limited by the specificity of decisions that guided the adherence scoring process. Preliminary findings using this system with 2035 clinical visits conducted for the bipolar disorder module of TMAP Phase 3 are presented. These data indicate that this system of guideline adherence monitoring is feasible.

  7. Automated cellular pathology in noninvasive confocal microscopy

    NASA Astrophysics Data System (ADS)

    Ting, Monica; Krueger, James; Gareau, Daniel

    2014-03-01

    A computer algorithm was developed to automatically identify and count melanocytes and keratinocytes in 3D reflectance confocal microscopy (RCM) images of the skin. Computerized pathology increases our understanding of, and enables prevention of, superficial spreading melanoma (SSM). The machine learning stage estimated cell size from the images with a 2-D Fourier transform and built a matching mask from the erf() function to model the cells. Implementation involved processing the images to identify cells whose image segments yielded the smallest difference when the mask was subtracted. With further simplification of the algorithm, the program may be applied directly to RCM images to indicate the presence of keratinocytes in seconds and to quantify keratinocyte size in the en face plane as a function of depth. Using this system, the algorithm can identify irregularities in the maturation and differentiation of keratinocytes, thereby signaling the possible presence of cancer.

  8. Generation algorithm of craniofacial structure contour in cephalometric images

    NASA Astrophysics Data System (ADS)

    Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.

    2010-02-01

    Anatomical structure tracing on cephalograms is a significant step in cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches; the manual approach is limited in accuracy and repeatability. In this paper we develop and test a novel method for automatic localization of craniofacial structure based on edges detected in the region of interest. Based on the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining the tissue contour is put forward. Using edge detection with a specific threshold, an improved bidirectional contour-tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process repeatedly searches for an edge pixel in the neighborhood of the previously found edge pixel to segment the images, and the craniofacial structures are then obtained. The effectiveness of the algorithm is demonstrated by preliminary experimental results obtained with the proposed method.

  9. A Recommendation Algorithm for Automating Corollary Order Generation

    PubMed Central

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875

  10. A recommendation algorithm for automating corollary order generation.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; McCoy, J M

    2009-11-14

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards.
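    Item-based collaborative filtering over order history can be sketched as follows: items co-occurring in the same encounters get a high cosine similarity, and the top neighbours of an index order become candidate corollary orders. The order codes are invented for illustration; the paper additionally weights suggestions with association-rule interestingness measures, which this sketch omits.

```python
# Item-based collaborative filtering over encounter order sets: cosine
# similarity from co-occurrence counts, top-k neighbours as suggestions.

from math import sqrt

orders = [                                   # one set of item codes per encounter
    {"heparin", "aPTT"},
    {"heparin", "aPTT", "CBC"},
    {"CBC"},
    {"heparin", "aPTT"},
]

def cosine(a, b):
    together = sum(1 for o in orders if a in o and b in o)
    na = sum(1 for o in orders if a in o)
    nb = sum(1 for o in orders if b in o)
    return together / sqrt(na * nb) if na and nb else 0.0

def suggest(item, k=1):
    others = {i for o in orders for i in o} - {item}
    return sorted(others, key=lambda i: -cosine(item, i))[:k]

top = suggest("heparin")                     # aPTT always co-occurs with heparin
```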

  11. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  12. N-terminal pro-B-type natriuretic peptide diagnostic algorithm versus American Heart Association algorithm for Kawasaki disease.

    PubMed

    Dionne, Audrey; Meloche-Dumas, Léamarie; Desjardins, Laurent; Turgeon, Jean; Saint-Cyr, Claire; Autmizguine, Julie; Spigelblatt, Linda; Fournier, Anne; Dahdah, Nagib

    2017-03-01

    Diagnosis of Kawasaki disease (KD) can be challenging in the absence of a confirmatory test or pathognomonic finding, especially when clinical criteria are incomplete. We recently proposed serum N-terminal pro-B-type natriuretic peptide (NT-proBNP) as an adjunctive diagnostic test. We retrospectively tested a new algorithm to help KD diagnosis based on NT-proBNP, coronary artery dilation (CAD) at onset, and abnormal serum albumin or C-reactive protein (CRP). The goal was to assess the performance of the algorithm and compare its performance with that of the 2004 American Heart Association (AHA)/American Academy of Pediatrics (AAP) algorithm. The algorithm was tested on 124 KD patients with NT-proBNP measured on admission at the present institutions between 2007 and 2013. Age at diagnosis was 3.4 ± 3.0 years, with a median of five diagnostic criteria; and 55 of the 124 patients (44%) had incomplete KD. CA complications occurred in 64 (52%), with aneurysm in 14 (11%). Using this algorithm, 120/124 (97%) were to be treated, based on high NT-proBNP alone for 79 (64%); on onset CAD for 14 (11%); and on high CRP or low albumin for 27 (22%). Using the AHA/AAP algorithm, 22/47 (47%) of the eligible patients with incomplete KD would not have been referred for treatment, compared with 3/55 (5%) with the NT-proBNP algorithm (P < 0.001). This NT-proBNP-based algorithm is efficient to identify and treat patients with KD, including those with incomplete KD. This study paves the way for a prospective validation trial of the algorithm. © 2016 Japan Pediatric Society.
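    The decision flow described above is sequential: treat on high NT-proBNP, otherwise on coronary artery dilation at onset, otherwise on abnormal CRP or albumin. A minimal sketch; the numeric cut-offs below are placeholders, not the study's validated thresholds.

```python
# Sequential KD treatment-referral logic per the described algorithm.
# All thresholds (bnp_cut, crp_cut, alb_cut) are hypothetical placeholders.

def treat_for_kd(nt_probnp, cad_at_onset, crp, albumin,
                 bnp_cut=200.0, crp_cut=100.0, alb_cut=30.0):
    if nt_probnp >= bnp_cut:
        return "treat: high NT-proBNP"
    if cad_at_onset:
        return "treat: CA dilation at onset"
    if crp >= crp_cut or albumin <= alb_cut:
        return "treat: abnormal CRP/albumin"
    return "observe / re-evaluate"

decision = treat_for_kd(nt_probnp=150, cad_at_onset=False, crp=120, albumin=38)
```

The reported breakdown (64% treated on NT-proBNP alone, 11% on onset CAD, 22% on CRP/albumin) corresponds to which branch fires first for each patient.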

  13. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. 
Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536

  14. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. 
The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
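    The FCM step used for tissue classification assigns each voxel intensity a membership in every cluster, inversely weighted by its distance to the cluster centres. A minimal sketch of the membership update with fuzzifier m = 2; the centres and intensity are toy values, not breast MRI data.

```python
# Fuzzy c-means membership update (fuzzifier m = 2) for one intensity:
# u_i = 1 / sum_j (d_i / d_j)^2, so memberships sum to 1 across clusters.

def fcm_memberships(intensity, centres):
    d = [abs(intensity - c) or 1e-12 for c in centres]   # guard zero distance
    return [1.0 / sum((d[i] / d[j]) ** 2 for j in range(len(centres)))
            for i in range(len(centres))]

u = fcm_memberships(40.0, centres=[0.0, 100.0])   # e.g. fat vs. gland centres
```

A full FCM run alternates this membership update with a membership-weighted recomputation of the centres until convergence; CLIC additionally models a smooth multiplicative bias field inside the same iteration.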

  15. Automated identification of sleep states from EEG signals by means of ensemble empirical mode decomposition and random under sampling boosting.

    PubMed

    Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan

    2017-03-01

    Automatic sleep staging is essential for relieving physicians of the burden of analyzing large volumes of data by visual inspection, and it is a precondition for a practical automated sleep monitoring system. Computerized sleep scoring would also expedite large-scale data analysis in sleep research. Nevertheless, most existing sleep staging methods rely on multichannel recordings or multiple physiological signals, which are uncomfortable for the user and hinder the feasibility of an in-home sleep monitoring device, so a successful and reliable computer-assisted sleep staging scheme has yet to emerge. In this work, we propose a single-channel EEG-based algorithm for computerized sleep scoring. The algorithm decomposes EEG signal segments using Ensemble Empirical Mode Decomposition (EEMD) and extracts various statistical moment based features. The effectiveness of EEMD and of the statistical features is investigated, and statistical analysis is performed for feature selection. A recently proposed classification technique, random undersampling boosting (RUSBoost), is introduced for sleep stage classification; to the best of the authors' knowledge, this is the first use of EEMD in conjunction with RUSBoost. The proposed feature extraction scheme is evaluated with various classification models and against contemporary works in the literature, and its performance is comparable to or better than that of the state-of-the-art methods. The proposed algorithm achieves accuracies of 88.07%, 83.49%, 92.66%, 94.23%, and 98.15% for 6-state to 2-state classification of sleep stages on the Sleep-EDF database. Our experiments also reveal that RUSBoost outperforms the other classification models for the feature extraction framework presented in this work, and that the algorithm achieves high detection accuracy for the sleep states S1 and REM.
Statistical moment based features in the EEMD domain distinguish the sleep states successfully and efficiently. The automated sleep scoring scheme proposed here can reduce the workload of clinicians, contribute to the device implementation of a sleep monitoring system, and benefit sleep research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
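
    The statistical moment features described can be sketched as follows; the IMFs here are random stand-ins for actual EEMD output (computing EEMD itself typically requires a dedicated package such as PyEMD), so this is an illustrative sketch, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def moment_features(imfs):
    """Mean, variance, skewness and kurtosis of each intrinsic mode
    function (IMF), concatenated into one feature vector per EEG epoch."""
    feats = []
    for imf in imfs:
        feats.extend([np.mean(imf), np.var(imf), skew(imf), kurtosis(imf)])
    return np.array(feats)

# Random stand-ins for 4 IMFs of a 30 s, 100 Hz EEG epoch; a real pipeline
# would obtain these from EEMD.
rng = np.random.default_rng(0)
imfs = rng.standard_normal((4, 3000))
features = moment_features(imfs)
print(features.shape)  # 4 IMFs x 4 moments = 16 features
```

The resulting feature vectors would then feed a classifier such as RUSBoost.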

  16. Effect of Watermarking on Diagnostic Preservation of Atherosclerotic Ultrasound Video in Stroke Telemedicine.

    PubMed

    Dey, Nilanjan; Bose, Soumyo; Das, Achintya; Chaudhuri, Sheli Sinha; Saba, Luca; Shafique, Shoaib; Nicolaides, Andrew; Suri, Jasjit S

    2016-04-01

    Embedding of diagnostic and health care information requires secure encryption and watermarking. This paper presents a comprehensive study of the behavior of several well established frequency-domain watermarking algorithms with respect to the preservation of stroke-based diagnostic parameters. Two sets of watermarking algorithms are used for embedding an ownership logo: two correlation-based methods (binary logo hiding) and two singular value decomposition (SVD)-based methods (gray logo hiding). The diagnostic parameters in atherosclerotic plaque ultrasound video are: (a) bulb identification and recognition, which consists of identifying the bulb edge points in the far and near carotid walls; (b) carotid bulb diameter; and (c) carotid lumen thickness along the carotid artery. The test data set consists of carotid atherosclerotic movies taken under an IRB protocol from a University of Indiana Hospital, USA-AtheroPoint™ (Roseville, CA, USA) joint pilot study. ROC (receiver operating characteristic) analysis of the bulb detection process showed an accuracy and a sensitivity of 100% each. The diagnostic preservation (DPsystem) for the SVD-based approach was above 99% with a PSNR (peak signal-to-noise ratio) above 41, indicating negligible degradation of the diagnostic parameters as an effect of watermarking. Thus, the fully automated proposed system proved to be an efficient method for watermarking atherosclerotic ultrasound video for stroke applications.
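
    The PSNR figure reported above is a standard per-frame fidelity measure and can be computed as below; this is a minimal sketch assuming 8-bit frames and a random perturbation standing in for the embedded logo, not the paper's watermarking implementation:

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the watermarked frame
    is closer to the original (values above ~40 dB are near-imperceptible)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(processed, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy 8-bit frame plus a small perturbation standing in for an embedded logo.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))
marked = np.clip(frame + rng.integers(-2, 3, size=frame.shape), 0, 255)
print(round(psnr(frame, marked), 1), "dB")
```

An unmodified frame yields infinite PSNR; stronger embedding lowers the value.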

  17. Gearbox vibration diagnostic analyzer

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes the Gearbox Vibration Diagnostic Analyzer installed in the NASA Lewis Research Center's 500 HP Helicopter Transmission Test Stand to monitor gearbox testing. The vibration of the gearbox is analyzed using diagnostic algorithms to calculate a parameter indicating damaged components.

  18. Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy

    PubMed Central

    McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.

    2010-01-01

    Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific to combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operating characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369
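
    ROC-AUC values like those reported above can be computed nonparametrically from per-lesion scores via the Mann-Whitney formulation; the score values below are hypothetical, purely to illustrate the statistic:

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """ROC area via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties counted as half)."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical per-lesion scores: dysplastic/malignant vs benign.
auc = roc_auc([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1])
print(auc)  # 15 of 16 pairs ordered correctly -> 0.9375
```

An AUC of 0.5 corresponds to chance, 1.0 to perfect separation.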

  19. The sensitivity and negative predictive value of a pediatric cervical spine clearance algorithm that minimizes computerized tomography.

    PubMed

    Arbuthnot, Mary; Mooney, David P

    2017-01-01

    It is crucial to identify cervical spine injuries while minimizing ionizing radiation. This study analyzes the sensitivity and negative predictive value of a pediatric cervical spine clearance algorithm. We performed a retrospective review of all children <21 years old who were admitted following blunt trauma and underwent cervical spine clearance utilizing our institution's cervical spine clearance algorithm over a 10-year period. Age, gender, International Classification of Diseases 9th Edition diagnosis codes, presence or absence of cervical collar on arrival, Injury Severity Score, and type of cervical spine imaging obtained were extracted from the trauma registry and electronic medical record. Descriptive statistics were used, and the sensitivity and negative predictive value of the algorithm were calculated. Approximately 125,000 children were evaluated in the Emergency Department and 11,331 were admitted. Of the admitted children, 1023 patients arrived in a cervical collar without advanced cervical spine imaging and were evaluated using the cervical spine clearance algorithm. Algorithm sensitivity was 94.4% and the negative predictive value was 99.9%. There was one missed injury, a spinous process tip fracture in a teenager maintained in a collar. Our algorithm was associated with a low missed injury rate and low CT utilization rate, even in children <3 years old. Level of evidence: IV. Published by Elsevier Inc.
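
    The reported sensitivity and negative predictive value follow directly from confusion-matrix counts. A minimal sketch, using illustrative counts (not the study's raw table) chosen to reproduce figures close to the reported 94.4% and 99.9% given the single missed injury described above:

```python
def sensitivity(tp, fn):
    """Fraction of children with a cervical spine injury whom the algorithm flags."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Fraction of negative (cleared) screens that are truly injury-free."""
    return tn / (tn + fn)

# Illustrative counts only: 17 detected injuries, 1 missed injury,
# 1005 correctly cleared injury-free children.
print(round(sensitivity(17, 1) * 100, 1), "% sensitivity")
print(round(npv(1005, 1) * 100, 1), "% NPV")
```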

  20. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subsets expectation maximization (OSEM) algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
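
    With a single subset, the OSEM update used for the sinogram reconstruction reduces to the classic MLEM iteration. A toy sketch on a hypothetical 3-bin, 2-voxel system (an illustrative stand-in, not the authors' geometry or code):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction, the single-subset limit of OSEM:
    x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1), starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj = np.where(proj == 0, 1e-12, proj) # guard against divide-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy system matrix: 3 detector bins viewing 2 voxels.
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
truth = np.array([2.0, 4.0])
y = A @ truth                                   # noiseless projections
rec = mlem(A, y)
print(np.round(rec, 3))
```

On noiseless, consistent data the iteration converges to the true activity; OSEM accelerates this by cycling the same update over projection subsets.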

  1. Creating normograms of dural sinuses in healthy persons using computer-assisted detection for analysis and comparison of cross-section dural sinuses in the brain.

    PubMed

    Anconina, Reut; Zur, Dinah; Kesler, Anat; Lublinsky, Svetlana; Toledano, Ronen; Novack, Victor; Benkobich, Elya; Novoa, Rosa; Novic, Evelyne Farkash; Shelef, Ilan

    2017-06-01

    Dural sinuses vary in size and shape in many pathological conditions with abnormal intracranial pressure, yet size and shape normograms of dural brain sinuses are not available. The creation of such normograms may enable computer-assisted comparison to pathologic exams and facilitate diagnosis. The purpose of this study was to quantitatively evaluate normal magnetic resonance venography (MRV) studies in order to create normograms of dural sinuses using a computerized algorithm for vessel cross-sectional analysis. This was a retrospective analysis of MRV studies of 30 healthy persons. Data were analyzed using a specially developed Matlab algorithm for vessel cross-sectional analysis, and the cross-sectional area and shape measurements were evaluated to create normograms. Mean cross-sectional size was 53.27±13.31 for the right transverse sinus (TS), 46.87±12.57 for the left TS (p=0.089) and 36.65±12.38 for the superior sagittal sinus. Normograms were created. The distribution of cross-sectional areas along the vessels showed distinct patterns and a parallel course for the 25th, 50th (median) and 75th percentiles. In conclusion, using a novel computerized method for vessel cross-sectional analysis, we were able to quantitatively characterize the dural sinuses of healthy persons and create normograms. Copyright © 2017 Elsevier Ltd. All rights reserved.
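
    A percentile normogram of the kind described can be sketched from a matrix of per-subject cross-sectional area profiles; the cohort values below are simulated (loosely echoing the reported means), not the study's data:

```python
import numpy as np

def normogram(profiles, percentiles=(25, 50, 75)):
    """Percentile curves across subjects for vessel cross-sectional areas
    sampled at matched positions along a sinus."""
    data = np.asarray(profiles, dtype=float)   # shape: (subjects, positions)
    return {p: np.percentile(data, p, axis=0) for p in percentiles}

# Simulated cohort: 30 healthy profiles, 20 positions along a transverse sinus.
rng = np.random.default_rng(42)
profiles = 50.0 + 10.0 * rng.standard_normal((30, 20))
ng = normogram(profiles)
print(sorted(ng), ng[50].shape)
```

A patient's measured profile could then be compared position-by-position against these percentile bands.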

  2. Validated methods for identifying tuberculosis patients in health administrative databases: systematic review.

    PubMed

    Ronald, L A; Ling, D I; FitzGerald, J M; Schwartzman, K; Bartlett-Esquilant, G; Boivin, J-F; Benedetti, A; Menzies, D

    2017-05-01

    An increasing number of studies are using health administrative databases for tuberculosis (TB) research. However, there are limitations to using such databases for identifying patients with TB. To summarise validated methods for identifying TB in health administrative databases, we conducted a systematic literature search in two databases (Ovid Medline and Embase, January 1980-January 2016). We limited the search to diagnostic accuracy studies assessing algorithms derived from drug prescription, International Classification of Diseases (ICD) diagnostic code and/or laboratory data for identifying patients with TB in health administrative databases. The search identified 2413 unique citations. Of the 40 full-text articles reviewed, we included 14 in our review. Algorithms and diagnostic accuracy outcomes to identify TB varied widely across studies, with positive predictive values ranging from 1.3% to 100% and sensitivities ranging from 20% to 100%. Diagnostic accuracy measures of algorithms using out-patient, in-patient and/or laboratory data to identify patients with TB in health administrative databases vary widely across studies. Sole use of ICD diagnostic codes to identify TB, particularly when using out-patient records, is likely to lead to incorrect estimates of case numbers, given the current limitations of ICD systems in coding TB.

  3. CT diagnosis of a clinically unsuspected acute appendicitis complicating infectious mononucleosis.

    PubMed

    Zissin, R; Brautbar, O; Shapiro-Feinberg, M

    2001-01-01

    Acute appendicitis is a rare complication of infectious mononucleosis (IM). We describe a patient with IM and splenic rupture with a computerized tomography (CT) diagnosis of acute appendicitis during the acute phase of the infectious disease. Diagnostic imaging features of acute appendicitis were found on an abdominal CT performed for the evaluation of postoperative fever. Histologic examination confirmed the CT diagnosis of the clinically unsuspected acute appendicitis. Our case is unique both for the rarity of this complication and the lack of clinical symptoms.

  4. [Treatment of inflammatory complications of colonic diverticular disease in an emergency surgical care hospital].

    PubMed

    Reznitsky, P A; Yartsev, P A; Shavrina, N V

    To assess the effectiveness of minimally invasive and laparoscopic technologies in the treatment of inflammatory complications of colonic diverticular disease. The study included 150 patients divided into a control group and a main group. The workup included ultrasound, X-ray examination, and abdominal computerized tomography. In the main group, a standardized treatment algorithm incorporating minimally invasive and laparoscopic technologies was used: 79 patients underwent conservative treatment, minimally invasive procedures (ultrasound-guided percutaneous drainage of abscesses), or laparoscopic surgery, which was successful in 78 (98.7%) patients. The standardized algorithm reduced treatment time, the incidence of postoperative complications, mortality, and the risk of recurrent inflammatory complications of colonic diverticular disease; postoperative quality of life was also improved.

  5. Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection

    NASA Astrophysics Data System (ADS)

    Brokish, Jeffrey; Sack, Paul; Bresler, Yoram

    2010-04-01

    In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone beam geometry combines speedup through algorithmic improvements provided by the hierarchical backprojection algorithm with speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and a relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated to a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology, which achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with small footprint, high reliability, and affordable cost.
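
    For context, the conventional backprojection baseline that hierarchical methods accelerate can be sketched in simplified 2D parallel-beam form; this illustrative geometry is an assumption and is not the paper's circular cone-beam FHBP implementation:

```python
import numpy as np

def backproject(sinogram, angles, n):
    """Conventional pixel-driven backprojection for 2D parallel-beam geometry;
    n_views * n^2 constant-cost pixel updates give the O(N^3) baseline cost."""
    img = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    for proj, theta in zip(sinogram, angles):
        # nearest-neighbour detector coordinate of each pixel for this view
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, proj.size - 1)
        img += proj[idx]
    return img / len(angles)

angles = np.linspace(0.0, np.pi, 90, endpoint=False)
sino = np.ones((90, 32))        # uniform toy sinogram
img = backproject(sino, angles, 32)
print(img.shape)
```

The hierarchical algorithm reduces this cost by recursively aggregating views at progressively coarser image resolutions.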

  6. A new full-field digital mammography system with and without the use of an advanced post-processing algorithm: comparison of image quality and diagnostic performance.

    PubMed

    Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young

    2014-01-01

    To compare a new full-field digital mammography (FFDM) system with and without use of an advanced post-processing algorithm in terms of image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammograms were randomly reviewed by five experienced radiologists, who evaluated image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference. Three aspects of image quality (overall quality, contrast, and noise) of SMB were significantly superior to those of SMA (p < 0.05), and SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may thus improve image quality in FFDM, with better image preference, compared with FFDM without the software.

  7. Colonoscopy video quality assessment using hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information: approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
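
    Frame filtering with an HMM of the kind described can be sketched as Viterbi decoding over per-frame quality observations; the two-state model and all probabilities below are illustrative assumptions, not the authors' trained parameters:

```python
import numpy as np

def viterbi(obs, log_trans, log_emit, log_init):
    """Most likely hidden state sequence given a sequence of discrete
    per-frame observations (Viterbi decoding in log space)."""
    T, n_states = len(obs), log_trans.shape[0]
    dp = np.empty((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    dp[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # scores[from, to]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden states (0 = informative frame, 1 = uninformative frame) and two
# observation symbols (0 = sharp, 1 = blurry); probabilities are illustrative.
log_trans = np.log([[0.8, 0.2], [0.2, 0.8]])
log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])
log_init = np.log([0.5, 0.5])
states = viterbi([0, 0, 1, 1, 1, 0], log_trans, log_emit, log_init)
print(states)
```

The sticky transition probabilities smooth isolated noisy observations, so runs of uninformative frames are discarded as blocks rather than frame by frame.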

  8. New method for detection of gastric cancer by hyperspectral imaging: a pilot study

    NASA Astrophysics Data System (ADS)

    Kiyotoki, Shu; Nishikawa, Jun; Okamoto, Takeshi; Hamabe, Kouichi; Saito, Mari; Goto, Atsushi; Fujita, Yusuke; Hamamoto, Yoshihiko; Takeuchi, Yusuke; Satori, Shin; Sakaida, Isao

    2013-02-01

    We developed a new, easy, and objective method to detect gastric cancer using hyperspectral imaging (HSI) technology, which combines spectroscopy and imaging. A total of 16 gastroduodenal tumors removed by endoscopic resection or surgery from 14 patients at Yamaguchi University Hospital, Japan, were recorded using a hyperspectral camera (HSC) equipped with HSI technology. Corrected spectral reflectance was obtained from 10 samples of normal mucosa and 10 samples of tumors for each case. The 16 cases were divided into eight training cases (160 training samples) and eight test cases (160 test samples). We established a diagnostic algorithm with the training samples and evaluated it with the test samples. The diagnostic capability of the algorithm for each tumor was validated, and enhancement of tumors by image processing using the HSC was evaluated. The diagnostic algorithm used the 726-nm wavelength, with a cutoff point established from the training samples. The sensitivity, specificity, and accuracy rates of the algorithm's diagnostic capability in the test samples were 78.8% (63/80), 92.5% (74/80), and 85.6% (137/160), respectively. Tumors in HSC images of 13 (81.3%) cases were well enhanced by image processing. Differences in spectral reflectance between tumors and normal mucosa suggested that tumors can be clearly distinguished from background mucosa with HSI technology.
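
    A single-wavelength cutoff classifier of the kind described can be sketched as a threshold rule plus confusion-matrix metrics; the reflectance values, the cutoff, and the assumed direction (tumors reflecting less at 726 nm) are all hypothetical, not taken from the paper:

```python
import numpy as np

def threshold_diagnosis(reflectance, cutoff, tumor_below=True):
    """Label samples as tumor from single-band reflectance with a scalar cutoff."""
    r = np.asarray(reflectance, dtype=float)
    return r < cutoff if tumor_below else r >= cutoff

def sens_spec_acc(pred, truth):
    """Sensitivity, specificity and accuracy from binary predictions."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / truth.size

# Hypothetical 726-nm reflectance values for 3 tumor and 3 normal samples.
refl = [0.30, 0.35, 0.42, 0.55, 0.60, 0.65]
truth = [1, 1, 1, 0, 0, 0]
pred = threshold_diagnosis(refl, cutoff=0.50)
print(sens_spec_acc(pred, truth))
```

In practice the cutoff would be chosen on the training samples and the metrics reported on held-out test samples, as in the study.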

  9. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    PubMed

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions, respectively. Menière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion to correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: temporal features of vertigo and presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.

  10. Anonymity and Electronics: Adapting Preparation for Radiology Resident Examination.

    PubMed

    Chapman, Teresa; Reid, Janet R; O'Conner, Erin E

    2017-06-01

    Diagnostic radiology resident assessment has evolved from a traditional oral examination to computerized testing. Teaching faculty struggle to reconcile the differences between traditional teaching methods and residents' new preferences for computerized testing models generated by new examination styles. We aim to summarize the collective experiences of senior residents at three different teaching hospitals who participated in case review sessions using a computer-based, interactive, anonymous teaching tool, rather than the Socratic method. Feedback was collected from radiology residents following participation in a senior resident case review session using Nearpod, which allows residents to anonymously respond to the teaching material. Subjective resident feedback was uniformly enthusiastic. Ninety percent of residents favor a case-based board review incorporating multiple-choice questions, and 94% favor an anonymous response system. Nearpod allows for inclusion of multiple-choice questions while also providing direct feedback to the teaching faculty, helping to direct the instruction and clarify residents' gaps in knowledge before the Core Examination. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  11. Privacy considerations in the context of an Australian observational database.

    PubMed

    Duszynski, K M; Beilby, J J; Marley, J E; Walker, D C; Pratt, N L

    2001-12-01

    Observational databases are increasingly acknowledged for their value in clinical investigation. Australian general practice in particular presents an exciting opportunity to examine treatment in a natural setting. The paper explores issues such as privacy and confidentiality--foremost considerations when conducting this form of pharmacoepidemiological research. Australian legislation is currently addressing these exact issues in order to establish clear directives regarding ethical concerns. The development of a pharmacoepidemiological database arising from the integration of computerized Australian general practice records is described, in addition to the challenges associated with creating a database that protects patient privacy. The database, known as 'Medic-GP', presently contains more than 950,000 clinical notes (including consultations, pathology, diagnostic imaging and adverse reactions) over a 5-year time period and relates to 55,000 patients. The paper then details a retrospective study that utilized the database to examine the interaction between antibiotic prescribing and patient outcomes from a community perspective, following a policy intervention. This study illustrates the application of computerized general practice records in research.

  12. Contrast-enhanced multidetector computerized tomography for odontogenic cysts and cystic-appearing tumors of the jaws: is it useful?

    PubMed

    Kakimoto, Naoya; Chindasombatjaroen, Jira; Tomita, Seiki; Shimamoto, Hiroaki; Uchiyama, Yuka; Hasegawa, Yoko; Kishino, Mitsunobu; Murakami, Shumei; Furukawa, Souhei

    2013-01-01

    The purpose of this study was to investigate the usefulness of computerized tomography (CT), particularly contrast-enhanced CT, in differentiation of jaw cysts and cystic-appearing tumors. We retrospectively analyzed contrast-enhanced CT images of 90 patients with odontogenic jaw cysts or cystic-appearing tumors. The lesion size and CT values were measured and the short axis to long axis (S/L) ratio, contrast enhancement (CE) ratio, and standard deviation ratio were calculated. The lesion size and the S/L ratio of keratocystic odontogenic tumors were significantly different from those of radicular cysts and follicular cysts. There were no significant differences in the CE ratio among the lesions. Multidetector CT provided diagnostic information about the size of odontogenic cysts and cystic-appearing tumors of the jaws that was related to the lesion type, but showed no relation between CE ratio and the type of these lesions. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. [Some approaches to the countermeasure system for a Mars exploration mission].

    PubMed

    Kozlovskaia, I B; Egorov, A D; Son'kin, V D

    2010-01-01

    This article discusses the physiological and methodological principles of organizing the crew's training process, and of computerizing it, during a Mars flight under conditions of autonomous crew activity: interaction with onboard medical facilities, self-monitoring of crew health, performance of preventive measures and diagnostic studies, and, if necessary, treatment. In ultra-long autonomous flights, ground-based monitoring of crew members' condition becomes substantially more difficult, which argues for computerizing the monitoring of crew health, including the conduct of preventive countermeasures. The situation is further complicated by the impossibility of receiving and transmitting the necessary information in real time and of returning the crew to Earth in an emergency. Under these conditions, the tasks of physical countermeasures should be addressed by an onboard automated expert system that manages the training of each crew member with the aim of optimizing their psychophysical condition.

  14. A collaborative approach to developing an electronic health record phenotyping algorithm for drug-induced liver injury

    PubMed Central

    Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua

    2013-01-01

    Objective To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). Methods We analyzed types and causes of differences in DILI case definitions provided by two institutions—Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Results Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Discussion Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Conclusions Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms. PMID:23837993

  15. Anismus: the cause of constipation? Results of investigation and treatment.

    PubMed

    Duthie, G S; Bartolo, D C

    1992-01-01

    Anismus, or failure of the somatic sphincter apparatus to relax at defecation, has been implicated as a major contributor to the problem of obstructed defecation. Current diagnostic methods depend on laboratory measurements of attempted defecation, and the most complex of these, dynamic proctography, has been the mainstay of diagnosis. Using a new computerized ambulatory method of recording sphincter function in these patients at home, we report an 80% reduction in our diagnostic rate, suggesting that conventional tests fail to diagnose this condition accurately, probably because they poorly represent the natural physiology of defecation. Treatment of this distressing condition is more complex, and a variety of surgical and pharmacological measures have failed. Biofeedback retraining of anorectal function in these patients has been very successful and represents the management of choice.

  16. What is the actual epidemiology of familial hypercholesterolemia in Italy? Evidence from a National Primary Care Database.

    PubMed

    Guglielmi, Valeria; Bellia, Alfonso; Pecchioli, Serena; Medea, Gerardo; Parretti, Damiano; Lauro, Davide; Sbraccia, Paolo; Federici, Massimo; Cricelli, Iacopo; Cricelli, Claudio; Lapi, Francesco

    2016-11-15

    There are some inconsistencies in prevalence estimates of familial hypercholesterolemia (FH) in the general population across Europe, owing to variable application of its diagnostic criteria. We aimed to investigate the epidemiology of FH in Italy by applying the Dutch Lipid Clinic Network (DLCN) score, and two alternative diagnostic algorithms, to a primary care database. We performed a retrospective population-based study using the Health Search IMS Health Longitudinal Patient Database (HSD), including patients active (alive and currently registered with their general practitioners (GPs)) on December 31, 2014. Cases of FH were identified by applying the DLCN score. Two further algorithms, based on either ICD9CM coding for FH or some of the clinical items adopted by the DLCN, were tested against the DLCN itself as the gold standard. We estimated a prevalence of 0.01% for "definite" and 0.18% for "definite" plus "probable" cases as per the DLCN. Algorithms 1 and 2 reported an FH prevalence of 0.9% and 0.13%, respectively. Both algorithms showed consistent specificity (1: 99.10%; 2: 99.9%) against the DLCN, but Algorithm 2 identified true positives considerably better (sensitivity=85.90%) than Algorithm 1 (sensitivity=10.10%). The application of the DLCN or valid diagnostic alternatives in the Italian primary care setting provides estimates of FH prevalence consistent with those reported in other screening studies in Caucasian populations. These diagnostic criteria should therefore be fostered among GPs. In the perspective of new FH therapeutic options, the epidemiological picture of FH is even more relevant to foresee the costs and to plan affordable reimbursement programs in Italy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Diagnostic Utility of the ADI-R and DSM-5 in the Assessment of Latino Children and Adolescents

    ERIC Educational Resources Information Center

    Magaña, Sandy; Vanegas, Sandra B.

    2017-01-01

    Latino children in the US are systematically underdiagnosed with Autism Spectrum Disorder (ASD); therefore, it is important that recent changes to the diagnostic process do not exacerbate this pattern of under-identification. Previous research has found that the Autism Diagnostic Interview-Revised (ADI-R) algorithm, based on the Diagnostic and…

  18. Current challenges in diagnostic imaging of venous thromboembolism.

    PubMed

    Huisman, Menno V; Klok, Frederikus A

    2015-01-01

    Because the clinical diagnosis of deep-vein thrombosis and pulmonary embolism is nonspecific, integrated diagnostic approaches for patients with suspected venous thromboembolism have been developed over the years, involving both non-invasive bedside tools (clinical decision rules and D-dimer blood tests) for patients with low pretest probability and diagnostic techniques (compression ultrasound for deep-vein thrombosis and computed tomography pulmonary angiography for pulmonary embolism) for those with a high pretest probability. This combination has led to standardized diagnostic algorithms with proven safety for excluding venous thrombotic disease. At the same time, it has become apparent that, as a result of the natural history of venous thrombosis, there are special patient populations in which the current standard diagnostic algorithms are not sufficient. In this review, we present 3 evidence-based patient cases to underline recent developments in the imaging diagnosis of venous thromboembolism. © 2015 by The American Society of Hematology. All rights reserved.

  19. [Chronic diarrhoea: Definition, classification and diagnosis].

    PubMed

    Fernández-Bañares, Fernando; Accarino, Anna; Balboa, Agustín; Domènech, Eugeni; Esteve, Maria; Garcia-Planella, Esther; Guardiola, Jordi; Molero, Xavier; Rodríguez-Luna, Alba; Ruiz-Cerulla, Alexandra; Santos, Javier; Vaquero, Eva

    2016-10-01

    Chronic diarrhoea is a common presenting symptom in both primary care medicine and in specialized gastroenterology clinics. It is estimated that >5% of the population has chronic diarrhoea and nearly 40% of these patients are older than 60 years. Clinicians often need to select the best diagnostic approach to these patients and choose between the multiple diagnostic tests available. In 2014 the Catalan Society of Gastroenterology formed a working group with the main objectives of creating diagnostic algorithms based on clinical practice and of evaluating the diagnostic tests and the scientific evidence available for their use. The GRADE system was used to classify scientific evidence and strength of recommendations. The consensus document contains 28 recommendations and 6 diagnostic algorithms. The document also describes criteria for referral from primary to specialized care. Copyright © 2015 Elsevier España, S.L.U. y AEEH y AEG. All rights reserved.

  20. The diagnostic management of upper extremity deep vein thrombosis: A review of the literature.

    PubMed

    Kraaijpoel, Noémie; van Es, Nick; Porreca, Ettore; Büller, Harry R; Di Nisio, Marcello

    2017-08-01

    Upper extremity deep vein thrombosis (UEDVT) accounts for 4% to 10% of all cases of deep vein thrombosis. UEDVT may present with localized pain, erythema, and swelling of the arm, but may also be detected incidentally by diagnostic imaging tests performed for other reasons. Prompt and accurate diagnosis is crucial to prevent pulmonary embolism and long-term complications such as post-thrombotic syndrome of the arm. Unlike the diagnostic management of deep vein thrombosis (DVT) of the lower extremities, which is well established, the work-up of patients with clinically suspected UEDVT remains uncertain, with limited evidence from small studies of poor methodological quality. To date, only one prospective study has evaluated the use of an algorithm, similar to the one used for DVT of the lower extremities, for the diagnostic work-up of clinically suspected UEDVT. The algorithm combined clinical probability assessment, D-dimer testing, and ultrasonography, and appeared to safely and effectively exclude UEDVT. However, before its use in routine clinical practice can be recommended, external validation of this strategy and improvements in efficiency are needed, especially in high-risk subgroups in whom the performance of the algorithm appeared to be suboptimal, such as hospitalized or cancer patients. In this review, we critically assess the accuracy and efficacy of current diagnostic tools and provide clinical guidance for the diagnostic management of clinically suspected UEDVT. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Diagnostic Performance of SRU and ATA Thyroid Nodule Classification Algorithms as Tested With a 1 Million Virtual Thyroid Nodule Model.

    PubMed

    Boehnke, Mitchell; Patel, Nayana; McKinney, Kristin; Clark, Toshimasa

    The Society of Radiologists in Ultrasound (SRU 2005) and the American Thyroid Association (ATA 2009 and ATA 2015) have published algorithms for thyroid nodule management. Kwak et al. and other groups have described models that estimate thyroid nodules' malignancy risk. The aim of our study was to use Kwak's model to evaluate the tradeoffs between sensitivity and specificity of the SRU 2005, ATA 2009, and ATA 2015 management algorithms. One million thyroid nodules were modeled in MATLAB. Ultrasound characteristics were modeled after published data. Malignancy risk was estimated per Kwak's model and assigned as a binary variable. All nodules were then assessed using the published management algorithms. With the malignancy variable as condition positivity and the algorithms' recommendation for FNA as test positivity, diagnostic performance was calculated. Modeled nodule characteristics mimic those of Kwak et al. Overall, 12.8% of nodules were assigned as malignant (malignancy risk range of 2.0-98%). FNA was recommended for 41% of nodules by SRU 2005, 66% by ATA 2009, and 82% by ATA 2015. Sensitivity and specificity were significantly different (p < 0.0001): 49% and 60% for SRU 2005; 81% and 36% for ATA 2009; and 95% and 20% for ATA 2015. The SRU 2005, ATA 2009, and ATA 2015 algorithms are used routinely in clinical practice to determine whether thyroid nodule biopsy is indicated. We demonstrate significant differences in these algorithms' diagnostic performance, which result in a compromise between sensitivity and specificity. Copyright © 2017 Elsevier Inc. All rights reserved.
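
The virtual-nodule approach can be caricatured in a few lines: draw a malignancy risk per nodule, realize malignancy as a Bernoulli outcome, apply a biopsy rule, and tabulate performance. This is a hedged stand-in (uniform risks and a simple cutoff rule, not Kwak's regression model or the actual guideline criteria):

```python
import random

random.seed(7)

def simulate(n_nodules, fna_cutoff):
    """Monte Carlo estimate of sensitivity/specificity for a
    hypothetical rule that recommends FNA above a risk cutoff."""
    tp = fp = fn = tn = 0
    for _ in range(n_nodules):
        risk = random.random()                 # per-nodule malignancy risk
        malignant = random.random() < risk     # Bernoulli realization
        recommend_fna = risk > fna_cutoff
        if malignant and recommend_fna:
            tp += 1
        elif malignant:
            fn += 1
        elif recommend_fna:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# A laxer cutoff trades specificity for sensitivity, the pattern the
# study reports across SRU 2005 -> ATA 2009 -> ATA 2015.
strict = simulate(100_000, 0.75)
lax = simulate(100_000, 0.25)
```

Lowering the cutoff raises sensitivity and lowers specificity, which is the tradeoff the study quantifies across the three published algorithms.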

  2. Neuropsychological Measures in the Diagnosis of ADHD in Preschool: Can Developmental Research Inform Diagnostic Practice?

    PubMed

    Merkt, Julia; Siniatchkin, Michael; Petermann, Franz

    2016-03-22

    The diagnosis of ADHD in preschool is challenging. Behavioral ratings are less reliable, but the value of neuropsychological tests in the diagnosis of ADHD has been debated. This article provides an overview of neuropsychological measures utilized in preschoolers with ADHD (3-5 years). In addition, the manuscript discusses the extent to which these measures have been tested for their diagnostic capacity. The diagnostic utility of computerized continuous performance tests and working memory subtests from IQ-batteries has been demonstrated in a number of studies by assessing their psychometric properties, sensitivity, and specificity. However, findings from developmental and basic research attempting to describe risk factors that explain variance in ADHD show the most consistent associations of ADHD with measures of delay aversion. Results from developmental research could benefit studies that improve ADHD diagnosis at the individual level. It might be helpful to consider testing as a structured situation for behavioral observation by the clinician. © The Author(s) 2016.

  3. Radiation levels and image quality in patients undergoing chest X-ray examinations

    NASA Astrophysics Data System (ADS)

    de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto

    2017-11-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended with the aim of patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computerized radiography processing techniques. The x-ray beam parameters were selected and "doses" (specifically the entrance surface and incident air kerma) were evaluated based on images approved under European criteria during postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, covering 200 PA and 100 LAT incidences. Results showed that dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.

  4. Supervisory control and diagnostics system for the mirror fusion test facility: overview and status 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.

    1981-01-01

    The Mirror Fusion Test Facility (MFTF) is a complex facility requiring a highly-computerized Supervisory Control and Diagnostics System (SCDS) to monitor and provide control over ten subsystems, three of which require true process control. SCDS will provide physicists with a method of studying machine and plasma behavior by acquiring and processing up to four megabytes of plasma diagnostic information every five minutes. A high degree of availability and throughput is provided by a distributed computer system (nine 32-bit minicomputers on shared memory). Data, distributed across SCDS, is managed by a high-bandwidth Distributed Database Management System. The MFTF operators' control room consoles use color television monitors with touch sensitive screens; this is a totally new approach. The method of handling deviations from normal machine operation, and how the operator should be notified and assisted in the resolution of problems, has been studied and a system designed.

  5. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    PubMed

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma risk index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
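
The back end of the pipeline (ranking a feature by a t-statistic, then a kNN decision) can be sketched generically. The feature values and labels below are invented for illustration; the paper's features come from the LM filter bank:

```python
import math

def t_statistic(xs, ys):
    """Welch t-statistic used to rank a feature by class separability."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return abs(mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors."""
    dists = sorted(
        (math.dist(x, query), lbl) for x, lbl in zip(train, labels)
    )
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)
```

A large t-statistic flags a feature whose class means are well separated relative to their variances; kNN then decides "normal" versus "glaucomatous" from the selected feature vector.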

  6. Character recognition using a neural network model with fuzzy representation

    NASA Technical Reports Server (NTRS)

    Tavakoli, Nassrin; Seniw, David

    1992-01-01

    The degree to which digital images are recognized correctly by computerized algorithms is highly dependent upon the representation and the classification processes. Fuzzy techniques play an important role in both processes. In this paper, the role of fuzzy representation and classification on the recognition of digital characters is investigated. An experimental Neural Network model with application to character recognition was developed. Through a set of experiments, the effect of fuzzy representation on the recognition accuracy of this model is presented.

  7. Embedded control system for computerized franking machine

    NASA Astrophysics Data System (ADS)

    Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.

    2007-12-01

    This paper presents a novel control system for a franking machine. A methodology for operating the franking machine through functional controls consisting of connection, configuration, and the franking electromechanical drive is studied. A set of enabling technologies for synthesizing postage-management software architectures on microprocessor-based embedded systems is proposed. The cryptographic algorithm applied to mail items is analyzed to enhance the accountability and security of the postal indicia. The study indicated that the franking machine offers reliability, performance, and flexibility in printing mail items.

  8. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites (human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues) in nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma of 85, 89, 92, and 80%, respectively.
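
The validation scheme used here, leave-one-out cross-validation, is easy to sketch. For brevity the sketch plugs in a nearest-centroid stand-in where the paper uses linear discriminant analysis; the points and labels are invented:

```python
def leave_one_out(points, labels, classify):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and record whether the held-out sample is
    classified correctly. Returns the overall accuracy."""
    correct = 0
    for i in range(len(points)):
        train = points[:i] + points[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if classify(train, train_labels, points[i]) == labels[i]:
            correct += 1
    return correct / len(points)

def nearest_centroid(train, train_labels, query):
    """Stand-in classifier: assign the class whose centroid is closest
    (the paper uses linear discriminant analysis instead)."""
    best, best_dist = None, float("inf")
    for cls in set(train_labels):
        pts = [p for p, lbl in zip(train, train_labels) if lbl == cls]
        centroid = tuple(sum(c) / len(pts) for c in zip(*pts))
        d = sum((a - b) ** 2 for a, b in zip(query, centroid))
        if d < best_dist:
            best, best_dist = cls, d
    return best
```

With only 50 tissue sites, leave-one-out makes maximal use of the data while keeping each test sample out of its own training set, which is why small-sample spectroscopy studies favor it.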

  9. [Utility of methoxy isobutyl isonitrile (MIBI) scintigraphy, ultrasound and computerized axial tomography in preoperative topographic diagnosis of hyperparathyroidism].

    PubMed

    Gómez Palacios, Angel; Gómez Zábala, Jesús; Gutiérrez, María Teresa; Expósito, Amaya; Barrios, Borja; Zorraquino, Angel; Taibo, Miguel Angel; Iturburu, Ignacio

    2006-12-01

    1. To assess the sensitivity of scintigraphy using methoxy isobutyl isonitrile (MIBI). 2. To compare its resolution with that of ultrasound (US) and computerized axial tomography (CAT). 3. To use its diagnostic reliability to determine whether selective approaches can be used to treat hyperparathyroidism (HPT). A study of 76 patients who underwent surgery for HPT between 1996 and 2005 was performed. MIBI scintigraphy and cervical US were used for whole-body scanning in all patients; CAT was used in 47 patients. Intraoperative and postoperative biopsies were used for final evaluation of the tests, after visualization and surgical extirpation. The results of scintigraphy were positive in 65 patients (85.52%). The diagnosis was correct in all of the single images. Multiple images were due to hyperplasia and parathyroid adenomas with thyroid disease (5.2%). Three images, incorrectly classified as negative (3.94%), were positive. The sensitivity of US was 63% and allowed detection of three MIBI-negative adenomas (4%). CAT was less sensitive (55%), but detected a further three MIBI-negative adenomas (4%). 1. The sensitivity of MIBI reached 89.46%. In the absence of thyroid nodules, MIBI diagnosed 100% of single lesions. Pathological thyroid processes produced false-positive results (5.2%) and there were diagnostic errors (4%). 2. MIBI scintigraphy was more sensitive than US and CAT. 3. Positive, single image scintigraphy allows a selective cervical approach. US and CAT may help to save a further 8% of patients (with negative scintigraphy).

  10. Online Calibration of Polytomous Items Under the Generalized Partial Credit Model

    PubMed Central

    Zheng, Yi

    2016-01-01

    Online calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through the continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. A variety of findings were also revealed for the interaction effects of the included factors, and recommendations were made accordingly. PMID:29881063
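
The extension centers on the generalized partial credit model's category response function, which can be written down directly. A minimal sketch (standard GPCM parameterization; the specific item parameters in any test would come from calibration):

```python
import math

def gpcm_probs(theta, a, steps):
    """Category probabilities under the generalized partial credit model.

    theta : examinee ability
    a     : item discrimination
    steps : step parameters b_1..b_m; an item with m steps is scored
            in m+1 categories (0..m).
    """
    # Cumulative logits: z_0 = 0, z_k = sum_{v<=k} a * (theta - b_v).
    z = [0.0]
    for b in steps:
        z.append(z[-1] + a * (theta - b))
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]
```

The probabilities sum to 1, and raising theta shifts mass toward higher score categories; fitting a and the b_k from accumulating pretest responses is the calibration task the study addresses.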

  11. Diagnosis of autism through EEG processed by advanced computational algorithms: A pilot study.

    PubMed

    Grossi, Enzo; Olivieri, Chiara; Buscema, Massimo

    2017-04-01

    The Multi-Scale Ranked Organizing Map coupled with the Implicit Function as Squashing Time algorithm (MS-ROM/I-FAST) is a new, complex system based on artificial neural networks (ANNs) able to extract features of interest from computerized EEG through the analysis of a few minutes of recording, without any preliminary pre-processing. A previously published proof-of-concept study showed accuracy values ranging from 94% to 98% in discerning subjects with Mild Cognitive Impairment and/or Alzheimer's disease from healthy elderly people. The presence of deviant patterns in simple resting-state EEG recordings in autism, consistent with the atypical organization of the cerebral cortex in this condition, prompted us to apply this powerful analytical system in search of an EEG signature of the disease. The aim of the study was to assess how effectively this methodology distinguishes subjects with autism from typically developing ones. Fifteen definite ASD subjects (13 males; 2 females; age range 7-14; mean age 10.4) and ten typically developing subjects (4 males; 6 females; age range 7-12; mean age 9.2) were included in the study. Patients received autism diagnoses according to DSM-V criteria, subsequently confirmed by the ADOS scale. A segment of artefact-free EEG lasting 60 seconds was used to compute input values for subsequent analyses. MS-ROM/I-FAST, coupled with a well-documented evolutionary system able to select predictive features (TWIST), created an invariant feature vector from the EEG on which supervised machine learning systems acted as blind classifiers. The overall predictive capability of the machine learning systems in sorting autistic cases from normal controls amounted consistently to 100% with all systems employed under a training-testing protocol, and to 84%-92.8% under a leave-one-out protocol. The similarities among the ANN weight matrices, measured with appropriate algorithms, were not affected by the age of the subjects. This suggests that the ANNs do not read age-related EEG patterns, but rather invariant features related to the brain's underlying disconnection signature. This pilot study seems to open up new avenues for the development of non-invasive diagnostic testing for the early detection of ASD. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. [Diagnostic work-up of pulmonary nodules : Management of pulmonary nodules detected with low‑dose CT screening].

    PubMed

    Wormanns, D

    2016-09-01

    Pulmonary nodules are the most frequent pathological finding in low-dose computed tomography (CT) scanning for early detection of lung cancer. Early stages of lung cancer often manifest as pulmonary nodules; however, the very commonly occurring small nodules are predominantly benign. These benign nodules are responsible for the high percentage of false positive test results in screening studies. Appropriate diagnostic algorithms are necessary to reduce false positive screening results and to improve the specificity of lung cancer screening. Such algorithms are based on the basic principles comprehensively described in this article. Firstly, the diameter of a nodule allows a differentiation between large (>8 mm), probably malignant nodules and small (<8 mm), probably benign nodules. Secondly, some morphological features of pulmonary nodules on CT can prove their benign nature. Thirdly, growth of small nodules is the best non-invasive predictor of malignancy and is utilized as a trigger for further diagnostic work-up. Non-invasive testing using positron emission tomography (PET) and contrast enhancement, as well as invasive diagnostic tests (e.g. various procedures for cytological and histological diagnostics), are briefly described in this article. Different nodule morphologies on CT (e.g. solid and semisolid nodules) are associated with different biological behavior, and different algorithms for follow-up are required. Currently, no obligatory algorithm reflecting the current state of knowledge is available in German-speaking countries for the management of pulmonary nodules. The main features of some international and American recommendations are briefly presented in this article, from which conclusions for daily clinical use are derived.
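
The three principles above amount to a triage rule that can be sketched as follows. This is an illustration of the logic only, using the article's >8 mm cutoff; no specific guideline's full criteria are reproduced:

```python
def triage_nodule(diameter_mm, benign_morphology=False, has_grown=False):
    """First-pass triage of a screening-detected pulmonary nodule.

    Mirrors the basic principles in the article: benign morphologic
    features end the work-up; large nodules (>8 mm) are probably
    malignant and proceed to further diagnostics; small nodules are
    followed for growth, the best non-invasive predictor of malignancy.
    """
    if benign_morphology:
        return "no further work-up"
    if diameter_mm > 8 or has_grown:
        return "further diagnostic work-up (e.g. PET, biopsy)"
    return "CT follow-up for growth assessment"
```

The branch structure is what lowers the false-positive burden: only growth, not mere presence, escalates a small nodule.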

  13. Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings

    PubMed Central

    Durski, Kara N.; Kuntz, Karen M.; Yasukawa, Kosuke; Virnig, Beth A.; Meya, David B.; Boulware, David R.

    2013-01-01

    Background Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown. Methods We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource limited settings with moderate/high HIV prevalence. We considered three strategies: 1) comprehensive “shotgun” approach of utilizing all routine tests; 2) “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe. Results The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost effectiveness ratio was $133 per additional correct diagnosis for the stepwise over minimalist strategy. Conclusions Through strategically choosing the order and type of testing coupled with disease prevalence rates, algorithms can deliver more care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa. PMID:23466647
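
The key comparison, the incremental cost-effectiveness ratio of the stepwise over the minimalist strategy, follows directly from the per-patient figures quoted above:

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy B over A:
    extra cost per additional unit of effect (here, per additional
    correct diagnosis)."""
    return (cost_b - cost_a) / (effect_b - effect_a)

# Figures from the abstract: (cost per patient, proportion of correct
# diagnoses) for the minimalist and stepwise strategies.
minimalist = (6.17, 0.911)
stepwise = (9.72, 0.938)

extra_cost_per_correct_dx = icer(*minimalist, *stepwise)
```

With the rounded figures shown this evaluates to roughly $131 per additional correct diagnosis; the paper reports $133, presumably computed from unrounded inputs.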

  14. A utility/cost analysis of breast cancer risk prediction algorithms

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Wu, Yirong; Burnside, Elizabeth S.; Wunderlich, Adam; Samuelson, Frank W.; Boone, John M.

    2016-03-01

    Breast cancer risk prediction algorithms are used to identify subpopulations that are at increased risk for developing breast cancer. They can be based on many different sources of data such as demographics, relatives with cancer, gene expression, and various phenotypic features such as breast density. Women who are identified as high risk may undergo a more extensive (and expensive) screening process that includes MRI or ultrasound imaging in addition to the standard full-field digital mammography (FFDM) exam. Given that there are many ways that risk prediction may be accomplished, it is of interest to evaluate them in terms of expected cost, which includes the costs of diagnostic outcomes. In this work we perform an expected-cost analysis of risk prediction algorithms that is based on a published model that includes the costs associated with diagnostic outcomes (true-positive, false-positive, etc.). We assume the existence of a standard screening method and an enhanced screening method with higher scan cost, higher sensitivity, and lower specificity. We then assess the expected cost of using a risk prediction algorithm to determine who gets the enhanced screening method, under the strong assumption that risk and diagnostic performance are independent. We find that if risk prediction leads to a high enough positive predictive value, it will be cost-effective regardless of the size of the subpopulation. Furthermore, in terms of the hit rate and false-alarm rate of the risk prediction algorithm, iso-cost contours are lines with slope determined by properties of the available diagnostic systems for screening.
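
The expected-cost bookkeeping described above can be sketched as follows. All costs, prevalences, and operating points below are hypothetical placeholders, not the published model's values:

```python
def expected_cost(prevalence, sensitivity, specificity,
                  scan_cost, cost_tp, cost_fp, cost_fn):
    """Expected per-exam cost: the scan itself plus the
    probability-weighted costs of the diagnostic outcomes
    (true negatives are taken as cost-free here)."""
    p = prevalence
    return (scan_cost
            + p * sensitivity * cost_tp
            + p * (1 - sensitivity) * cost_fn
            + (1 - p) * (1 - specificity) * cost_fp)

# Hypothetical standard (FFDM) vs. enhanced screening: the enhanced
# method costs more per scan and is more sensitive but less specific.
def standard(p):
    return expected_cost(p, 0.80, 0.90, 100, 500, 300, 50000)

def enhanced(p):
    return expected_cost(p, 0.95, 0.80, 300, 500, 300, 50000)

# At population prevalence the standard exam is cheaper; in a high-risk
# subpopulation selected by a risk prediction algorithm the enhanced
# exam becomes the lower-cost choice -- the tradeoff the paper analyzes.
low, high = 0.005, 0.05
```

Routing only the high-prevalence (high-risk) subgroup to the enhanced method is precisely how a risk prediction algorithm with sufficient positive predictive value lowers overall expected cost.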

  15. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a well-structured, computer-readable form. Vague, verbal rules are managed by the introduction of linguistic variables. The next step was the development of automatic rule-generating procedures. We combined and extended the IDIOTS algorithm with the set-theory-based algorithm by Blaffert. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.

  16. Optimization of Cooling Water Flow Rate in Nuclear and Thermal Power Plants Based on a Mathematical Model of Cooling Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murav’ev, V. P., E-mail: murval1@mail.ru; Kochetkov, A. V.; Glazova, E. G.

    A mathematical model and algorithms are proposed for automatic calculation of the optimum flow rate of cooling water in nuclear and thermal power plants with cooling systems of arbitrary complexity. An unlimited number of configuration and design variants are assumed, with the possibility of obtaining a result for any computational time interval, from monthly to hourly. The structural solutions corresponding to an optimum cooling water flow rate can be used for subsequent engineering-economic evaluation of the best cooling system variant. The computerized mathematical model and algorithms make it possible to determine the availability and degree of structural changes for the cooling system in all stages of the life cycle of a plant.

  17. Overview and current management of computerized adaptive testing in licensing/certification examinations.

    PubMed

    Seo, Dong Gi

    2017-01-01

    Computerized adaptive testing (CAT) has been implemented in high-stakes examinations such as the National Council Licensure Examination-Registered Nurses in the United States since 1994. Subsequently, the National Registry of Emergency Medical Technicians in the United States adopted CAT for certifying emergency medical technicians in 2007. This was done with the goal of introducing the implementation of CAT for medical health licensing examinations. Most implementations of CAT are based on item response theory, which hypothesizes that both the examinee and items have their own characteristics that do not change. There are 5 steps for implementing CAT: first, determining whether the CAT approach is feasible for a given testing program; second, establishing an item bank; third, pretesting, calibrating, and linking item parameters via statistical analysis; fourth, determining the specification for the final CAT related to the 5 components of the CAT algorithm; and finally, deploying the final CAT after specifying all the necessary components. The 5 components of the CAT algorithm are as follows: item bank, starting item, item selection rule, scoring procedure, and termination criterion. CAT management includes content balancing, item analysis, item scoring, standard setting, practice analysis, and item bank updates. Remaining issues include the cost of constructing CAT platforms and deploying the computer technology required to build an item bank. In conclusion, in order to ensure more accurate estimations of examinees' ability, CAT may be a good option for national licensing examinations. Measurement theory can support its implementation for high-stakes examinations.
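
The 5 components listed above fit together in a short loop: pick the unused item with maximum Fisher information at the current ability estimate, administer it, rescore, and stop at the termination criterion. A sketch for a bank of 2PL items (fixed-length termination, grid-search MLE scoring; all item parameters are invented):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def mle_theta(responses, items):
    """Ability estimate by log-likelihood grid search over [-4, 4]
    (a robust stand-in for Newton-Raphson MLE)."""
    def loglik(theta):
        ll = 0.0
        for u, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if u else math.log(1.0 - p)
        return ll
    grid = [i / 100 - 4.0 for i in range(801)]
    return max(grid, key=loglik)

def run_cat(bank, answer, max_items=10):
    """Minimal CAT loop: start at theta = 0, repeatedly administer the
    most informative unused item, rescore, and terminate after a fixed
    number of items (the simplest termination criterion)."""
    theta, used, items, responses = 0.0, set(), [], []
    for _ in range(max_items):
        idx = max((i for i in range(len(bank)) if i not in used),
                  key=lambda i: item_information(theta, *bank[i]))
        used.add(idx)
        items.append(bank[idx])
        responses.append(answer(bank[idx]))
        theta = mle_theta(responses, items)
    return theta
```

Real implementations add the management layers the article describes (content balancing, exposure control, pretest seeding) on top of this core loop.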

  19. Quality Assurance Assessment of Diagnostic and Radiation Therapy–Simulation CT Image Registration for Head and Neck Radiation Therapy: Anatomic Region of Interest–based Comparison of Rigid and Deformable Algorithms

    PubMed Central

    Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.

    2015-01-01

    Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (ie, Dice similarity coefficient) and surface distance metrics (ie, 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. 
After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014 Online supplemental material is available for this article. PMID:25380454

  20. Optimal triage test characteristics to improve the cost-effectiveness of the Xpert MTB/RIF assay for TB diagnosis: a decision analysis.

    PubMed

    van't Hoog, Anna H; Cobelens, Frank; Vassall, Anna; van Kampen, Sanne; Dorman, Susan E; Alland, David; Ellner, Jerrold

    2013-01-01

High costs are a limitation to scaling up the Xpert MTB/RIF assay (Xpert) for the diagnosis of tuberculosis in resource-constrained settings. A triaging strategy in which a sensitive but not necessarily highly specific rapid test is used to select patients for Xpert may result in a more affordable diagnostic algorithm. To inform the selection and development of particular diagnostics as a triage test, we explored combinations of sensitivity, specificity and cost at which a hypothetical triage test would improve the affordability of the Xpert assay. In a decision analytical model parameterized for Uganda, India and South Africa, we compared a diagnostic algorithm in which a cohort of patients with presumptive TB received Xpert to a triage algorithm whereby only those with a positive triage test were tested by Xpert. A triage test with sensitivity equal to Xpert, 75% specificity, and a cost of US$5 per patient tested reduced total diagnostic costs by 42% in the Uganda setting, and by 34% and 39% respectively in the India and South Africa settings. When exploring triage algorithms with lower sensitivity, an example triage test with 95% sensitivity relative to Xpert, 75% specificity and a test cost of $5 resulted in a similar cost reduction, and was cost-effective by the WHO willingness-to-pay threshold compared to Xpert for all in Uganda, but not in India and South Africa. The gain in affordability of the examined triage algorithms increased with decreasing prevalence of tuberculosis in the cohort. A triage test strategy could potentially improve the affordability of Xpert for TB diagnosis, particularly in low-income countries and with enhanced case-finding. Tests and markers with lower accuracy than desired for a diagnostic test may fall within the ranges of sensitivity, specificity and cost required for triage tests and could be developed as such.
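
The affordability argument can be made concrete with a simple per-patient cost model. The $5 triage cost, Xpert-equivalent sensitivity, and 75% specificity below mirror the abstract's example, while the $20 Xpert unit cost and 15% TB prevalence are invented for illustration.

```python
def triage_saving(c_xpert, c_triage, sens, spec, prev):
    """Fractional reduction in total diagnostic cost per presumptive-TB patient
    when a triage test gates access to Xpert (simple linear cost model)."""
    cost_xpert_for_all = c_xpert
    frac_triage_positive = prev * sens + (1 - prev) * (1 - spec)
    cost_with_triage = c_triage + frac_triage_positive * c_xpert
    return 1 - cost_with_triage / cost_xpert_for_all

# Hypothetical $20 Xpert unit cost and 15% prevalence; $5 triage test with
# Xpert-equivalent sensitivity and 75% specificity, as in the abstract.
saving = triage_saving(c_xpert=20.0, c_triage=5.0, sens=1.0, spec=0.75, prev=0.15)
# saving = 0.3875, i.e. ~39% of diagnostic costs
```

The model also reproduces the abstract's observation that savings grow as prevalence falls, since fewer patients screen triage-positive and proceed to Xpert.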

  1. Neuroimaging in pediatric traumatic head injury: diagnostic considerations and relationships to neurobehavioral outcome.

    PubMed

    Bigler, E D

    1999-08-01

Contemporary neuroimaging techniques in child traumatic brain injury are reviewed, with an emphasis on computerized tomography (CT) and magnetic resonance (MR) imaging. A brief overview of MR spectroscopy (MRS), functional MR imaging (fMRI), single-photon emission computed tomography (SPECT), and magnetoencephalography (MEG) is also provided because these techniques will likely constitute important neuroimaging techniques of the future. Numerous figures are provided to illustrate the multifaceted manner in which traumatic deficits can be imaged and the role of neuroimaging information as it relates to TBI outcome.

  2. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with 99mTc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine mode images of esophageal emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.
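
The computer-generated time-activity curves described here amount to summing counts inside a region of interest across the frames of the dynamic study. A minimal sketch follows; the frame layout and the emptying metric are illustrative assumptions, not the study's actual processing pipeline.

```python
def time_activity_curve(frames, roi):
    """Total counts inside a region of interest for each frame of a dynamic study.
    frames: list of 2D count arrays (lists of rows); roi: list of (row, col) pixels."""
    return [sum(frame[r][c] for r, c in roi) for frame in frames]

def emptying_fraction(curve):
    """Fraction of the initial esophageal activity cleared by the last frame."""
    return 1 - curve[-1] / curve[0]
```

With a hypothetical three-frame study in which ROI counts fall from 20 to 2, the emptying fraction is 0.9.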

  3. Guided eruption of palatally impacted canines through combined use of 3-dimensional computerized tomography scans and the easy cuspid device.

    PubMed

    Caprioglio, Alberto; Siani, Lea; Caprioglio, Claudia

    2007-01-01

    The permanent maxillary canine has a high incidence of impaction. In the clinical treatment of impaction, the first problem is diagnosis and localization. The new diagnostic 3-dimensional systems shown in this article provide valid support in understanding anatomic connections and planning the movements needed for orthodontic correction. Thus, the clinician can reduce the incidence of iatrogenic damage of adjacent structures. This article reviews several biomedical systems for guided eruption of palatally impacted canines and discusses a new device for guided eruption of the surgically disimpacted tooth. This device, called Easy Cuspid, is designed to reduce recognized problems with reaction forces through a simple method. A clinical case of bilateral impaction of the permanent maxillary canines shows the application of the diagnostic method and the biomechanical system, Easy Cuspid.

  4. Evaluation of the Revised Algorithm of Autism Diagnostic Observation Schedule (ADOS) in the Diagnostic Investigation of High-Functioning Children and Adolescents with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kamp-Becker, Inge; Ghahreman, Mardjan; Heinzel-Gutenbrunner, Monika; Peters, Mira; Remschmidt, Helmut; Becker, Katja

    2013-01-01

    The Autism Diagnostic Observation Schedule (ADOS) is a semi-structured, standardized assessment designed for use in diagnostic evaluation of individuals with suspected autism spectrum disorder (ASD). The ADOS has been effective in categorizing children who definitely have autism or not, but has lower specificity and sometimes sensitivity for…

  5. The sonographic features of malignant mediastinal lymph nodes and a proposal for an algorithmic approach for sampling during endobronchial ultrasound.

    PubMed

    Alici, Ibrahim Onur; Yılmaz Demirci, Nilgün; Yılmaz, Aydın; Karakaya, Jale; Özaydın, Esra

    2016-09-01

There are several papers on the sonographic features of mediastinal lymph nodes affected by several diseases, but none assesses the importance and clinical utility of those features. In order to determine which lymph node should be sampled in a particular nodal station during endobronchial ultrasound, we investigated the diagnostic performance of certain sonographic features and proposed an algorithmic approach. We retrospectively analyzed 1051 lymph nodes and randomly assigned them to a preliminary experimental group and a secondary study group. The diagnostic performances of the sonographic features (gray scale, echogeneity, shape, size, margin, presence of necrosis, presence of calcification and absence of central hilar structure) were calculated, and an algorithm for lymph node sampling was obtained with decision tree analysis in the experimental group. A modified algorithm was then applied to the patients in the study group to determine its accuracy. The demographic characteristics of the patients did not differ significantly between the primary and secondary groups. All of the features discriminated between malignant and benign diseases. The modified algorithm's sensitivity, specificity, positive and negative predictive values, and diagnostic accuracy for detecting metastatic lymph nodes were 100%, 51.2%, 50.6%, 100% and 67.5%, respectively. In this retrospective analysis, the standardized sonographic classification system and the proposed algorithm performed well in choosing the node that should be sampled in a particular station during endobronchial ultrasound. © 2015 John Wiley & Sons Ltd.
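
An algorithmic sampling approach of this kind reduces to a rule set over the listed sonographic features. The sketch below is a hypothetical "any suspicious feature" rule, not the study's fitted decision tree; the feature encodings and the 10 mm short-axis threshold are illustrative assumptions.

```python
def should_sample(node):
    """Flag a lymph node for EBUS-TBNA sampling if any suspicious sonographic
    feature is present. Feature names and the 10 mm threshold are illustrative,
    not the study's fitted decision tree."""
    return (node["short_axis_mm"] > 10
            or node["shape"] == "round"
            or node["echogeneity"] == "heterogeneous"
            or node["margin"] == "distinct"
            or node["necrosis"]
            or not node["central_hilar_structure"])
```

A rule of this shape trades specificity for sensitivity, which matches the study's reported profile (100% sensitivity, 51.2% specificity): missing a metastatic node is costlier than an extra needle pass.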

  6. Validation of administrative data used for the diagnosis of upper gastrointestinal events following nonsteroidal anti-inflammatory drug prescription.

    PubMed

    Abraham, N S; Cohen, D C; Rivers, B; Richardson, P

    2006-07-15

To validate Veterans Affairs (VA) administrative data for the diagnosis of nonsteroidal anti-inflammatory drug (NSAID)-related upper gastrointestinal events (UGIE) and to develop a diagnostic algorithm. A retrospective study of veterans prescribed an NSAID as identified from the national pharmacy database merged with in-patient and out-patient data, followed by primary chart abstraction. Contingency tables were constructed to allow comparison with a random sample of patients prescribed an NSAID, but without UGIE. Multivariable logistic regression analysis was used to derive a predictive algorithm. Once derived, the algorithm was validated in a separate cohort of veterans. Of 906 patients, 606 had a diagnostic code for UGIE; 300 were a random subsample of 11 744 patients (control). Only 161 had a confirmed UGIE. The positive predictive value (PPV) of diagnostic codes was poor, but improved from 27% to 51% with the addition of endoscopic procedural codes. The strongest predictors of UGIE were an in-patient ICD-9 code for gastric ulcer, duodenal ulcer and haemorrhage combined with upper endoscopy. This algorithm had a PPV of 73% when limited to patients ≥65 years (c-statistic 0.79). Validation of the algorithm revealed a PPV of 80% among patients with an overlapping NSAID prescription. NSAID-related UGIE can be assessed using VA administrative data. The optimal algorithm includes an in-patient ICD-9 code for gastric or duodenal ulcer and gastrointestinal bleeding combined with a procedural code for upper endoscopy.

  7. Is introducing rapid culture into the diagnostic algorithm of smear-negative tuberculosis cost-effective?

    PubMed

    Yakhelef, N; Audibert, M; Varaine, F; Chakaya, J; Sitienei, J; Huerga, H; Bonnet, M

    2014-05-01

    In 2007, the World Health Organization recommended introducing rapid Mycobacterium tuberculosis culture into the diagnostic algorithm of smear-negative pulmonary tuberculosis (TB). To assess the cost-effectiveness of introducing a rapid non-commercial culture method (thin-layer agar), together with Löwenstein-Jensen culture to diagnose smear-negative TB at a district hospital in Kenya. Outcomes (number of true TB cases treated) were obtained from a prospective study evaluating the effectiveness of a clinical and radiological algorithm (conventional) against the alternative algorithm (conventional plus M. tuberculosis culture) in 380 smear-negative TB suspects. The costs of implementing each algorithm were calculated using a 'micro-costing' or 'ingredient-based' method. We then compared the cost and effectiveness of conventional vs. culture-based algorithms and estimated the incremental cost-effectiveness ratio. The costs of conventional and culture-based algorithms per smear-negative TB suspect were respectively €39.5 and €144. The costs per confirmed and treated TB case were respectively €452 and €913. The culture-based algorithm led to diagnosis and treatment of 27 more cases for an additional cost of €1477 per case. Despite the increase in patients started on treatment thanks to culture, the relatively high cost of a culture-based algorithm will make it difficult for resource-limited countries to afford.
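
The abstract's incremental cost-effectiveness figure can be reproduced, up to the source's internal rounding, from the per-suspect costs, the cohort size, and the additional cases detected:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return delta_cost / delta_effect

# Culture-based (EUR 144/suspect) vs. conventional (EUR 39.5/suspect) algorithm,
# 380 smear-negative TB suspects, 27 additional true TB cases treated.
extra_cost_per_case = icer((144.0 - 39.5) * 380, 27)  # ~EUR 1471 per additional case
```

The source reports €1477; the small difference comes from rounding of the published per-suspect costs.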

  8. Characterization and noninvasive diagnosis of bladder cancer with serum surface enhanced Raman spectroscopy and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Li, Shaoxin; Li, Linfang; Zeng, Qiuyao; Zhang, Yanjiao; Guo, Zhouyi; Liu, Zhiming; Jin, Mei; Su, Chengkang; Lin, Lin; Xu, Junfa; Liu, Songhao

    2015-05-01

This study aims to characterize and classify serum surface-enhanced Raman spectroscopy (SERS) spectra of bladder cancer patients and normal volunteers by genetic algorithms (GAs) combined with linear discriminant analysis (LDA). Two groups of serum SERS spectra excited with nanoparticles were collected from healthy volunteers (n = 36) and bladder cancer patients (n = 55). Six diagnostic Raman bands in the regions of 481-486, 682-687, 1018-1034, 1313-1323, 1450-1459 and 1582-1587 cm-1, related to proteins, nucleic acids and lipids, were picked out with the GAs and LDA. With diagnostic models built from the identified six Raman bands, an improved diagnostic sensitivity of 90.9% and specificity of 100% were achieved in classifying bladder cancer patients versus normal serum SERS spectra. These results are superior to the sensitivity of 74.6% and specificity of 97.2% obtained with principal component analysis on the same serum SERS spectra dataset. Receiver operating characteristic (ROC) curves further confirmed the efficiency of the GA-LDA-based diagnostic algorithm. This exploratory work demonstrates that serum SERS combined with the GA-LDA technique has enormous potential to characterize and non-invasively detect bladder cancer through peripheral blood.
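
The GA-driven band-selection scheme can be sketched in miniature. Below, a toy genetic algorithm evolves subsets of feature (band) indices and scores each subset by the training accuracy of a nearest-centroid classifier, which stands in for the paper's LDA; the population size, mutation rate, and the synthetic data in the test are all invented for illustration.

```python
import random

def centroid_classifier(X, y, features):
    """Nearest-centroid classifier restricted to the selected feature indices
    (a simplified stand-in for the paper's LDA)."""
    groups = {0: [], 1: []}
    for row, label in zip(X, y):
        groups[label].append(row)
    cents = {c: [sum(r[f] for r in rows) / len(rows) for f in features]
             for c, rows in groups.items()}
    def predict(row):
        dist = {c: sum((row[f] - m) ** 2 for f, m in zip(features, cent))
                for c, cent in cents.items()}
        return min(dist, key=dist.get)
    return predict

def fitness(X, y, features):
    """Training accuracy of the classifier built on this feature subset."""
    predict = centroid_classifier(X, y, features)
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def ga_select(X, y, n_select=2, pop=20, gens=30, seed=1):
    """Toy GA: evolve feature-index subsets by two-way tournament selection
    and point mutation, scored by classification accuracy."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    population = [rng.sample(range(n_feat), n_select) for _ in range(pop)]
    for _ in range(gens):
        offspring = []
        for _ in range(pop):
            a, b = rng.sample(population, 2)          # tournament of two
            child = list(a if fitness(X, y, a) >= fitness(X, y, b) else b)
            if rng.random() < 0.3:                    # point mutation
                child[rng.randrange(n_select)] = rng.randrange(n_feat)
            offspring.append(child)
        population = offspring
    return max(population, key=lambda f: fitness(X, y, f))
```

On data where only one band separates the two groups, the GA converges to subsets containing that band, mirroring how the study isolated six diagnostic Raman bands from the full spectrum.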

  9. American Pancreatic Association Practice Guidelines in Chronic Pancreatitis: evidence-based report on diagnostic guidelines.

    PubMed

    Conwell, Darwin L; Lee, Linda S; Yadav, Dhiraj; Longnecker, Daniel S; Miller, Frank H; Mortele, Koenraad J; Levy, Michael J; Kwon, Richard; Lieb, John G; Stevens, Tyler; Toskes, Phillip P; Gardner, Timothy B; Gelrud, Andres; Wu, Bechien U; Forsmark, Christopher E; Vege, Santhi S

    2014-11-01

    The diagnosis of chronic pancreatitis remains challenging in early stages of the disease. This report defines the diagnostic criteria useful in the assessment of patients with suspected and established chronic pancreatitis. All current diagnostic procedures are reviewed, and evidence-based statements are provided about their utility and limitations. Diagnostic criteria for chronic pancreatitis are classified as definitive, probable, or insufficient evidence. A diagnostic (STEP-wise; survey, tomography, endoscopy, and pancreas function testing) algorithm is proposed that proceeds from a noninvasive to a more invasive approach. This algorithm maximizes specificity (low false-positive rate) in subjects with chronic abdominal pain and equivocal imaging changes. Furthermore, a nomenclature is suggested to further characterize patients with established chronic pancreatitis based on TIGAR-O (toxic, idiopathic, genetic, autoimmune, recurrent, and obstructive) etiology, gland morphology (Cambridge criteria), and physiologic state (exocrine, endocrine function) for uniformity across future multicenter research collaborations. This guideline will serve as a baseline manuscript that will be modified as new evidence becomes available and our knowledge of chronic pancreatitis improves.

  10. Spectroscopic diagnosis of laryngeal carcinoma using near-infrared Raman spectroscopy and random recursive partitioning ensemble techniques.

    PubMed

    Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei

    2009-06-01

In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy combined with an ensemble recursive partitioning algorithm based on random forests for distinguishing cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.
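
The random-forests idea (bagged trees voting) can be sketched with a deliberately tiny variant: an ensemble of one-feature decision stumps, each fitted on a bootstrap resample of the training data. This is an illustrative toy, not the study's classifier, which used full trees and reported variable importance.

```python
import random

def predict_stump(stump, row):
    """A stump is (feature, threshold, sign); sign flips the rule's polarity."""
    j, t, sign = stump
    hit = row[j] > t
    return int(hit if sign == 1 else not hit)

def fit_stump(X, y):
    """Exhaustively pick the single-feature threshold rule with best accuracy."""
    best_acc, best = -1.0, None
    for j in range(len(X[0])):
        for t in {row[j] for row in X}:
            for sign in (1, -1):
                pred = [predict_stump((j, t, sign), row) for row in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if acc > best_acc:
                    best_acc, best = acc, (j, t, sign)
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict_forest(forest, row):
    """Majority vote over the ensemble."""
    votes = sum(predict_stump(s, row) for s in forest)
    return int(votes * 2 >= len(forest))
```

Averaging over bootstrap resamples is what gives random forests their robustness to the noise inherent in spectral measurements; a full implementation would also subsample features at each split and track how often each wavenumber band is used, yielding the variable importance the study exploited.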

  11. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and organ function. It enables diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.

  12. Clinical effectiveness of a Bayesian algorithm for the diagnosis and management of heparin-induced thrombocytopenia.

    PubMed

    Raschke, R A; Gallo, T; Curry, S C; Whiting, T; Padilla-Jones, A; Warkentin, T E; Puri, A

    2017-08-01

    Essentials We previously published a diagnostic algorithm for heparin-induced thrombocytopenia (HIT). In this study, we validated the algorithm in an independent large healthcare system. The accuracy was 98%, sensitivity 82% and specificity 99%. The algorithm has potential to improve accuracy and efficiency in the diagnosis of HIT. Background Heparin-induced thrombocytopenia (HIT) is a life-threatening drug reaction caused by antiplatelet factor 4/heparin (anti-PF4/H) antibodies. Commercial tests to detect these antibodies have suboptimal operating characteristics. We previously developed a diagnostic algorithm for HIT that incorporated 'four Ts' (4Ts) scoring and a stratified interpretation of an anti-PF4/H enzyme-linked immunosorbent assay (ELISA) and yielded a discriminant accuracy of 0.97 (95% confidence interval [CI], 0.93-1.00). Objectives The purpose of this study was to validate the algorithm in an independent patient population and quantitate effects that algorithm adherence could have on clinical care. Methods A retrospective cohort comprised patients who had undergone anti-PF4/H ELISA and serotonin release assay (SRA) testing in our healthcare system from 2010 to 2014. We determined the algorithm recommendation for each patient, compared recommendations with the clinical care received, and enumerated consequences of discrepancies. Operating characteristics were calculated for algorithm recommendations using SRA as the reference standard. Results Analysis was performed on 181 patients, 10 of whom were ruled in for HIT. The algorithm accurately stratified 98% of patients (95% CI, 95-99%), ruling out HIT in 158, ruling in HIT in 10 and recommending an SRA in 13 patients. Algorithm adherence would have obviated 165 SRAs and prevented 30 courses of unnecessary antithrombotic therapy for HIT. Diagnostic sensitivity was 0.82 (95% CI, 0.48-0.98), specificity 0.99 (95% CI, 0.97-1.00), PPV 0.90 (95% CI, 0.56-0.99) and NPV 0.99 (95% CI, 0.96-1.00). 
Conclusions An algorithm incorporating 4Ts scoring and a stratified interpretation of the anti-PF4/H ELISA has good operating characteristics and the potential to improve management of suspected HIT patients. © 2017 International Society on Thrombosis and Haemostasis.
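
The structure of such an algorithm (a 4Ts-based pretest probability updated by a stratified ELISA result, with rule-out, rule-in, and reflex-to-SRA zones) can be sketched with Bayes' rule on the odds scale. All numbers below — the pretest probabilities, the OD-stratum likelihood ratios, and the thresholds — are invented for illustration and are not the published algorithm's values.

```python
def posterior_prob(pretest_p, lr):
    """Bayes on the odds scale: posterior odds = pretest odds x likelihood ratio."""
    odds = pretest_p / (1 - pretest_p) * lr
    return odds / (1 + odds)

# Illustrative (invented) inputs: 4Ts-based pretest probabilities and
# optical-density-stratified ELISA likelihood ratios.
PRETEST_BY_4TS = {"low": 0.01, "intermediate": 0.10, "high": 0.50}
LR_BY_OD = {"<0.4": 0.05, "0.4-1.0": 1.0, "1.0-2.0": 10.0, ">2.0": 80.0}

def classify(four_ts, od_stratum, rule_out=0.02, rule_in=0.90):
    """Stratified interpretation: rule out, rule in, or reflex to the SRA."""
    p = posterior_prob(PRETEST_BY_4TS[four_ts], LR_BY_OD[od_stratum])
    if p < rule_out:
        return "HIT ruled out"
    if p > rule_in:
        return "HIT ruled in"
    return "order SRA"
```

The three-zone output mirrors the study's workflow: most patients land in the rule-out or rule-in zones, and only the indeterminate middle is sent for confirmatory SRA testing, which is how the algorithm obviated 165 SRAs in the validation cohort.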

  13. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
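
The reported operating characteristics follow directly from a confusion table against the chart-review reference standard. The counts below are back-calculated approximations from the reported 1199 donor nephrectomies, ~97% sensitivity, and ~90% PPV, shown only to make the definitions concrete.

```python
def operating_characteristics(tp, fp, fn):
    """Sensitivity and positive predictive value of a code-based algorithm
    against a chart-review reference standard."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Approximate counts consistent with the reported figures: 1199 true donor
# nephrectomies, of which ~1163 were flagged by the code algorithm (tp),
# ~36 were missed (fn), and ~129 flagged cases were not true donors (fp).
sens, ppv = operating_characteristics(tp=1163, fp=129, fn=36)
```

Note that specificity is rarely reported for rare exposures like donor nephrectomy: with millions of true negatives in an administrative database, specificity is trivially near 100%, so sensitivity and PPV are the informative metrics.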

  14. Development and improvement of the operating diagnostics systems of NPO CKTI works for turbine of thermal and nuclear power plants

    NASA Astrophysics Data System (ADS)

    Kovalev, I. A.; Rakovskii, V. G.; Isakov, N. Yu.; Sandovskii, A. V.

    2016-03-01

Results are presented on the development and improvement of techniques, algorithms, and software-hardware for systems that continuously diagnose the state of rotating units and parts of turbine equipment. In particular, to enable full remote service of monitored turbine equipment using web technologies, a web version of the software for the automated vibration-based diagnostics system (ASVD VIDAS) was developed. Experience with automated analysis of data obtained by ASVD VIDAS formed the basis of a new algorithm for early detection of dangerous defects such as rotor deflection, cracks in the rotor, and strong misalignment of supports. The program-technical complex for monitoring and measuring the deflection of the medium-pressure rotor (PTC) that implements this algorithm alerts power plant staff to a deflection and indicates its value, giving them the opportunity to take timely measures to prevent further growth of the defect. Repeatedly recorded cases of full or partial destruction of the shrouded shelves of rotor blades in the last stages of low-pressure cylinders of steam turbines motivated the development of a version of the automated blade diagnostics system (ASBD SKALA) for shrouded stages. The processing, analysis, presentation, and backup of data characterizing the mechanical state of the blading are carried out by a newly developed diagnostics-system controller. As a result of this work, the set of diagnosed parameters that determine the operational safety of rotating equipment elements was expanded, and new tasks in monitoring the state of turbine units and parts were solved. All algorithmic solutions and hardware-software implementations mentioned in the article were tested on test benches and applied at several power plants.

  15. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
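
The latent class model at the core of this approach can be illustrated with the plain (non-Monte-Carlo) EM algorithm for two latent classes and conditionally independent binary tests. The paper's MCEM additionally models prevalence as a function of covariates, which this sketch omits, and the starting values below are arbitrary.

```python
def em_lcm(data, iters=200):
    """Plain EM for a 2-class latent class model with conditionally independent
    binary tests. data: list of tuples of 0/1 test results per subject.
    Returns estimated prevalence, per-test sensitivities and specificities."""
    K = len(data[0])
    prev = 0.5
    sens = [0.8] * K          # P(test k positive | diseased)
    spec = [0.8] * K          # P(test k negative | non-diseased)
    for _ in range(iters):
        # E-step: posterior probability of disease for each subject
        post = []
        for row in data:
            ld, ln = prev, 1 - prev
            for k, x in enumerate(row):
                ld *= sens[k] if x else 1 - sens[k]
                ln *= (1 - spec[k]) if x else spec[k]
            post.append(ld / (ld + ln))
        # M-step: re-estimate prevalence, sensitivities, specificities
        total = sum(post)
        prev = total / len(data)
        for k in range(K):
            sens[k] = sum(p for p, row in zip(post, data) if row[k]) / total
            spec[k] = sum(1 - p for p, row in zip(post, data) if not row[k]) / (len(data) - total)
    return prev, sens, spec
```

This is the key trick of LCM analysis: sensitivity and specificity of each test are recovered without any gold standard, using only the agreement pattern among three or more tests.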

  16. A simple algorithm for beam profile diagnostics using a thermographic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro

    2014-03-15

A new algorithm for digital image processing apparatuses is developed to evaluate the profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise added to the temperature images by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11–214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise equivalent temperature difference).

  17. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches.

    PubMed

    Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa

    2015-01-01

The present study evaluated the diagnostic accuracy of immune-system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. These are divided into four types: migraine, tension, cluster, and other primary headaches. With this objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the biological immune system, simulating its capabilities for discrimination, learning, and memorization in order to perform classification, optimization, or pattern recognition. According to the results, the classifiers used in this study reached accuracy levels ranging from 95% to 99%, except for one that yielded 71% accuracy.

  18. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

The purpose of this study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (1 to 7) as well as with the filtered-back-projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing at levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  19. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner employed for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web-based medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used on site in telemedicine makes file encryption and secure login effective, so that patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our film-less radiological information system, using the computer-aided diagnosis workstation and our telemedicine network system, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  20. Project for the National Program of Early Diagnosis of Endometrial Cancer Part I.

    PubMed

    Bohîlțea, R E; Ancăr, V; Cirstoiu, M M; Rădoi, V; Bohîlțea, L C; Furtunescu, F

    2015-01-01

    Endometrial cancer recorded a peak incidence at ages 60-64 years in Romania, reaching in 2013 an average of 8.06/100,000 women, and 15.97/100,000 women within the highest-risk age range; it has shown an increasing trend in recent years and is higher in the urban than in the rural population. Annually, approximately 800 new cases are registered in our country. The estimated lifetime risk of a woman developing endometrial cancer is about 1.03%. Despite presenting with abnormal uterine bleeding, 35% of endometrial cancers are diagnosed at an advanced stage of the disease, with significantly diminished life expectancy. Our aim was to draft a national program for the early diagnosis of endometrial cancer. We proposed a standardization of the diagnostic steps and focused on 4 key elements for the early diagnosis of endometrial cancer: investigation of abnormal uterine bleeding occurring in pre-/post-menopausal women; investigation of features/anomalies on cervical cytology examination; diagnosis, treatment and proper monitoring of precursor endometrial lesions or cancer-associated endometrial lesions; and screening of high-risk populations (Lynch syndrome, Cowden syndrome). Improving medical practice based on diagnostic algorithms addresses the four risk groups, supported by better information-system reporting and record keeping. Improving the addressability of cases by increasing the health education of the population will increase the rate of diagnosis of endometrial cancer in the early stages of the disease.
ACOG = American College of Obstetricians and Gynecologists, ASCCP = American Society for Colposcopy and Cervical Pathology, PATT = Partial Activated Thromboplastin Time, BRCA = Breast Cancer Gene, CT = Computerized Tomography, IFGO = International Federation of Gynecology and Obstetrics, HLG = Hemoleucogram, HNPCC = Hereditary Nonpolyposis Colorectal Cancer (Lynch syndrome), IHC = Immunohistochemistry, BMI = Body Mass Index, INR = International Normalized Ratio, MSI = Microsatellite Instability, MSI-H/MSI-L = high (positive test)/low (negative test) microsatellite instability, WHO = World Health Organization, PCR = Polymerase Chain Reaction, MRI = Magnetic Resonance Imaging, SGO = Society of Gynecologic Oncologists, SHG = Sonohysterography, SRU = Society of Radiologists in Ultrasound, TQ = Time Quick, BT = Bleeding Time, TVUS = Transvaginal Ultrasound, USPIO = Ultrasmall Superparamagnetic Iron Oxide.

  1. Technologic advances for evaluation of cervical cytology: is newer better?

    PubMed

    Hartmann, K E; Nanda, K; Hall, S; Myers, E

    2001-12-01

    Among women who have cervical cancer and have been screened, 14% to 33% of cases represent failure to detect abnormalities that existed at the time of screening. New technologies intended to improve detection of cytologic abnormalities include liquid-based, thin-layer cytology (ThinPrep, AutoCyte), computerized rescreening (PAPNET), and algorithm-based computer rescreening (AutoPap). This report combines evidence reviews conducted for the U.S. Preventive Services Task Force and the Agency for Healthcare Research and Quality, in which we systematically identified articles on cervical neoplasia, cervical dysplasia, and screening published between January 1966 and March 2001. We note the challenges of improving screening methods, provide an overview of methods for collecting and evaluating cytologic samples, and examine the evidence about the diagnostic performance of new technologies for detecting cervical lesions. Using standard criteria for the evaluation of diagnostic tests, we determined that knowledge about the sensitivity, specificity, and predictive values of the new technologies is meager. Only one study of liquid-based cytology used a reference standard of colposcopy, with histology as indicated, to assess participants with normal screening results. Lack of an adequate reference standard is the overwhelming reason that test characteristics cannot be properly assessed or compared. Most publications compare results of screening using the new technology with expert panel review of the cytologic specimen. In that case, the tests are not independent measures and do nothing to relate the screening test findings to the true status of the cervix, making determination of false negatives, and thus sensitivity, specificity, and negative predictive value, impossible. We did not identify any literature about health outcomes or cost effectiveness of using these tools in a system of screening. 
For the purposes of guiding decision making about the choice of screening tools, the current evidence is inadequate to gauge whether new technologies are "better" than conventional cytology.

  2. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    PubMed

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (eg, normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. 
It is likely that CAD will be integrated into picture archiving and communication systems and will become a standard of care for diagnostic examinations in daily clinical work. Copyright © 2011 Elsevier Inc. All rights reserved.
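The temporal-subtraction idea described above can be sketched in a simplified form. The article describes nonlinear image warping for registration; the sketch below uses only a rigid-shift search, which is enough to show how aligning the previous image before subtracting suppresses unchanged structures. Illustrative only, not the authors' method.

```python
import numpy as np

def temporal_subtraction(previous, current, max_shift=3):
    """Align the previous image to the current one by an exhaustive
    rigid-shift search (a crude stand-in for nonlinear warping),
    then subtract so that unchanged structures cancel out."""
    best_shift, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(previous, dy, axis=0), dx, axis=1)
            err = np.mean((current - shifted) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    aligned = np.roll(np.roll(previous, best_shift[0], axis=0),
                      best_shift[1], axis=1)
    return current - aligned, best_shift
```

On a pair of follow-up images, interval changes remain in the difference image while anatomy common to both studies is largely removed.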

  3. Improved Temperature Diagnostic for Non-Neutral Plasmas with Single-Electron Resolution

    NASA Astrophysics Data System (ADS)

    Shanman, Sabrina; Evans, Lenny; Fajans, Joel; Hunter, Eric; Nelson, Cheyenne; Sierra, Carlos; Wurtele, Jonathan

    2016-10-01

    Plasma temperature diagnostics in a Penning-Malmberg trap are essential for reliably obtaining cold, non-neutral plasmas. We have developed a setup for detecting the initial electrons that escape from a trapped pure electron plasma as the confining electrode potential is slowly reduced. The setup minimizes external noise by using a silicon photomultiplier to capture light emitted from an MCP-amplified phosphor screen. To take advantage of this enhanced resolution, we have developed a new plasma temperature diagnostic analysis procedure which takes discrete electron arrival times as input. We have run extensive simulations comparing this new discrete algorithm to our existing exponential fitting algorithm. These simulations are used to explore the behavior of these two temperature diagnostic procedures at low N and at high electronic noise. This work was supported by the DOE DE-FG02-06ER54904, and the NSF 1500538-PHY.
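To illustrate the kind of discrete-arrival analysis described above: for a plasma near thermal equilibrium, the depths below the initial barrier at which the first electrons escape are approximately exponentially distributed with scale kT, so a censored maximum-likelihood fit over the first k arrivals yields a temperature estimate. This is a hedged toy model of the idea, not the authors' algorithm; all parameters are assumptions.

```python
import random

def simulate_escapes(kT, n, seed=1):
    """Toy model: escape depths are exponentially distributed with
    scale kT; returned sorted, i.e. in order of arrival."""
    rng = random.Random(seed)
    return sorted(rng.expovariate(1.0 / kT) for _ in range(n))

def estimate_kT(depths, k):
    """Censored-data MLE for an exponential scale using only the first
    k of n ordered escape depths:
        kT_hat = (sum of first k depths + (n - k) * k-th depth) / k."""
    n = len(depths)
    first = depths[:k]
    return (sum(first) + (n - k) * first[-1]) / k
```

Restricting the fit to the first few hundred electrons mirrors the physical motivation for a discrete diagnostic: later escapes are distorted by space charge and by the changing well shape.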

  4. Improving Security for SCADA Sensor Networks with Reputation Systems and Self-Organizing Maps.

    PubMed

    Moya, José M; Araujo, Alvaro; Banković, Zorana; de Goyeneche, Juan-Mariano; Vallejo, Juan Carlos; Malagón, Pedro; Villanueva, Daniel; Fraga, David; Romero, Elena; Blesa, Javier

    2009-01-01

    The reliable operation of modern infrastructures depends on computerized systems and Supervisory Control and Data Acquisition (SCADA) systems, which are also based on the data obtained from sensor networks. The inherent limitations of the sensor devices make them extremely vulnerable to cyberwarfare/cyberterrorism attacks. In this paper, we propose a reputation system enhanced with distributed agents, based on unsupervised learning algorithms (self-organizing maps), in order to achieve fault tolerance and enhanced resistance to previously unknown attacks. This approach has been extensively simulated and compared with previous proposals.
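A minimal sketch of the self-organizing-map idea behind this kind of reputation system: train a tiny 1-D SOM on sensor readings assumed to be normal, then flag readings whose quantization error (distance to the best-matching unit) is large. Map size and learning parameters are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def train_som(data, n_units=8, epochs=30, lr0=0.5, radius0=2.0, seed=0):
    """Train a tiny 1-D self-organizing map on (assumed-normal) readings."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)
        for x in data:
            # best-matching unit (BMU) and Gaussian neighborhood update
            bmu = min(range(n_units), key=lambda i: math.dist(units[i], x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i] = [u + lr * h * (xj - u)
                            for u, xj in zip(units[i], x)]
    return units

def quantization_error(units, x):
    """Distance to the best-matching unit; large values flag anomalies."""
    return min(math.dist(u, x) for u in units)
```

A reading whose quantization error exceeds a threshold learned from the normal data would lower the reporting node's reputation.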

  5. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate to each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.

  6. Improving Security for SCADA Sensor Networks with Reputation Systems and Self-Organizing Maps

    PubMed Central

    Moya, José M.; Araujo, Álvaro; Banković, Zorana; de Goyeneche, Juan-Mariano; Vallejo, Juan Carlos; Malagón, Pedro; Villanueva, Daniel; Fraga, David; Romero, Elena; Blesa, Javier

    2009-01-01

    The reliable operation of modern infrastructures depends on computerized systems and Supervisory Control and Data Acquisition (SCADA) systems, which are also based on the data obtained from sensor networks. The inherent limitations of the sensor devices make them extremely vulnerable to cyberwarfare/cyberterrorism attacks. In this paper, we propose a reputation system enhanced with distributed agents, based on unsupervised learning algorithms (self-organizing maps), in order to achieve fault tolerance and enhanced resistance to previously unknown attacks. This approach has been extensively simulated and compared with previous proposals. PMID:22291569

  7. Computerized lateral-shear interferometer

    NASA Astrophysics Data System (ADS)

    Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.

    1998-07-01

    A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes phase maps by Zernike polynomials yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as light source, a CCD camera and a rechargeable battery supply, which allows measurements in-situ, if necessary.
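The wavefront-description step mentioned above can be illustrated with a least-squares fit of sampled phase values to low-order Zernike polynomials. The six-term Cartesian basis below is an assumption for the sketch; the actual instrument software may use more terms and a different ordering.

```python
import numpy as np

def zernike_basis(x, y):
    """First few Zernike polynomials on the unit disk (Cartesian form):
    piston, tilt X, tilt Y, defocus, and the two astigmatism terms."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),   # piston
        x,                 # tilt X
        y,                 # tilt Y
        2 * r2 - 1,        # defocus
        x**2 - y**2,       # astigmatism 0/90
        2 * x * y,         # astigmatism 45
    ], axis=-1)

def fit_zernike(x, y, phase):
    """Least-squares Zernike coefficients for sampled phase values."""
    A = zernike_basis(x, y).reshape(-1, 6)
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return coeffs
```

Given the 3D phase map recovered from the fringe pattern, the fitted coefficients give the analytical description of the wavefront aberration that the abstract refers to.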

  8. The impact of automation on organizational changes in a community hospital clinical microbiology laboratory.

    PubMed

    Camporese, Alessandro

    2004-06-01

    The diagnosis of infectious diseases and the role of the microbiology laboratory are currently undergoing a process of change. The need for overall efficiency in providing results is now given the same importance as accuracy. This means that laboratories must be able to produce quality results in less time with the capacity to interpret the results clinically. To improve the clinical impact of microbiology results, the new challenge facing the microbiologist has become one of process management instead of pure analysis. A proper project management process designed to improve workflow, reduce analytical time, and provide the same high quality results without losing valuable time treating the patient, has become essential. Our objective was to study the impact of introducing automation and computerization into the microbiology laboratory, and the reorganization of the laboratory workflow, i.e. scheduling personnel to work shifts covering both the entire day and the entire week. In our laboratory, the introduction of automation and computerization, as well as the reorganization of personnel, thus the workflow itself, has resulted in an improvement in response time and greater efficiency in diagnostic procedures.

  9. Implementation of real-time digital endoscopic image processing system

    NASA Astrophysics Data System (ADS)

    Song, Chul Gyu; Lee, Young Mook; Lee, Sang Min; Kim, Won Ky; Lee, Jae Ho; Lee, Myoung Ho

    1997-10-01

    Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past four years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery for laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system composed of a frame grabber, a sound board, a VCR control board, a LAN card and EDMS. The computer system also controls peripheral instruments such as a color video printer, a video cassette recorder, and endoscopic input/output signals. The digital endoscopic data management system is based on an open architecture and a set of widely available industry standards; namely, Microsoft Windows as an operating system, TCP/IP as a network protocol, and a time-sequential database that handles both images and speech. For data storage, we used MOD and CD-R. The digital endoscopic system was designed to be able to store, recreate, change, and compress signals and medical images. Computerized endoscopy enables us to generate and manipulate the original visual document, making it accessible to a virtually unlimited number of physicians.

  10. Development of an Item Bank for the Assessment of Knowledge on Biology in Argentine University Students.

    PubMed

    Cupani, Marcos; Zamparella, Tatiana Castro; Piumatti, Gisella; Vinculado, Grupo

    The calibration of item banks provides the basis for computerized adaptive testing that ensures high diagnostic precision and minimizes participants' test burden. This study aims to develop a bank of items to measure the level of Knowledge on Biology using the Rasch model. The sample consisted of 1219 participants who studied in different faculties of the National University of Cordoba (mean age = 21.85 years, SD = 4.66; 66.9% women). The items were organized in different forms and into separate subtests, with some common items across subtests. The students were told they had to answer 60 questions of knowledge on biology. Evaluation of Rasch model fit (Zstd > |2.0|), differential item functioning, dimensionality, local independence, item and person separation (>2.0), and reliability (>.80) resulted in a bank of 180 items with good psychometric properties. The bank provides items with a wide range of content coverage and may serve as a sound basis for computerized adaptive testing applications. The contribution of this work is significant in the field of educational assessment in Argentina.
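A minimal sketch of the Rasch model underlying such an item bank: the probability of a correct response depends only on the difference between person ability and item difficulty, and ability can be estimated by Newton-Raphson once difficulties are calibrated. Illustrative only; note that all-correct or all-wrong response patterns have no finite maximum-likelihood estimate.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response for ability theta
    on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=25):
    """Maximum-likelihood ability estimate by Newton-Raphson, given
    scored responses (1/0) and calibrated item difficulties."""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(r - p for r, p in zip(responses, ps))   # score function
        info = sum(p * (1 - p) for p in ps)                # Fisher information
        theta += grad / info
    return theta
```

In an adaptive test, the next item would be chosen with difficulty near the current theta estimate, which is what makes calibrated banks like this one usable for CAT.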

  11. Methodological, technical, and ethical issues of a computerized data system.

    PubMed

    Rice, C A; Godkin, M A; Catlin, R J

    1980-06-01

    This report examines some methodological, technical, and ethical issues which need to be addressed in designing and implementing a valid and reliable computerized clinical data base. The report focuses on the data collection system used by four residency based family health centers, affiliated with the University of Massachusetts Medical Center. It is suggested that data reliability and validity can be maximized by: (1) standardizing encounter forms at affiliated health centers to eliminate recording biases and ensure data comparability; (2) using forms with a diagnosis checklist to reduce coding errors and increase the number of diagnoses recorded per encounter; (3) developing uniform diagnostic criteria; (4) identifying sources of error, including discrepancies of clinical data as recorded in medical records, encounter forms, and the computer; and (5) improving provider cooperation in recording data by distributing data summaries which reinforce the data's applicability to service provision. Potential applications of the data for research purposes are restricted by personnel and computer costs, confidentiality considerations, programming related issues, and, most importantly, health center priorities, largely focused on patient care, not research.

  12. Planning guidelines for computerized transaxial tomography (CT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-11-23

    Guidelines to assist local communities in review and decision making related to computerized tomography (CT) 'head' and 'whole body' scanner needs and placement are presented. Although medical benefits for head scanning are well established, the proper role of whole body scanning in relation to other diagnostic procedures has not been determined. It is recommended that a 20 percent weighted consideration could be given to a potential CT scanner applicant's present capabilities in diagnostic 'body' work. The following guidelines for CT are recommended for use in assessing work qualifications of potential CT scanner applicants: (1) The facility must have an active neurosurgical service, with a geographically full-time board-certified neurosurgeon and at least 50 intracranial procedures performed annually. (2) The facility must have an active neurological service, with a geographically full-time board-certified neurologist. (3) The facility must have on staff a qualified neuroradiologist. It is recommended that the CT scanner utilization level be a minimum of 3,000 examinations per year per unit of new equipment. The applicant must submit financial data and must be committed to providing care to all patients, independent of ability to pay. The applicant must submit letters from area hospitals agreeing to utilize the scanner services. Additional criteria are given for body scanning work and for the number of scanners in a specific area. Detailed information is presented about scanner development and use in southeastern Pennsylvania and neighboring planning areas, and the cost of scanner operations is compared with revenues. The CT scanner committee membership is included.

  13. Embedded Reasoning Supporting Aerospace IVHM

    DTIC Science & Technology

    2007-01-01

    ...method (BIT or health assessment algorithm) on which the monitoring diagnostic relies for input information ... viewing of the current health state of all monitored subsystems, while also providing a means to probe deeper in the event anomalous operation is ... seeks to integrate detection, diagnostic, and prognostic capabilities with a hierarchical diagnostic reasoning architecture into a single ...

  14. Telepsychiatrists' Medication Treatment Strategies in the Children's Attention-Deficit/Hyperactivity Disorder Telemental Health Treatment Study

    PubMed Central

    Tse, Yuet Juhn; Fesinmeyer, Megan D.; Garcia, Jessica; Myers, Kathleen

    2016-01-01

    Abstract Objective: The purpose of this study was to examine the prescribing strategies that telepsychiatrists used to provide pharmacologic treatment in the Children's Attention-Deficit/Hyperactivity Disorder (ADHD) Telemental Health Treatment Study (CATTS). Methods: CATTS was a randomized controlled trial that demonstrated the superiority of a telehealth service delivery model for the treatment of ADHD with combined pharmacotherapy and behavior training (n=111), compared with management in primary care augmented with a telepsychiatry consultation (n=112). A diagnosis of ADHD was established with the Computerized Diagnostic Interview Schedule for Children (CDISC), and comorbidity for oppositional defiant disorder (ODD) and anxiety disorders (AD) was established using the CDISC and the Child Behavior Checklist. Telepsychiatrists used the Texas Children's Medication Algorithm Project (TCMAP) for ADHD to guide pharmacotherapy and the treat-to-target model to encourage assertive medication management toward a predetermined goal of 50% reduction in ADHD-related symptoms. We assessed whether telepsychiatrists' decision making about medication changes was associated with baseline ADHD symptom severity, comorbidity, and attainment of the treat-to-target goal. Results: Telepsychiatrists showed high fidelity (91%) to their chosen algorithms in medication management. At the end of the trial, the CATTS intervention showed 46.0% attainment of the treat-to-target goal compared with 13.6% for the augmented primary care condition, and significantly greater attainment of the goal by comorbidity status for the ADHD with one and ADHD with two comorbidities groups. Telepsychiatrists were more likely to decide to make medication adjustments for youth with higher baseline ADHD severity and the presence of disorders comorbid with ADHD. 
Multiple mixed methods regression analyses controlling for baseline ADHD severity and comorbidity status indicated that the telepsychiatrists also based their decision making session to session on attainment of the treat-to-target goal. Conclusions: Telepsychiatry is an effective service delivery model for providing pharmacotherapy for ADHD, and the CATTS telepsychiatrists showed high fidelity to evidence-based protocols. PMID:26258927

  15. An algorithm for improving the quality of structural images of turbid media in endoscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.

    2018-04-01

    A high-quality OCT structural image reconstruction algorithm for endoscopic optical coherence tomography of biological tissue is described. The key features of the presented algorithm are: (1) raster scanning and averaging of adjacent A-scans and pixels; (2) speckle level minimization. The described algorithm can be used in gastroenterology, urology, gynecology, and otorhinolaryngology for in vivo and in situ diagnostics of mucous membranes and skin.
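The A-scan averaging step can be sketched as a lateral moving average over the columns of a B-scan, which suppresses uncorrelated speckle roughly in proportion to the square root of the window size. A hedged sketch, not the authors' implementation; the window size is an assumption.

```python
import numpy as np

def average_adjacent_ascans(bscan, window=3):
    """Reduce speckle by replacing each A-scan (column of the B-scan)
    with the mean of itself and its lateral neighbors, i.e. a moving
    average across the scan direction with edge padding."""
    pad = window // 2
    padded = np.pad(bscan, ((0, 0), (pad, pad)), mode="edge")
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="valid"),
        axis=1, arr=padded)
```

A real pipeline would average only A-scans acquired at nearly the same probe position, so that structure is preserved while speckle, which decorrelates between adjacent scans, is smoothed out.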

  16. An algorithmic approach to the brain biopsy--part I.

    PubMed

    Kleinschmidt-DeMasters, B K; Prayson, Richard A

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part I, we assist the resident in learning how to handle brain biopsies in general and how to distinguish nonneoplastic lesions that mimic tumors from true neoplasms.

  17. An algorithmic approach to the brain biopsy--part II.

    PubMed

    Prayson, Richard A; Kleinschmidt-DeMasters, B K

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.

  18. Economic evaluation of the one-hour rule-out and rule-in algorithm for acute myocardial infarction using the high-sensitivity cardiac troponin T assay in the emergency department.

    PubMed

    Ambavane, Apoorva; Lindahl, Bertil; Giannitsis, Evangelos; Roiz, Julie; Mendivil, Joan; Frankenstein, Lutz; Body, Richard; Christ, Michael; Bingisser, Roland; Alquezar, Aitor; Mueller, Christian

    2017-01-01

    The 1-hour (h) algorithm triages patients presenting with suspected acute myocardial infarction (AMI) to the emergency department (ED) towards "rule-out," "rule-in," or "observation," depending on baseline and 1-h levels of high-sensitivity cardiac troponin (hs-cTn). The economic consequences of applying the accelerated 1-h algorithm are unknown. We performed a post-hoc economic analysis in a large, diagnostic, multicenter study of hs-cTnT using central adjudication of the final diagnosis by two independent cardiologists. Length of stay (LoS), resource utilization (RU), and predicted diagnostic accuracy of the 1-h algorithm compared to standard of care (SoC) in the ED were estimated. The ED LoS, RU, and accuracy of the 1-h algorithm were compared to those achieved by the SoC at ED discharge. Expert opinion was sought to characterize clinical implementation of the 1-h algorithm, which required blood draws at ED presentation and 1 h, after which "rule-in" patients were transferred for coronary angiography, "rule-out" patients underwent outpatient stress testing, and "observation" patients received SoC. Unit costs were for the United Kingdom, Switzerland, and Germany. The sensitivity and specificity of the 1-h algorithm were 87% and 96%, respectively, compared to 69% and 98% for SoC. The mean ED LoS for the 1-h algorithm was 4.3 h versus 6.5 h for SoC, a reduction of 33%. The 1-h algorithm was associated with reductions in RU, driven largely by the shorter ED LoS for patients with a diagnosis other than AMI. The estimated total costs per patient were £2,480 for the 1-h algorithm compared to £4,561 for SoC, a reduction of up to 46%. The analysis shows that use of the 1-h algorithm is associated with a reduction in overall AMI diagnostic costs, provided it is carefully implemented in clinical practice. These results need to be prospectively validated in the future.
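The triage logic of a 0 h/1 h algorithm of this kind can be sketched as a pair of baseline and delta checks on the two troponin measurements. The cutoff values below are illustrative placeholders only, NOT clinical thresholds; real values come from the applicable guideline for the specific assay.

```python
def triage_1h(hs_ctnt_0h, hs_ctnt_1h,
              rule_out_abs=5.0, rule_out_delta=3.0,
              rule_in_abs=52.0, rule_in_delta=5.0):
    """Toy 0 h/1 h triage in the spirit of the algorithm studied here.
    All cutoffs are illustrative placeholders (nominally ng/L), not
    clinical decision thresholds.
    - rule-out: very low baseline AND small 0h->1h change
    - rule-in:  high baseline OR large 0h->1h change
    - otherwise: observation"""
    delta = abs(hs_ctnt_1h - hs_ctnt_0h)
    if hs_ctnt_0h < rule_out_abs and delta < rule_out_delta:
        return "rule-out"
    if hs_ctnt_0h >= rule_in_abs or delta >= rule_in_delta:
        return "rule-in"
    return "observation"
```

The economic effect described in the abstract follows directly from this structure: most non-AMI patients hit the rule-out branch after only two blood draws, shortening their ED stay.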

  19. Computerized tests to evaluate recovery of cognitive function after deep sedation with propofol and remifentanil for colonoscopy.

    PubMed

    Borrat, Xavier; Ubre, Marta; Risco, Raquel; Gambús, Pedro L; Pedroso, Angela; Iglesias, Aina; Fernandez-Esparrach, Gloria; Ginés, Àngels; Balust, Jaume; Martínez-Palli, Graciela

    2018-03-27

    The use of sedation for diagnostic procedures, including gastrointestinal endoscopy, is growing rapidly. Recovery of cognitive function after sedation is important because most patients want to resume a safe, normal life soon after the procedure. Computerized tests have been shown to be accurate descriptors of cognitive function. The purpose of the present study was to evaluate the time course of cognitive function recovery after sedation with propofol and remifentanil. A prospective observational double-blind clinical study was conducted in 34 young healthy adults undergoing elective outpatient colonoscopy under sedation with the combination of propofol and remifentanil using a target-controlled infusion system. Cognitive function was measured using a validated battery of computerized cognitive tests (Cogstate™, Melbourne, Australia) at different predefined times: prior to starting sedation (Tbaseline), and then 10 min (T10), 40 min (T40) and 120 min (T120) after the end of colonoscopy. Tests included the assessment of psychomotor function, attention, visual memory and working memory. All colonoscopies were completed (median time: 26 min) without significant adverse events. Patients received a median total dose of propofol and remifentanil of 149 mg and 98 µg, respectively. Psychomotor function and attention declined at T10 but were back to baseline values at T40 for all patients. The magnitude of the psychomotor task reduction was large (d = 0.81); however, 100% of patients had recovered at T40. Memory-related tasks were not affected 10 min after ending sedation. Cognitive impairment in attention and psychomotor function after propofol and remifentanil sedation was significant and large, and could easily be detected by computerized cognitive tests. Even so, patients were fully recovered 40 min after the end of the procedure. From a cognitive recovery point of view, larger studies should be undertaken to propose adequate criteria for discharge after sedation.

  20. A Comparative Analysis of the ADOS-G and ADOS-2 Algorithms: Preliminary Findings.

    PubMed

    Dorlack, Taylor P; Myers, Orrin B; Kodituwakku, Piyadasa W

    2018-06-01

    The Autism Diagnostic Observation Schedule (ADOS) is a widely utilized observational assessment tool for diagnosis of autism spectrum disorders. The original ADOS was succeeded by the ADOS-G with noted improvements. More recently, the ADOS-2 was introduced to further increase its diagnostic accuracy. Studies examining the validity of the ADOS have produced mixed findings, and pooled relationship trends between the algorithm versions are yet to be analyzed. The current review seeks to compare the relative merits of the ADOS-G and ADOS-2 algorithms, Modules 1-3. Eight studies met inclusion criteria for the review, and six were selected for paired comparisons of the sensitivity and specificity of the ADOS. Results indicate several contradictory findings, underscoring the importance of further study.

  1. Automatic analysis and classification of surface electromyography.

    PubMed

    Abou-Chadi, F E; Nashar, A; Saad, M

    2001-01-01

    In this paper, parametric modeling of surface electromyography (SEMG) signals, which facilitates automatic SEMG feature extraction, and artificial neural networks (ANN) are combined to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the conventional Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%, whereas poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
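
    The parametric-modeling step described above can be sketched as follows: a low-order autoregressive (AR) model is fitted to the signal and its coefficients serve as features for a downstream classifier. This is a minimal illustration under invented assumptions (AR order 2, synthetic signal), not the paper's actual pipeline.

```python
# Sketch: parametric (AR) feature extraction from an EMG-like signal.
# The AR order and helper names are illustrative, not from the paper.

def autocorr(x, lag):
    """Biased sample autocorrelation at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def ar2_features(signal):
    """Order-2 AR coefficients via the Yule-Walker equations.
    Solves [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2]."""
    r0, r1, r2 = (autocorr(signal, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return (a1, a2)
```

    In practice the AR order, the feature set, and the classifier (omitted here) would follow the paper's backpropagation, self-organizing map, or probabilistic network designs.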

  2. Bandlimited computerized improvements in characterization of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2016-05-01

    The present article discusses some inroads in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra (NWV) algorithm, is named for its principal contributors [1], [2], [3] over many years of developmental research. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms on the system are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order, in order to combat and reasonably alleviate the curse of dimensionality.
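
    The least-squares fit between the measured response z(t) and a modeled response y(t) can be illustrated with a linear two-tap kernel, a drastically simplified stand-in for the full NWV kernel h = [h0, h1, h2, h3]; the function name and data are invented for illustration.

```python
def fit_linear_kernel(x, z):
    """Least-squares fit of a two-tap kernel h = (h0, h1) such that
    y[t] = h0*x[t] + h1*x[t-1] best matches the measured response z[t]."""
    # Accumulate the normal equations (A^T A) h = A^T z for the 2-tap model.
    s00 = s01 = s11 = b0 = b1 = 0.0
    for t in range(1, len(x)):
        s00 += x[t] * x[t]
        s01 += x[t] * x[t - 1]
        s11 += x[t - 1] * x[t - 1]
        b0 += x[t] * z[t]
        b1 += x[t - 1] * z[t]
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det)
```

    The real NWV formulation extends this to second- and third-order (Volterra) terms, which is where the dimensionality-reduction techniques become essential.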

  3. Diagnostic Challenges in Prostate Cancer and 68Ga-PSMA PET Imaging: A Game Changer?

    PubMed

    Zaman, Maseeh uz; Fatima, Nosheen; Zaman, Areeba; Sajid, Mahwsih; Zaman, Unaiza; Zaman, Sidra

    2017-10-26

    Prostate cancer (PC) is the most frequent solid tumor in men and the third most common cause of cancer mortality among men in developed countries. Current imaging modalities like ultrasound (US), computerized tomography (CT), magnetic resonance imaging (MRI) and choline-based positron emission tomography (PET) have disappointing sensitivity for the detection of nodal metastasis and small tumor recurrence. This poses a diagnostic challenge in the staging of intermediate- to high-risk PC and the restaging of patients with biochemical recurrence (PSA >0.2 ng/ml). Gallium-68 labeled prostate specific membrane antigen (68Ga-PSMA) PET imaging has now emerged with a higher diagnostic yield. 68Ga-PSMA PET/CT or PET/MRI can be expected to offer a one-stop-shop for staging and restaging of PC. PSMA ligands labeled with alpha and beta emitters have also shown promising therapeutic efficacy for nodal, bone and visceral metastasis. Therefore, a PSMA-based theranostics approach for detection, staging, treatment, and follow-up of PC would appear to be highly valuable for achieving personalized PC treatment.

  4. Deep learning based syndrome diagnosis of chronic gastritis.

    PubMed

    Liu, Guo-Ping; Yan, Jian-Jun; Wang, Yi-Qin; Zheng, Wu; Zhong, Tao; Lu, Xiong; Qian, Peng

    2014-01-01

    In Traditional Chinese Medicine (TCM), most algorithms used for syndrome diagnosis are shallow-structure algorithms that do not consider the cognitive perspective of the brain. In clinical practice, however, the relationship between symptoms (signs) and syndromes is complex and nonlinear. We therefore employed deep learning and multilabel learning to construct a syndrome diagnostic model for chronic gastritis (CG) in TCM. The results showed that deep learning could improve the accuracy of syndrome recognition. Moreover, this study provides a reference for constructing syndrome diagnostic models and guidance for clinical practice.
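
    The multilabel aspect can be sketched minimally: each syndrome receives an independent sigmoid score, and every syndrome above a threshold is returned, so one symptom vector may map to several syndromes at once. The weights and syndrome names below are invented placeholders; the paper's model learns such mappings with a deep network.

```python
import math

# Illustrative multilabel prediction by thresholding independent sigmoid
# outputs. SYNDROMES and the weight rows are made up for this sketch.

SYNDROMES = ["spleen-qi deficiency", "liver-qi stagnation", "damp-heat"]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def predict_labels(features, weight_rows, threshold=0.5):
    """Return every syndrome whose sigmoid score exceeds the threshold,
    allowing one symptom vector to carry several labels."""
    labels = []
    for name, w in zip(SYNDROMES, weight_rows):
        score = sigmoid(sum(wi * fi for wi, fi in zip(w, features)))
        if score > threshold:
            labels.append(name)
    return labels
```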

  5. Open Energy Information System version 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OpenEIS was created to provide standard methods for authoring, sharing, testing, using, and improving algorithms for operational building energy efficiency with building managers and building owners. OpenEIS is designed as a no-cost/low-cost solution that will propagate the fault detection and diagnostic (FDD) solutions into the marketplace by providing state-of-the-art analytical and diagnostic algorithms. As OpenEIS penetrates the market, demand by control system manufacturers and integrators serving small and medium commercial customers will help push these types of commercial software tool offerings into the broader marketplace.

  6. Deep Learning Based Syndrome Diagnosis of Chronic Gastritis

    PubMed Central

    Liu, Guo-Ping; Wang, Yi-Qin; Zheng, Wu; Zhong, Tao; Lu, Xiong; Qian, Peng

    2014-01-01

    In Traditional Chinese Medicine (TCM), most algorithms used for syndrome diagnosis are shallow-structure algorithms that do not consider the cognitive perspective of the brain. In clinical practice, however, the relationship between symptoms (signs) and syndromes is complex and nonlinear. We therefore employed deep learning and multilabel learning to construct a syndrome diagnostic model for chronic gastritis (CG) in TCM. The results showed that deep learning could improve the accuracy of syndrome recognition. Moreover, this study provides a reference for constructing syndrome diagnostic models and guidance for clinical practice. PMID:24734118

  7. Early prosthetic aortic valve infection identified with the use of positron emission tomography in a patient with lead endocarditis.

    PubMed

    Amraoui, Sana; Tlili, Ghoufrane; Sohal, Manav; Bordenave, Laurence; Bordachar, Pierre

    2016-12-01

    18-Fluorodeoxyglucose positron emission tomography/computerized tomography (FDG PET/CT) scanning has recently been proposed as a diagnostic tool for lead endocarditis (LE). FDG PET/CT might also be useful for localizing associated septic emboli in patients with LE. We report an interesting case of an LE patient with a prosthetic aortic valve in whom a trans-esophageal echocardiogram (TEE) did not show associated aortic endocarditis. FDG PET/CT revealed prosthetic aortic valve infection. A second TEE performed 2 weeks later identified an aortic vegetation. A longer duration of antimicrobial therapy with serial follow-up echocardiography was initiated. There was also increased uptake in the sigmoid colon, corresponding to focal polyps resected during a colonoscopy. FDG PET/CT scanning seems to be highly sensitive for the diagnosis of prosthetic aortic valve endocarditis. This promising diagnostic tool may be beneficial in LE patients by identifying septic emboli and potential sites of pathogen entry.

  8. Dual and multiple diagnosis among substance using runaway youth.

    PubMed

    Slesnick, Natasha; Prestopnik, Jillian

    2005-01-01

    Although research on runaway and homeless youth is increasing, relatively little is known about the diagnostic profile of runaway adolescents. The current study examined patterns of psychiatric dual and multiple diagnosis among a sample (N=226) of treatment-engaged substance-abusing youth (ages 13 to 17) who were residing at a runaway shelter. As part of a larger treatment outcome study, the youths' psychiatric status was assessed using the DSM-IV based computerized diagnostic interview schedule for children [CDISC; (1)]. The majority of the youth in our sample met criteria for dual or multiple diagnosis (60%), with many having more than one substance-use diagnosis (56%). The severity of mental-health and substance-use problems in this sample of substance-abusing runaways suggests the need for continued development of comprehensive services. The range and intensity of diagnoses seen indicate a need for greater focus on treatment development and strategies to address their multiple areas of risk.

  9. The validity and clinical utility of purging disorder.

    PubMed

    Keel, Pamela K; Striegel-Moore, Ruth H

    2009-12-01

    To review evidence of the validity and clinical utility of Purging Disorder and examine options for the Diagnostic and Statistical Manual of Mental Disorders fifth edition (DSM-V). Articles were identified by computerized and manual searches and reviewed to address five questions about Purging Disorder: Is there "ample" literature? Is the syndrome clearly defined? Can it be measured and diagnosed reliably? Can it be differentiated from other eating disorders? Is there evidence of syndrome validity? Although empirical classification and concurrent validity studies provide emerging support for the distinctiveness of Purging Disorder, questions remain about definition, diagnostic reliability in clinical settings, and clinical utility (i.e., prognostic validity). We discuss strengths and weaknesses associated with various options for the status of Purging Disorder in the DSM-V ranging from making no changes from DSM-IV to designating Purging Disorder a diagnosis on equal footing with Anorexia Nervosa and Bulimia Nervosa.

  10. Pathways to multidrug-resistant tuberculosis diagnosis and treatment initiation: a qualitative comparison of patients' experiences in the era of rapid molecular diagnostic tests.

    PubMed

    Naidoo, Pren; van Niekerk, Margaret; du Toit, Elizabeth; Beyers, Nulda; Leon, Natalie

    2015-10-28

    Although new molecular diagnostic tests such as GenoType MTBDRplus and Xpert® MTB/RIF have reduced multidrug-resistant tuberculosis (MDR-TB) treatment initiation times, patients' experiences of diagnosis and treatment initiation are not known. This study aimed to explore and compare MDR-TB patients' experiences of their diagnostic and treatment initiation pathway in GenoType MTBDRplus and Xpert® MTB/RIF-based diagnostic algorithms. The study was undertaken in Cape Town, South Africa where primary health-care services provided free TB diagnosis and treatment. A smear, culture and GenoType MTBDRplus diagnostic algorithm was used in 2010, with Xpert® MTB/RIF phased in from 2011-2013. Participants diagnosed in each algorithm at four facilities were purposively sampled, stratifying by age, gender and MDR-TB risk profiles. We conducted in-depth qualitative interviews using a semi-structured interview guide. Through constant comparative analysis we induced common and divergent themes related to symptom recognition, health-care access, testing for MDR-TB and treatment initiation within and between groups. Data were triangulated with clinical information and health visit data from a structured questionnaire. We identified both enablers and barriers to early MDR-TB diagnosis and treatment. Half the patients had previously been treated for TB; most recognised recurring symptoms and reported early health-seeking. Those who attributed symptoms to other causes delayed health-seeking. Perceptions of poor public sector services were prevalent and may have contributed both to deferred health-seeking and to patient's use of the private sector, contributing to delays. However, once on treatment, most patients expressed satisfaction with public sector care. Two patients in the Xpert® MTB/RIF-based algorithm exemplified its potential to reduce delays, commencing MDR-TB treatment within a week of their first health contact. 
However, most patients in both algorithms experienced substantial delays. Avoidable health system delays resulted from providers not testing for TB at initial health contact, non-adherence to testing algorithms, results not being available and failure to promptly recall patients with positive results. Whilst the introduction of rapid tests such as Xpert® MTB/RIF can expedite MDR-TB diagnosis and treatment initiation, the full benefits are unlikely to be realised without reducing delays in health-seeking and addressing the structural barriers present in the health-care system.

  11. Automated detection of pulmonary nodules in CT images with support vector machines

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Liu, Wanyu; Sun, Xiaoming

    2008-10-01

    Many methods have been proposed to help radiologists avoid missing small pulmonary nodules. Recently, support vector machines (SVMs) have received increasing attention for pattern recognition. In this paper, we present a computerized system aimed at pulmonary nodule detection; it identifies the lung field, extracts a set of candidate regions with a high sensitivity ratio and then classifies candidates using SVMs. The Computer Aided Diagnosis (CAD) system presented in this paper supports the diagnosis of pulmonary nodules from Computed Tomography (CT) images as inflammation, tuberculoma, granuloma, sclerosing hemangioma, or malignant tumor. Five texture feature sets were extracted for each lesion, and a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of SVM classifiers. The achieved classification performance was 100%, 92.75% and 90.23% in the training, validation and testing sets, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
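
    The genetic-algorithm feature-selection step can be sketched as follows. This is a simplified stand-in: the paper pairs the GA with an SVM ensemble, whereas here the fitness is a toy class-separability score so the example stays self-contained, and all data are synthetic.

```python
import random

# Toy GA feature selection: a mask of booleans selects features, fitness is
# the mean inter-class distance over selected features (invented surrogate
# for the paper's SVM-based evaluation).

def fitness(mask, class_a, class_b):
    idx = [i for i, keep in enumerate(mask) if keep]
    if not idx:
        return 0.0
    sep = 0.0
    for i in idx:
        ma = sum(r[i] for r in class_a) / len(class_a)
        mb = sum(r[i] for r in class_b) / len(class_b)
        sep += abs(ma - mb)
    return sep / len(idx)

def select_features(class_a, class_b, n_feat, generations=30, pop=12, seed=1):
    rng = random.Random(seed)
    population = [[rng.random() < 0.5 for _ in range(n_feat)]
                  for _ in range(pop)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda m: fitness(m, class_a, class_b),
                        reverse=True)
        parents = scored[: pop // 2]          # truncation selection
        children = []
        for p in parents:                     # one-point crossover + mutation
            q = rng.choice(parents)
            cut = rng.randrange(1, n_feat)
            child = p[:cut] + q[cut:]
            j = rng.randrange(n_feat)
            child[j] = not child[j]
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, class_a, class_b))
```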

  12. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS).

    PubMed

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths, based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect underlying damage in a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover information about the internal microstructure of the STS structure from the measurement data. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique is capable of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique.
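
    The ℓ1-minimization recovery step can be sketched with iterative soft-thresholding (ISTA), one standard solver for this class of problem; the paper does not specify its solver, and the sensing matrix and sizes below are synthetic stand-ins for the UCT path geometry.

```python
# ISTA sketch: minimize 0.5*||Ax - y||^2 + lam*||x||_1, recovering a sparse
# vector (e.g. a damage/slowness image) from fewer measurements than unknowns.

def ista(A, y, lam=0.05, step=0.05, iters=2000):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y and gradient g = A^T r of the quadratic term
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            v = x[j] - step * g[j]
            # soft-thresholding implements the l1 penalty (promotes sparsity)
            x[j] = max(abs(v) - step * lam, 0.0) * (1.0 if v >= 0 else -1.0)
    return x
```

    The step size must be small relative to the largest eigenvalue of AᵀA for the iteration to converge; production codes use accelerated variants (e.g. FISTA) or specialized ℓ1 solvers.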

  13. A Testbed for Data Fusion for Helicopter Diagnostics and Prognostics

    DTIC Science & Technology

    2003-03-01

    …and algorithm design and tuning in order to develop advanced diagnostic and prognostic techniques for aircraft health monitoring. Here a… and development of models for diagnostics, prognostics, and anomaly detection. Figure 5 VMEP Server Browser Interface … detections, and prognostic prediction time horizons. The VMEP system and in particular the web component are ideal for performing data collection

  14. Using Standardized Diagnostic Instruments to Classify Children with Autism in the Study to Explore Early Development

    ERIC Educational Resources Information Center

    Wiggins, Lisa D.; Reynolds, Ann; Rice, Catherine E.; Moody, Eric J.; Bernal, Pilar; Blaskey, Lisa; Rosenberg, Steven A.; Lee, Li-Ching; Levy, Susan E.

    2015-01-01

    The Study to Explore Early Development (SEED) is a multi-site case-control study designed to explore the relationship between autism spectrum disorder (ASD) phenotypes and etiologies. The goals of this paper are to (1) describe the SEED algorithm that uses the Autism Diagnostic Interview-Revised (ADI-R) and Autism Diagnostic Observation Schedule…

  15. Multisite Study of New Autism Diagnostic Interview-Revised (ADI-R) Algorithms for Toddlers and Young Preschoolers

    ERIC Educational Resources Information Center

    Kim, So Hyun; Thurm, Audrey; Shumway, Stacy; Lord, Catherine

    2013-01-01

    Using two independent datasets provided by National Institute of Health funded consortia, the Collaborative Programs for Excellence in Autism and Studies to Advance Autism Research and Treatment (n = 641) and the National Institute of Mental Health (n = 167), diagnostic validity and factor structure of the new Autism Diagnostic Interview (ADI-R)…

  16. Multivariate Analysis As a Support for Diagnostic Flowcharts in Allergic Bronchopulmonary Aspergillosis: A Proof-of-Concept Study.

    PubMed

    Vitte, Joana; Ranque, Stéphane; Carsin, Ania; Gomez, Carine; Romain, Thomas; Cassagne, Carole; Gouitaa, Marion; Baravalle-Einaudi, Mélisande; Bel, Nathalie Stremler-Le; Reynaud-Gaubert, Martine; Dubus, Jean-Christophe; Mège, Jean-Louis; Gaudart, Jean

    2017-01-01

    Molecular-based allergy diagnosis yields multiple biomarker datasets. The classical diagnostic score for allergic bronchopulmonary aspergillosis (ABPA), a severe disease usually occurring in asthmatic patients and people with cystic fibrosis, comprises succinct immunological criteria formulated in 1977: total IgE, anti-Aspergillus fumigatus (Af) IgE, anti-Af "precipitins," and anti-Af IgG. Progress achieved over the last four decades has led to multiple IgE and IgG(4) Af biomarkers available with quantitative, standardized, molecular-level reports. These newly available biomarkers have not been included in the current diagnostic criteria, either individually or in algorithms, despite persistent underdiagnosis of ABPA. Large numbers of individual biomarkers may hinder their use in clinical practice. Conversely, multivariate analysis using new tools may offer a better chance of making fewer diagnostic mistakes. We report here a proof-of-concept work consisting of a three-step multivariate analysis of Af IgE, IgG, and IgG4 biomarkers through a combination of principal component analysis, hierarchical ascendant classification, and classification and regression tree multivariate analysis. The resulting diagnostic algorithms might show the way toward novel criteria and improved diagnostic efficiency in Af-sensitized patients at risk for ABPA.
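
    The first step of the three-step analysis, principal component analysis, can be sketched with power iteration on the covariance matrix of a small synthetic biomarker table; the later clustering and CART stages are omitted, and the data are invented.

```python
# PCA sketch: find the leading principal component of a biomarker matrix
# (rows = patients, columns = biomarkers) by power iteration on the
# sample covariance matrix.

def first_principal_component(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```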

  17. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Rigid image registration is straightforward and has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed for PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features incorporating spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response. These models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy of tumor response evaluation. This work was supported in part by National Cancer Institute Grant R01CA172638.

  18. Non-invasive optical detection of esophagus cancer based on urine surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Shaohua; Wang, Lan; Chen, Weiwei; Lin, Duo; Huang, Lingling; Wu, Shanshan; Feng, Shangyuan; Chen, Rong

    2014-09-01

    A surface-enhanced Raman spectroscopy (SERS) approach was utilized for urine biochemical analysis with the aim of developing a label-free, non-invasive optical diagnostic method for esophagus cancer detection. SERS spectra were acquired from 31 normal urine samples and 47 malignant esophagus cancer (EC) urine samples. Tentative assignments of the urine SERS bands demonstrated esophagus cancer-specific changes in the urine of normal subjects compared with EC patients, including an increase in the relative amount of urea and a decrease in the percentage of uric acid. An empirical algorithm integrated with linear discriminant analysis (LDA) was employed to identify important urine SERS bands for differentiating healthy subjects from EC patients. The empirical diagnostic approach based on the ratios of the SERS peak intensities at 527 to 1002 cm-1 and 725 to 1002 cm-1, coupled with LDA, yielded a diagnostic sensitivity of 72.3% and specificity of 96.8%. The area under the receiver operating characteristic (ROC) curve was 0.954, further confirming the performance of the diagnostic algorithm based on the SERS peak intensity ratios combined with LDA. This work demonstrates that urine SERS spectra combined with an empirical algorithm have potential for noninvasive diagnosis of esophagus cancer.
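
    The two-stage scheme above (peak-intensity ratios as features, then a linear discriminant) can be sketched as follows. The spectra are represented as dicts mapping Raman shift to intensity, with made-up numbers; the real method works on full measured spectra.

```python
# Sketch: ratio features (I527/I1002, I725/I1002) followed by a two-class
# Fisher linear discriminant in the 2-D feature space.

def ratio_features(spectrum):
    return (spectrum[527] / spectrum[1002], spectrum[725] / spectrum[1002])

def fisher_direction(group_a, group_b):
    """2-D Fisher LDA direction w = Sw^-1 (mean_a - mean_b)."""
    def mean(g):
        return [sum(x[j] for x in g) / len(g) for j in (0, 1)]
    ma, mb = mean(group_a), mean(group_b)
    s = [[0.0, 0.0], [0.0, 0.0]]          # pooled within-class scatter Sw
    for g, m in ((group_a, ma), (group_b, mb)):
        for x in g:
            dx = [x[0] - m[0], x[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += dx[i] * dx[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return ((s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (s[0][0] * dm[1] - s[1][0] * dm[0]) / det)
```

    Projecting a new sample's ratio features onto w and thresholding between the projected class means gives the binary decision.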

  19. Autism in the Faroe Islands: Diagnostic Stability from Childhood to Early Adult Life

    PubMed Central

    Kočovská, Eva; Billstedt, Eva; Ellefsen, Asa; Kampmann, Hanna; Gillberg, I. Carina; Biskupstø, Rannvá; Andorsdóttir, Guðrið; Stóra, Tormóður; Minnis, Helen; Gillberg, Christopher

    2013-01-01

    Childhood autism or autism spectrum disorder (ASD) has been regarded as one of the most stable diagnostic categories applied to young children with psychiatric/developmental disorders. The stability over time of a diagnosis of ASD is theoretically interesting and important for various diagnostic and clinical reasons. We studied the diagnostic stability of ASD from childhood to early adulthood in the Faroe Islands: a total school age population sample (8–17-year-olds) was screened and diagnostically assessed for AD in 2002 and 2009. This paper compares both independent clinical diagnosis and Diagnostic Interview for Social and Communication Disorders (DISCO) algorithm diagnosis at two time points, separated by seven years. The stability of clinical ASD diagnosis was perfect for AD, good for “atypical autism”/PDD-NOS, and less than perfect for Asperger syndrome (AS). Stability of the DISCO algorithm subcategory diagnoses was more variable but still good for AD. Both systems showed excellent stability over the seven-year period for “any ASD” diagnosis, although a number of clear cases had been missed at the original screening in 2002. The findings support the notion that subcategories of ASD should be collapsed into one overarching diagnostic entity with subgrouping achieved on other “non-autism” variables, such as IQ and language levels and overall adaptive functioning. PMID:23476144

  20. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  1. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially in stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent increase in lifetime and reduction in maintenance costs. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
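
    The Fault Signature Matrix idea can be sketched simply: each fault has a binary signature over the monitored symptoms (residuals), and a fault is isolated when the observed symptom vector matches its signature. The faults and symptoms below are invented placeholders, not the paper's actual FSM.

```python
# Minimal FSM sketch: rows = faults, columns = monitored symptoms
# (1 = this fault is expected to trigger this symptom).

FSM = {
    "air_blower_degradation": (1, 0, 1, 0),
    "fuel_leakage":           (0, 1, 1, 0),
    "stack_degradation":      (0, 1, 0, 1),
}

def isolate_fault(symptoms):
    """Return all faults whose signature exactly matches the observed
    symptom vector; an empty list means the pattern is not isolable."""
    return [fault for fault, sig in FSM.items() if sig == symptoms]
```

    Real systems must also handle partial matches and simultaneous faults, which is one reason the paper refines its FSM through fault simulations.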

  2. Efficient fault diagnosis of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, H.; Danai, K.; Lewicki, D. G.

    1993-01-01

    Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.

  3. A Computerized Microelectrode Recording to Magnetic Resonance Imaging Mapping System for Subthalamic Nucleus Deep Brain Stimulation Surgery.

    PubMed

    Dodani, Sunjay S; Lu, Charles W; Aldridge, J Wayne; Chou, Kelvin L; Patil, Parag G

    2018-06-01

    Accurate electrode placement is critical to the success of deep brain stimulation (DBS) surgery. Suboptimal targeting may arise from poor initial target localization, frame-based targeting error, or intraoperative brain shift. These uncertainties can make DBS surgery challenging. To develop a computerized system to guide subthalamic nucleus (STN) DBS electrode localization and to estimate the trajectory of intraoperative microelectrode recording (MER) on magnetic resonance (MR) images algorithmically during DBS surgery. Our method is based upon the relationship between the high-frequency band (HFB; 500-2000 Hz) signal from MER and voxel intensity on MR images. The HFB profile along an MER trajectory recorded during surgery is compared to voxel intensity profiles along many potential trajectories in the region of the surgically planned trajectory. From these comparisons of HFB recordings and potential trajectories, an estimate of the MER trajectory is calculated. This calculated trajectory is then compared to the actual trajectory, as estimated by postoperative high-resolution computed tomography. We compared 20 planned, calculated, and actual trajectories in 13 patients who underwent STN DBS surgery. Targeting errors for our calculated trajectories (2.33 mm ± 0.2 mm) were significantly smaller than errors for surgically planned trajectories (2.83 mm ± 0.2 mm; P = .01), improving targeting prediction in 70% of individual cases (14/20). Moreover, in 4 of 4 initial MER trajectories that missed the STN, our method correctly indicated the required direction of targeting adjustment for the DBS lead to intersect the STN. A computer-based algorithm simultaneously utilizing MER and MR information potentially eases electrode localization during STN DBS surgery.
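
    The core matching step, comparing the depth profile of HFB power against voxel-intensity profiles sampled along candidate trajectories, can be sketched with Pearson correlation as the similarity measure; the profiles below are synthetic, and the real method operates on MR volumes.

```python
# Sketch: pick the candidate trajectory whose voxel-intensity profile best
# correlates with the measured HFB profile.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def best_trajectory(hfb_profile, candidate_profiles):
    """Index of the candidate whose profile best matches the HFB profile."""
    scores = [pearson(hfb_profile, c) for c in candidate_profiles]
    return scores.index(max(scores))
```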

  4. New web-based algorithm to improve rigid gas permeable contact lens fitting in keratoconus.

    PubMed

    Ortiz-Toquero, Sara; Rodriguez, Guadalupe; de Juan, Victoria; Martin, Raul

    2017-06-01

    To calculate and validate a new web-based algorithm for selecting the back optic zone radius (BOZR) of spherical gas permeable (GP) lenses in keratoconus eyes. A retrospective calculation (n=35; multiple regression analysis) and a posterior prospective validation (new sample of 50 keratoconus eyes) of a new algorithm to select the BOZR of spherical KAKC design GP lenses (Conoptica) in keratoconus were conducted. The BOZR calculated with the new algorithm, the manufacturer guidelines and the APEX software were compared with the BOZR that was finally prescribed. The numbers of diagnostic lenses, ordered lenses and visits needed to achieve an optimal fitting were recorded and compared with those obtained for a control group [50 healthy eyes fitted with spherical GP lenses (BIAS design; Conoptica)]. The new algorithm correlated highly with the final BOZR fitted (r² = 0.825, p<0.001). The BOZR of the first diagnostic lens chosen using the new algorithm showed a smaller difference from the final BOZR prescribed (-0.01±0.12 mm, p=0.65; 58% of differences ≤0.05 mm) than the manufacturer guidelines (+0.12±0.22 mm, p<0.001; 26% of differences ≤0.05 mm) and the APEX software (-0.14±0.16 mm, p=0.001; 34% of differences ≤0.05 mm). Similar numbers of diagnostic lenses (1.6±0.8 vs 1.3±0.5; p=0.02), ordered lenses (1.4±0.6 vs 1.1±0.3; p<0.001), and visits (3.4±0.7 vs 3.2±0.4; p=0.08) were required to fit keratoconus and healthy eyes, respectively. This new algorithm (free access at www.calculens.com) improves spherical KAKC GP fitting in keratoconus and can reduce practitioner and patient chair time to achieve a final acceptable fit. The algorithm reduces the differences between keratoconus GP fitting (KAKC design) and standard GP fitting in healthy eyes (BIAS design). Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
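
    The retrospective step of such a study fits a multiple regression predicting the finally prescribed BOZR from pre-fit measurements. The abstract does not state the predictors or coefficients, so the sketch below assumes flat and steep keratometry radii purely for illustration.

```python
import numpy as np

# Illustrative only: the paper's actual predictor variables and coefficients
# are not given in the abstract. Here BOZR is regressed on flat and steep
# keratometry radii (hypothetical predictors) via ordinary least squares.
def fit_bozr_model(flat_k, steep_k, bozr_final):
    X = np.column_stack([np.ones(len(flat_k)), flat_k, steep_k])
    coef, *_ = np.linalg.lstsq(X, np.asarray(bozr_final, dtype=float), rcond=None)
    return coef

def predict_bozr(coef, flat_k, steep_k):
    return coef[0] + coef[1] * flat_k + coef[2] * steep_k
```

    Once fitted on past cases, such a model supplies the BOZR of the first diagnostic lens for a new eye, which is what the validation sample then tests against the finally prescribed BOZR.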

  5. Validation of existing diagnosis of autism in mainland China using standardised diagnostic instruments.

    PubMed

    Sun, Xiang; Allison, Carrie; Auyeung, Bonnie; Zhang, Zhixiang; Matthews, Fiona E; Baron-Cohen, Simon; Brayne, Carol

    2015-11-01

    Research to date in mainland China has mainly focused on children with autistic disorder rather than Autism Spectrum Conditions and the diagnosis largely depended on clinical judgment without the use of diagnostic instruments. Whether children who have been diagnosed in China before meet the diagnostic criteria of Autism Spectrum Conditions is not known nor how many such children would meet these criteria. The aim of this study was to evaluate children with a known diagnosis of autism in mainland China using the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview-Revised to verify that children who were given a diagnosis of autism made by Chinese clinicians in China were mostly children with severe autism. Of 50 children with an existing diagnosis of autism made by Chinese clinicians, 47 children met the diagnosis of autism on the Autism Diagnostic Observation Schedule algorithm and 44 children met the diagnosis of autism on the Autism Diagnostic Interview-Revised algorithm. Using the Gwet's alternative chance-corrected statistic, the agreement between the Chinese diagnosis and the Autism Diagnostic Observation Schedule diagnosis was very good (AC1 = 0.94, p < 0.005, 95% confidence interval (0.86, 1.00)), so was the agreement between the Chinese diagnosis and the Autism Diagnostic Interview-Revised (AC1 = 0.91, p < 0.005, 95% confidence interval (0.81, 1.00)). The agreement between the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview-Revised was lower but still very good (AC1 = 0.83, p < 0.005). © The Author(s) 2015.
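
    Gwet's AC1 corrects observed agreement for chance agreement using the raters' marginal rates. A sketch of the two-rater binary form follows; the 50/47 rating split used in the test is an illustrative reconstruction consistent with the abstract's figures, not the study data themselves.

```python
def gwet_ac1(r1, r2):
    """Gwet's AC1 chance-corrected agreement for two raters with binary
    ratings (sequences of 0/1). pa is the observed agreement; the chance
    agreement is pe = 2*pi*(1 - pi), where pi is the mean of the two
    raters' positive-rate marginals. Simplified two-rater binary form."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    pi = (sum(r1) / n + sum(r2) / n) / 2
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)
```

    With all 50 existing diagnoses positive and 47 of 50 instrument ratings agreeing, this form yields AC1 ≈ 0.94, consistent with the value reported above.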

  7. Magnetic resonance imaging for the ophthalmologist: A primer

    PubMed Central

    Simha, Arathi; Irodi, Aparna; David, Sarada

    2012-01-01

    Magnetic resonance imaging (MRI) and computerized tomography (CT) have added a new dimension to the diagnosis and management of ocular and orbital diseases. Although CT is more widely used, MRI is the modality of choice in select conditions and can be complementary to CT in certain situations. The diagnostic yield is best when the ophthalmologist and radiologist work together. Ophthalmologists should be able to interpret these complex imaging modalities, as this allows better clinical correlation. In this article, we attempt to describe the basic principles of MRI and its interpretation, avoiding confusing technical terms. PMID:22824600

  8. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia: correlation with manometric measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

    The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with 99mTc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine-mode images of esophageal emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.

  9. American Pancreatic Association Practice Guidelines in Chronic Pancreatitis: Evidence-Based Report on Diagnostic Guidelines

    PubMed Central

    Conwell, Darwin L.; Lee, Linda S.; Yadav, Dhiraj; Longnecker, Daniel S.; Miller, Frank H.; Mortele, Koenraad J.; Levy, Michael J.; Kwon, Richard; Lieb, John G.; Stevens, Tyler; Toskes, Philip P.; Gardner, Timothy B.; Gelrud, Andres; Wu, Bechien U.; Forsmark, Christopher E.; Vege, Santhi S.

    2016-01-01

    The diagnosis of chronic pancreatitis remains challenging in early stages of the disease. This report defines the diagnostic criteria useful in the assessment of patients with suspected and established chronic pancreatitis. All current diagnostic procedures are reviewed, and evidence-based statements are provided about their utility and limitations. Diagnostic criteria for chronic pancreatitis are classified as definitive, probable, or insufficient evidence. A diagnostic (STEP-wise: S-survey, T-tomography, E-endoscopy and P-pancreas function testing) algorithm is proposed that proceeds from a non-invasive to a more invasive approach. This algorithm maximizes specificity (low false positive rate) in subjects with chronic abdominal pain and equivocal imaging changes. Furthermore, a nomenclature is suggested to further characterize patients with established chronic pancreatitis based on TIGAR-O (T-toxic, I-idiopathic, G-genetic, A-autoimmune, R-recurrent and O-obstructive) etiology, gland morphology (Cambridge criteria) and physiologic state (exocrine, endocrine function) for uniformity across future multi-center research collaborations. This guideline will serve as a baseline manuscript that will be modified as new evidence becomes available and our knowledge of chronic pancreatitis improves. PMID:25333398

  10. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

    Evolutionary approach to centralized multiple-faults diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one-by-one on spatially separated parallel processes. A micro-genetic algorithm merges evaluation time efficiency, arising from a small-size population distributed on parallel-synchronized processors, with the effectiveness of centralized evolutionary techniques due to optimal mix of exploitation and exploration. In this way, holistic view and effectiveness advantages of evolutionary global diagnostics are combined with reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogeny diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by EU and the company Del Bo srl, Napoli, Italy.
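
    A micro-genetic algorithm keeps a very small population, relies on elitism plus crossover rather than mutation, and restarts with fresh random individuals whenever the population converges. The sketch below shows that generic serial scheme on a binary genome (for example, a multiple-fault hypothesis vector); the authors' distributed, multi-transducer implementation is more involved.

```python
import random

def micro_ga(fitness, genome_len, pop_size=5, generations=200, seed=0):
    """Minimal micro-GA sketch: tiny population, elitism, uniform crossover,
    and a random restart whenever the population converges (no mutation,
    per the classic micro-GA scheme). Generic illustration only."""
    rng = random.Random(seed)
    new = lambda: [rng.randint(0, 1) for _ in range(genome_len)]
    pop = [new() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        if all(ind == pop[0] for ind in pop):        # converged: restart
            pop = [best[:]] + [new() for _ in range(pop_size - 1)]
        scored = sorted(pop, key=fitness, reverse=True)
        best = max(best, scored[0], key=fitness)     # elitist best-so-far
        p1, p2 = scored[0], scored[1]                # top-two parents
        pop = [best[:]] + [
            [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            for _ in range(pop_size - 1)
        ]
    return best
```

    The small population keeps per-generation evaluation cheap (important on embedded transducer nodes), while the restarts recover the exploration that a larger mutated population would otherwise provide.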

  11. Periprosthetic joint infections: a clinical practice algorithm.

    PubMed

    Volpe, Luigi; Indelli, Pier Francesco; Latella, Leonardo; Poli, Paolo; Yakupoglu, Jale; Marcucci, Massimiliano

    2014-01-01

    Periprosthetic joint infection (PJI) accounts for 25% of failed total knee arthroplasties (TKAs) and 15% of failed total hip arthroplasties (THAs). The purpose of the present study was to design a multidisciplinary diagnostic algorithm to detect a PJI as the cause of a painful TKA or THA. From April 2010 to October 2012, 111 patients with suspected PJI were evaluated. The study group comprised 75 females and 36 males with an average age of 71 years (range, 48 to 94 years). Eighty-four patients had a painful THA, while 27 reported a painful TKA. The stepwise diagnostic algorithm, applied in all the patients, included: measurement of serum C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) levels; imaging studies, including standard radiological examination and standard technetium-99m-methylene diphosphonate (MDP) bone scan (if positive, confirmation by LeukoScan was obtained); and joint aspiration with analysis of synovial fluid. Following application of the stepwise diagnostic algorithm, 24 of our 111 screened patients were classified as having a suspected PJI (21.7%). CRP and ESR levels were negative in 84 and positive in 17 cases; 93.7% of the patients had a positive technetium-labeled bone scan, and 23% a positive LeukoScan. Preoperative synovial fluid analysis was positive in 13.5%; analysis of synovial fluid obtained by preoperative aspiration showed a leucocyte count of >3000 cells/μl in 52% of the patients. The present study showed that the diagnosis of PJI requires the application of a multimodal diagnostic protocol in order to avoid complications related to surgical revision of a misdiagnosed "silent" PJI. Level IV, therapeutic case series.

  12. Probability scores and diagnostic algorithms in pulmonary embolism: are they followed in clinical practice?

    PubMed

    Sanjuán, Pilar; Rodríguez-Núñez, Nuria; Rábade, Carlos; Lama, Adriana; Ferreiro, Lucía; González-Barcala, Francisco Javier; Alvarez-Dobaño, José Manuel; Toubes, María Elena; Golpe, Antonio; Valdés, Luis

    2014-05-01

    Clinical probability scores (CPS) determine the pre-test probability of pulmonary embolism (PE) and assess the need for the tests required in these patients. Our objective was to investigate whether PE is diagnosed according to clinical practice guidelines. Retrospective study of clinically suspected PE in the emergency department between January 2010 and December 2012. A D-dimer value ≥ 500 ng/ml was considered positive. PE was diagnosed on the basis of multislice computed tomography angiography and, to a lesser extent, other imaging techniques. The CPS used was the revised Geneva scoring system. There were 3,924 cases of suspected PE (56% female). The diagnosis was confirmed in 360 patients (9.2%), and the incidence was 30.6 cases per 100,000 inhabitants/year. The sensitivity and negative predictive value of the D-dimer test were 98.7% and 99.2%, respectively. CPS was calculated in only 24 cases (0.6%) and diagnostic algorithms were not followed in 2,125 patients (54.2%): in 682 (17.4%) because clinical probability could not be estimated, and in 482 (37.6%), 852 (46.4%) and 109 (87.9%) patients with low, intermediate and high clinical probability, respectively, because the diagnostic algorithms for these probabilities were not applied. CPS are rarely calculated in the diagnosis of PE and the diagnostic algorithm is rarely used in clinical practice. This may result in procedures with potentially significant side effects being performed unnecessarily, or in a high risk of underdiagnosis. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.
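
    The guideline logic the authors audited (estimate pre-test probability, then choose D-dimer testing or imaging accordingly) can be sketched as follows. The item weights reflect the revised Geneva score as commonly tabulated, and the 500 ng/ml D-dimer cutoff is taken from the abstract; this is an illustrative sketch only, not clinical guidance.

```python
def revised_geneva(age_gt_65, prior_dvt_pe, surgery_fracture_1mo, active_cancer,
                   unilateral_leg_pain, hemoptysis, heart_rate,
                   leg_palpation_pain_and_edema):
    """Revised Geneva score, weights as commonly tabulated (verify against
    the original source before any clinical use)."""
    score = 0
    score += 1 if age_gt_65 else 0
    score += 3 if prior_dvt_pe else 0
    score += 2 if surgery_fracture_1mo else 0
    score += 2 if active_cancer else 0
    score += 3 if unilateral_leg_pain else 0
    score += 2 if hemoptysis else 0
    score += 5 if heart_rate >= 95 else (3 if heart_rate >= 75 else 0)
    score += 4 if leg_palpation_pain_and_edema else 0
    return score

def pe_workup(score, d_dimer_ng_ml=None):
    """Guideline-style triage: low/intermediate probability -> D-dimer first
    (>=500 ng/ml positive, per the abstract); high probability -> imaging."""
    if score >= 11:                       # high clinical probability
        return "CT angiography"
    if d_dimer_ng_ml is None:
        return "order D-dimer"
    return "CT angiography" if d_dimer_ng_ml >= 500 else "PE ruled out"
```

    The study's point is precisely that this score was computed in only 0.6% of suspected cases, so the branch choosing between D-dimer and imaging was usually taken without a documented pre-test probability.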

  13. Evidence-based development of a diagnosis-dependent therapy planning system and its implementation in modern diagnostic software.

    PubMed

    Ahlers, M O; Jakstat, H A

    2005-07-01

    The prerequisite for structured individual therapy of craniomandibular dysfunctions is differential diagnostics. Suggestions for the structured recording of findings and their structured evaluation beyond the global diagnosis of "craniomandibular disorders" have been published. Only this structured approach enables computerization of the diagnostic process. The respective software is available for use in practice (CMDcheck for CMD screening, CMDfact for differential diagnostics). Based on this structured diagnostics, knowledge-based therapy planning is also conceivable. The prerequisite for this would be a model for achieving consensus on the indicated forms of therapy related to the diagnosis. Therefore, a procedure for evidence-based achievement of consensus on suitable forms of therapy in CMD was first developed in multicentric cooperation, and then implemented in corresponding software. The clinical knowledge of experienced specialists was deliberately included in the consensus achievement process. At the same time, anonymized mathematical-statistical evaluations were used for control and objectification. Different examiners from different departments of several universities, working independently of one another, assigned the theoretically conceivable therapeutic alternatives to the already published diagnostic scheme. After anonymization, the correlation of these assignments was calculated mathematically. For those cases in which no agreement initially existed, consensus was subsequently arrived at in the course of a consensus conference on the basis of literature evaluations and the discussion of clinical case examples. This consensus in turn finally served as the basis of a therapy planner implemented in the above-mentioned diagnostic software CMDfact. Contributing to quality assurance, the principles of programming this assistant as well as the interface for linking it into the diagnostic software are documented and also published here.

  14. Development of Multi-perspective Diagnostics and Analysis Algorithms with Applications to Subsonic and Supersonic Combustors

    NASA Astrophysics Data System (ADS)

    Wickersham, Andrew Joseph

    There are two critical research needs for the study of hydrocarbon combustion in high speed flows: 1) combustion diagnostics with adequate temporal and spatial resolution, and 2) mathematical techniques that can extract key information from large datasets. The goal of this work is to address these needs, respectively, by the use of high speed and multi-perspective chemiluminescence and advanced mathematical algorithms. To obtain the measurements, this work explored the application of high speed chemiluminescence diagnostics and the use of fiber-based endoscopes (FBEs) for non-intrusive and multi-perspective chemiluminescence imaging up to 20 kHz. Non-intrusive and full-field imaging measurements provide a wealth of information for model validation and design optimization of propulsion systems. However, it is challenging to obtain such measurements due to various implementation difficulties such as optical access, thermal management, and equipment cost. This work therefore explores the application of FBEs for non-intrusive imaging to supersonic propulsion systems. The FBEs used in this work are demonstrated to overcome many of the aforementioned difficulties and provided datasets from multiple angular positions up to 20 kHz in a supersonic combustor. The combustor operated on ethylene fuel at Mach 2 with an inlet stagnation temperature and pressure of approximately 640 degrees Fahrenheit and 70 psia, respectively. The imaging measurements were obtained from eight perspectives simultaneously, providing full-field datasets under such flow conditions for the first time, allowing the possibility of inferring multi-dimensional measurements. Due to the high speed and multi-perspective nature, such new diagnostic capability generates a large volume of data and calls for analysis algorithms that can process the data and extract key physics effectively. 
To extract the key combustion dynamics from the measurements, three mathematical methods were investigated in this work: Fourier analysis, proper orthogonal decomposition (POD), and wavelet analysis (WA). These algorithms were first demonstrated and tested on imaging measurements obtained from one perspective in a subsonic combustor (up to Mach 0.2). The results show that these algorithms are effective in extracting the key physics from large datasets, including the characteristic frequencies of flow-flame interactions, especially during transient processes such as lean blow-off and ignition. After these relatively simple tests and demonstrations, the algorithms were applied to process the measurements obtained from multiple perspectives in the supersonic combustor. Compared to past analyses (which have been limited to data obtained from one perspective only), the availability of data from multiple perspectives provides further insights into the flame and flow structures in high speed flows. In summary, this work shows that high speed chemiluminescence is a simple yet powerful combustion diagnostic. Especially when combined with FBEs and the analysis algorithms described in this work, such diagnostics provide full-field imaging at high repetition rates in challenging flows. Based on such measurements, a wealth of information can be obtained from proper analysis algorithms, including characteristic frequencies, dominant flame modes, and even multi-dimensional flame and flow structures.
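
    Of the three methods, snapshot POD is the most compact to illustrate: stack each vectorized chemiluminescence image as a column of a matrix and take an SVD; the left singular vectors are spatial modes and the squared singular values rank their energies. This is the generic snapshot formulation, not the authors' specific pipeline.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD sketch: columns of `snapshots` are vectorized images.
    Returns spatial modes U (columns), singular values s (mode energies are
    s**2), and temporal coefficients Vt (rows). The temporal mean field is
    removed first, as is conventional."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U, s, Vt
```

    Dominant flame modes then correspond to the leading columns of U, and the rows of Vt carry the time dynamics on which Fourier or wavelet analysis can be applied.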

  15. Rapid and Accurate Behavioral Health Diagnostic Screening: Initial Validation Study of a Web-Based, Self-Report Tool (the SAGE-SR)

    PubMed Central

    Purcell, Susan E; Rhea, Karen; Maier, Philip; First, Michael; Zweede, Lisa; Sinisterra, Manuela; Nunn, M Brad; Austin, Marie-Paule; Brodey, Inger S

    2018-01-01

    Background The Structured Clinical Interview for DSM (SCID) is considered the gold standard assessment for accurate, reliable psychiatric diagnoses; however, because of its length, complexity, and training required, the SCID is rarely used outside of research. Objective This paper aims to describe the development and initial validation of a Web-based, self-report screening instrument (the Screening Assessment for Guiding Evaluation-Self-Report, SAGE-SR) based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the SCID-5-Clinician Version (CV) intended to make accurate, broad-based behavioral health diagnostic screening more accessible within clinical care. Methods First, study staff drafted approximately 1200 self-report items representing individual granular symptoms in the diagnostic criteria for the 8 primary SCID-CV modules. An expert panel iteratively reviewed, critiqued, and revised items. The resulting items were iteratively administered and revised through 3 rounds of cognitive interviewing with community mental health center participants. In the first 2 rounds, the SCID was also administered to participants to directly compare their Likert self-report and SCID responses. A second expert panel evaluated the final pool of items from cognitive interviewing and criteria in the DSM-5 to construct the SAGE-SR, a computerized adaptive instrument that uses branching logic from a screener section to administer appropriate follow-up questions to refine the differential diagnoses. The SAGE-SR was administered to healthy controls and outpatient mental health clinic clients to assess test duration and test-retest reliability. Cutoff scores for screening into follow-up diagnostic sections and criteria for inclusion of diagnoses in the differential diagnosis were evaluated. 
Results The expert panel reduced the initial 1200 test items to 664 items that panel members agreed collectively represented the SCID items from the 8 targeted modules and DSM criteria for the covered diagnoses. These 664 items were iteratively submitted to 3 rounds of cognitive interviewing with 50 community mental health center participants; the expert panel reviewed session summaries and agreed on a final set of 661 clear and concise self-report items representing the desired criteria in the DSM-5. The SAGE-SR constructed from this item pool took an average of 14 min to complete in a nonclinical sample versus 24 min in a clinical sample. Responses to individual items can be combined to generate DSM criteria endorsements and differential diagnoses, as well as provide indices of individual symptom severity. Preliminary measures of test-retest reliability in a small, nonclinical sample were promising, with good to excellent reliability for screener items in 11 of 13 diagnostic screening modules (intraclass correlation coefficient [ICC] or kappa coefficients ranging from .60 to .90), with mania achieving fair test-retest reliability (ICC=.50) and other substance use endorsed too infrequently for analysis. Conclusions The SAGE-SR is a computerized adaptive self-report instrument designed to provide rigorous differential diagnostic information to clinicians. PMID:29572204

  16. Computerized screening devices and performance assessment: development of a policy towards automation. International Academy of Cytology Task Force summary. Diagnostic Cytology Towards the 21st Century: An International Expert Conference and Tutorial.

    PubMed

    Bartels, P H; Bibbo, M; Hutchinson, M L; Gahm, T; Grohs, H K; Gwi-Mak, E; Kaufman, E A; Kaufman, R H; Knight, B K; Koss, L G; Magruder, L E; Mango, L J; McCallum, S M; Melamed, M R; Peebles, A; Richart, R M; Robinowitz, M; Rosenthal, D L; Sauer, T; Schenck, U; Tanaka, N; Topalidis, T; Verhest, A P; Wertlake, P T; Wilbur, D C

    1998-01-01

    The extension of automation to the diagnostic assessment of clinical materials raises issues of professional responsibility, on the part of both the medical professional and designer of the device. The International Academy of Cytology (IAC) and other professional cytology societies should develop a policy towards automation in the diagnostic assessment of clinical cytologic materials. The following summarizes the discussion of the initial position statement at the International Expert Conference on Diagnostic Cytology Towards the 21st Century, Hawaii, June 1997. 1. The professional in charge of a clinical cytopathology laboratory continues to bear the ultimate medical responsibility for diagnostic decisions made at the facility, whether automated devices are involved or not. 2. The introduction of automated procedures into clinical cytology should under no circumstances lead to a lowering of standards of performance. A prime objective of any guidelines should be to ensure that an automated procedure, in principle, does not expose any patient to new risks, nor should it increase already-existing, inherent risks. 3. Automated devices should provide capabilities for the medical professional to conduct periodic tests of the appropriate performance of the device. 4. Supervisory personnel should continue visual quality control screening of a certain percentage of slides dismissed at primary screening as within normal limits (WNL), even when automated procedures are employed in the laboratory. 5. Specifications for the design of primary screening devices for the detection of cervical cancer issued by the IAC in 1984 were reaffirmed. 6. The setting of numeric performance criteria is the proper charge of regulatory agencies, which also have the power of enforcement. 7. Human expert verification of results represents the "gold standard" at this time. 
Performance characteristics of computerized cytology devices should be determined by adherence to defined and well-considered protocols. Manufacturers should not claim a new standard of care; this is the responsibility of the medical community and professional groups. 8. Cytology professionals should support the development of procedures that bring about an improvement in diagnostic decision making. Advances in technology should be adopted if they can help solve problems in clinical cytology. The introduction of automated procedures into diagnostic decision making should take place strictly under the supervision and with the active participation and critical evaluation by the professional cytology community. Guidelines should be developed for the communication of technical information about the performance of automated screening devices by the IAC to governmental agencies and national societies. Also, guidelines are necessary for the official communication of IAC concerns to industry, medicolegal entities and the media. Procedures and guidelines for the evaluation of studies pertaining to the performance of automated devices, performance metrics and definitions for evaluation criteria should be established.

  17. Design and testing of artifact-suppressed adaptive histogram equalization: a contrast-enhancement technique for display of digital chest radiographs.

    PubMed

    Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F

    1990-01-01

    One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
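
    The artifact-suppression idea behind AHE-family methods can be illustrated with a clip limit applied to the histogram before equalization, which bounds the slope of the grey-level mapping and hence the noise amplification in smooth image regions. The sketch below shows only that ingredient; the published ASAHE algorithm is considerably more elaborate.

```python
import numpy as np

def clipped_hist_equalize(img, n_bins=256, clip_frac=0.01):
    """Contrast-enhancement sketch in the AHE family: histogram equalization
    with a clip limit that suppresses the over-enhancement artifacts plain
    equalization produces in smooth regions. Assumes a non-constant image;
    output grey levels are mapped to [0, 1]."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    limit = max(1, int(clip_frac * img.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // n_bins  # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])          # normalized mapping
    bin_idx = np.clip(np.digitize(img.ravel(), edges[1:-1]), 0, n_bins - 1)
    return cdf[bin_idx].reshape(img.shape)
```

    A smaller `clip_frac` flattens the mapping toward the identity, trading contrast gain for artifact suppression.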

  18. Diagnosing breast cancer using Raman spectroscopy: prospective analysis

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Volynskaya, Zoya; Gardecki, Joseph A.; Nazemi, Jon; Shenk, Robert; Wang, Nancy; Dasari, Ramachandra R.; Fitzmaurice, Maryann; Feld, Michael S.

    2009-09-01

    We present the first prospective test of Raman spectroscopy in diagnosing normal, benign, and malignant human breast tissues. Prospective testing of spectral diagnostic algorithms allows clinicians to accurately assess the diagnostic information contained in, and any bias of, the spectroscopic measurement. In previous work, we developed an accurate, internally validated algorithm for breast cancer diagnosis based on analysis of Raman spectra acquired from fresh-frozen in vitro tissue samples. We currently evaluate the performance of this algorithm prospectively on a large ex vivo clinical data set that closely mimics the in vivo environment. Spectroscopic data were collected from freshly excised surgical specimens, and 129 tissue sites from 21 patients were examined. Prospective application of the algorithm to the clinical data set resulted in a sensitivity of 83%, a specificity of 93%, a positive predictive value of 36%, and a negative predictive value of 99% for distinguishing cancerous from normal and benign tissues. The performance of the algorithm in different patient populations is discussed. Sources of bias in the in vitro calibration and ex vivo prospective data sets, including disease prevalence and disease spectrum, are examined and analytical methods for comparison provided.
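
    The four reported figures of merit are simple functions of confusion-matrix counts. The counts used in the example below are hypothetical, chosen only to be consistent with the 129 sites and the rates reported in the abstract; they illustrate how low disease prevalence depresses the positive predictive value even when sensitivity and specificity are high.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

    With few true cancers among many benign/normal sites, even a handful of false positives outnumbers the true positives, which is why PPV can sit near 36% while NPV approaches 99%.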

  19. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  20. A cDNA microarray gene expression data classifier for clinical diagnostics based on graph theory.

    PubMed

    Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco

    2011-01-01

    Despite great advances in discovering cancer molecular profiles, the proper application of microarray technology to routine clinical diagnostics is still a challenge. Current practice in the classification of microarray data shows two main limitations: the reliability of the training data sets used to build the classifiers, and the classifiers' performance, especially when the sample to be classified does not belong to any of the available classes. In this case, state-of-the-art algorithms usually produce a high rate of false positives that, in real diagnostic applications, are unacceptable. To address this problem, this paper presents a new cDNA microarray data classification algorithm, based on graph theory, that is able to overcome most of the limitations of known classification methodologies. The classifier works by analyzing gene expression data organized in an innovative data structure based on graphs, where vertices correspond to genes and edges to gene expression relationships. To demonstrate the novelty of the proposed approach, the authors present an experimental performance comparison between the proposed classifier and several state-of-the-art classification algorithms.
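
    A much-simplified sketch of the idea, not the authors' exact construction: build one template graph per class, with genes as vertices and edges between genes whose expression levels are similar, then reject samples that match no template well enough. The rejection step is what curbs false positives for out-of-class samples. The similarity threshold and overlap cutoff below are assumptions:

```python
from itertools import combinations

def expression_graph(sample, threshold=0.5):
    """Edges connect gene pairs whose expression levels are similar --
    a crude stand-in for the paper's gene-relationship edges."""
    return {frozenset((i, j))
            for i, j in combinations(range(len(sample)), 2)
            if abs(sample[i] - sample[j]) <= threshold}

def classify(sample, class_graphs, reject_below=0.6):
    """Assign the class whose template graph shares the most edges;
    return None (reject) if the best overlap is too low."""
    g = expression_graph(sample)
    best, score = None, 0.0
    for label, template in class_graphs.items():
        overlap = len(g & template) / max(len(template), 1)
        if overlap > score:
            best, score = label, overlap
    return best if score >= reject_below else None
```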

  1. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators; furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by their ability to distinguish sensor faults from component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.

  2. Using the brain's fight-or-flight response for predicting mental illness on the human space flight program

    NASA Astrophysics Data System (ADS)

    Losik, L.

    A predictive medicine program allows disease and illness, including mental illness, to be predicted using tools created to identify the presence of accelerated aging (a.k.a. disease) in electrical and mechanical equipment. When illness and disease can be predicted, actions can be taken to prevent them. A predictive medicine program uses the same tools and practices as a prognostic and health management program to process biological and engineering diagnostic data provided in analog telemetry during prelaunch readiness and space exploration missions. The biological and engineering diagnostic data necessary to predict illness and disease are collected from the pre-launch spaceflight readiness activities and during space flight, allowing the ground crew to perform a prognostic analysis on the results of a diagnostic analysis. The diagnostic, biological data provided in telemetry are converted to prognostic (predictive) data using predictive algorithms. Predictive algorithms demodulate telemetry behavior, revealing the presence of accelerated aging/disease in systems that appear to function normally. Mental illness can be predicted using biological diagnostic measurements provided in CCSDS telemetry from a spacecraft such as the ISS or from a manned spacecraft in deep space. The measurements used to predict mental illness include biological and engineering data from an astronaut's circadian and ultradian rhythms. These data originate deep in the brain, in regions that are damaged by long-term exposure to cortisol and adrenaline whenever the body's fight-or-flight response (FOFR) is activated. This paper defines the brain's FOFR and the diagnostic, biological and engineering measurements needed to predict mental illness, and identifies the predictive algorithms necessary to process the behavior in CCSDS analog telemetry to predict, and thus prevent, mental illness on human spaceflight missions.

  3. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

    Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into training cohort (n = 518) and testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
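
    One plausible reading of the correlation-driven windowing step (a sketch under assumptions; the paper's exact rule may differ): grow a window while adjacent wavenumber columns remain highly correlated across the measured spectra, and cut a window boundary where the correlation drops. The `r_min` cutoff is an assumed parameter:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_windows(spectra, r_min=0.9):
    """Group adjacent wavenumbers into variably-sized windows: extend the
    current window while neighbouring wavenumber columns stay highly
    correlated across samples (rows of `spectra`)."""
    n_wn = len(spectra[0])
    cols = [[row[j] for row in spectra] for j in range(n_wn)]
    windows, start = [], 0
    for j in range(1, n_wn):
        if pearson(cols[j - 1], cols[j]) < r_min:
            windows.append((start, j))   # half-open index range [start, j)
            start = j
    windows.append((start, n_wn))
    return windows
```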

  4. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05) underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
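
    The comparisons rest on standard measures of spatial overlap and volume bias. A minimal sketch, representing a segmentation as a set of voxel indices (a simplification of the binary masks used in such studies); the Dice coefficient is one common overlap measure, though the study assessed several:

```python
def dice(mask_a, mask_b):
    """Dice overlap of two segmentations given as sets of voxel indices:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

def volume_bias_pct(measured_ml, true_ml):
    """Signed percent bias of a measured phantom-nodule volume."""
    return 100.0 * (measured_ml - true_ml) / true_ml
```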

  5. Diagnostic accuracy of administrative data algorithms in the diagnosis of osteoarthritis: a systematic review.

    PubMed

    Shrestha, Swastina; Dave, Amish J; Losina, Elena; Katz, Jeffrey N

    2016-07-07

    Administrative health care data are frequently used to study disease burden and treatment outcomes in many conditions including osteoarthritis (OA). OA is a chronic condition with significant disease burden affecting over 27 million adults in the US. There are few studies examining the performance of administrative data algorithms to diagnose OA. The purpose of this study is to perform a systematic review of administrative data algorithms for OA diagnosis and to evaluate the diagnostic characteristics of algorithms based on restrictiveness and reference standards. Two reviewers independently screened English-language articles published in Medline, Embase, PubMed, and Cochrane databases that used administrative data to identify OA cases. Each algorithm was classified as restrictive or less restrictive based on the number and type of administrative codes required to satisfy the case definition. We recorded sensitivity and specificity of algorithms and calculated positive likelihood ratio (LR+) and positive predictive value (PPV) based on assumed OA prevalence of 0.1, 0.25, and 0.50. The search identified 7 studies that used 13 algorithms. Of these 13 algorithms, 5 were classified as restrictive and 8 as less restrictive. Restrictive algorithms had lower median sensitivity and higher median specificity compared to less restrictive algorithms when reference standards were self-report and American College of Rheumatology (ACR) criteria. The algorithms compared to a reference standard of physician diagnosis had higher sensitivity and specificity than those compared to self-reported diagnosis or ACR criteria. Restrictive algorithms are more specific for OA diagnosis and can be used to identify cases when false positives have higher costs, e.g., interventional studies. Less restrictive algorithms are more sensitive and suited for studies that attempt to identify all cases, e.g., screening programs.
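
    The prevalence-dependent quantities follow directly from Bayes' theorem, which is why the review can report PPV at several assumed prevalences. A sketch; the sensitivity and specificity values used here (0.80 and 0.95) are illustrative, not values from the review:

```python
def lr_positive(sensitivity, specificity):
    """Positive likelihood ratio: P(test+ | disease) / P(test+ | no disease)."""
    return sensitivity / (1.0 - specificity)

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# PPV of the same hypothetical algorithm at the review's three assumed
# OA prevalences -- PPV rises sharply with prevalence:
for p in (0.10, 0.25, 0.50):
    print(p, round(ppv(0.80, 0.95, p), 2))
```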

  6. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    As large collections of ink and wash paintings (IWPs) are digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate the local region and detect painting strokes to enable histogram-based feature extraction and capture of important cues to reflect the styles of different artists. Such features are then applied to drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision over the multiple neural network classification results, in which entropy serves as a pointer for combining the global and local features. Experimental evaluations show that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.
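
    One plausible reading of the entropy-balanced fusion step (a sketch, not the authors' exact formula): weight each network's class-probability output by its confidence, with lower output entropy earning a larger weight, then pick the class with the highest fused score:

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_balanced_fusion(outputs):
    """Fuse several classifiers' probability vectors: a network whose output
    distribution has lower entropy (is more confident) gets more weight.
    Returns the index of the winning class."""
    n_classes = len(outputs[0])
    max_h = math.log(n_classes)                       # entropy of uniform output
    weights = [max_h - entropy(p) + 1e-9 for p in outputs]
    total = sum(weights)
    fused = [sum(w * p[i] for w, p in zip(weights, outputs)) / total
             for i in range(n_classes)]
    return fused.index(max(fused))
```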

  7. Lip reading using neural networks

    NASA Astrophysics Data System (ADS)

    Kalbande, Dhananjay; Mishra, Akassh A.; Patil, Sanjivani; Nirgudkar, Sneha; Patel, Prashant

    2011-10-01

    Computerized lip reading, or speech reading, is concerned with the difficult task of converting a video signal of a speaking person to written text. It has several applications, such as teaching the deaf and mute to speak and communicate effectively with other people, its crime-fighting potential, and its invariance to the acoustic environment. We convert the video of the subject speaking vowels into images, which are then selected manually for processing. Several factors, such as fast speech, bad pronunciation, poor illumination, movement of the face, and moustaches and beards, make lip reading difficult. Contour tracking methods and template matching are used for the extraction of lips from the face. The K-Nearest-Neighbor (KNN) algorithm is then used to classify the 'speaking' images and the 'silent' images. The sequence of images is then transformed into segments of utterances. A feature vector is calculated on each frame for all the segments and is stored in the database with a properly labeled class. Character recognition is performed using a modified KNN algorithm which assigns more weight to nearer neighbors. This paper reports the recognition of vowels using KNN algorithms.
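
    The "modified KNN which assigns more weight to nearer neighbors" can be sketched as distance-weighted voting, a common KNN variant; the 1/d weighting below is an assumption, as the paper's exact weighting function is not given in the abstract:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Distance-weighted k-NN: nearer neighbors cast larger votes (1/d).
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(
        (math.dist(x, query), label) for x, label in train
    )[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + 1e-9)   # epsilon avoids division by zero
    return max(votes, key=votes.get)
```

With k=3, two nearby 'speaking' frames outvote a single distant 'silent' frame even though all three are counted.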

  8. Breast Cancer Diagnostic System Final Report CRADA No. TC02098.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubenchik, A. M.; DaSilva, L. B.

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and BioTelligent, Inc., together with a Russian institution (BioFil, Ltd.), to develop a new system (diagnostic device, operating procedures, algorithms, and software) to accurately distinguish between benign and malignant breast tissue (Breast Cancer Diagnostic System, BCDS).

  9. [EBOLA HEMORRHAGIC FEVER: DIAGNOSTICS, ETIOTROPIC AND PATHOGENETIC THERAPY, PREVENTION].

    PubMed

    Zhdanov, K V; Zakharenko, S M; Kovalenko, A N; Semenov, A V; Fisun, A Ya

    2015-01-01

    The data on diagnostics, etiotropic and pathogenetic therapy, prevention of Ebola hemorrhagic fever are presented including diagnostic algorithms for different clinical situations. Fundamentals of pathogenetic therapy are described. Various groups of medications used for antiviral therapy of conditions caused by Ebola virus are characterized. Experimental drugs at different stages of clinical studies are considered along with candidate vaccines being developed for the prevention of the disease.

  10. Toward DSM-V: An Item Response Theory Analysis of the Diagnostic Process for DSM-IV Alcohol Abuse and Dependence in Adolescents

    ERIC Educational Resources Information Center

    Gelhorn, Heather; Hartman, Christie; Sakai, Joseph; Stallings, Michael; Young, Susan; Rhee, So Hyun; Corley, Robin; Hewitt, John; Hopfer, Christian; Crowley, Thomas D.

    2008-01-01

    Clinical interviews of approximately 5,587 adolescents revealed that DSM-IV diagnostic categories differ in terms of the severity of alcohol use disorders (AUDs). However, substantial inconsistency and overlap were found in the severity of AUDs across categories. The need for an alternative diagnostic algorithm which considers all…

  11. New auto-segment method of cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Wang, Weijiang; Shen, Tingzhi; Dang, Hua

    2007-12-01

    A novel method for automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented, which uses an expert system that models human knowledge about the CH segmentation problem. The algorithm follows a series of special steps and extracts easily overlooked CH features identified from statistical analysis of a large number of real CH images, such as region area, region CT number, region smoothness, and statistical relationships among CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree that models the human knowledge about the CH segmentation problem is built, ensuring the rationality and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; the good accuracy and fast speed make wide practical application possible.

  12. Validation of a computerized technique for automatically tracking and measuring the inferior vena cava in ultrasound imagery.

    PubMed

    Bellows, Spencer; Smith, Jordan; Mcguire, Peter; Smith, Andrew

    2014-01-01

    Accurate resuscitation of the critically-ill patient using intravenous fluids and blood products is a challenging, time-sensitive task. Ultrasound of the inferior vena cava (IVC) is a non-invasive technique currently used to guide fluid administration, though multiple factors such as variable image quality, time, and operator skill challenge mainstream acceptance. This study represents a first attempt to develop and validate an algorithm capable of automatically tracking and measuring the IVC compared to human operators across a diverse range of image quality. Minimal tracking failures and high levels of agreement between manual and algorithm measurements were demonstrated on good quality videos. Addressing problems such as gaps in the vessel wall and intra-lumen speckle should result in improved performance in average and poor quality videos. Semi-automated measurement of the IVC for the purposes of non-invasive estimation of circulating blood volume poses challenges but is feasible.

  13. An application of the CORELAP algorithm to improve the space utilization of the classroom

    NASA Astrophysics Data System (ADS)

    Sembiring, A. C.; Budiman, I.; Mardhatillah, A.; Tarigan, U. P.; Jawira, A.

    2018-04-01

    The high demand for rooms due to the increasing number of students requires additional rooms. The limited number of rooms, the price of land, and the cost of building expensive infrastructure require effective and efficient use of space. The facility layout redesign is done using the Computerized Relationship Layout Planning (CORELAP) algorithm based on total closeness rating (TCR), calculating the squared distance between departments from the coordinates of each department's central point. The distance obtained is multiplied by the material flow from the From-To chart matrix. The analysis is done by comparing the total distance between the initial layout and the proposed layout and then examining the activities performed in each room. The CORELAP algorithm processing yields an increase in room usage efficiency of 14.98% over the previous arrangement.
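
    CORELAP's placement order is driven by the total closeness rating. A sketch using one common closeness-value scale (A=6 down to X=1; published variants differ) and a hypothetical three-room relationship chart; CORELAP places the highest-TCR department first and grows the layout around it in decreasing TCR order:

```python
# One common CORELAP closeness scale (A=absolutely necessary ... X=undesirable).
CLOSENESS = {"A": 6, "E": 5, "I": 4, "O": 3, "U": 2, "X": 1}

def total_closeness_ratings(rel_chart):
    """TCR per department: the sum of closeness values to all other
    departments in the relationship chart."""
    return {
        dept: sum(CLOSENESS[r] for r in ratings.values())
        for dept, ratings in rel_chart.items()
    }

# Hypothetical symmetric relationship chart for three rooms:
chart = {
    "Classroom": {"Lab": "A", "Office": "I"},
    "Lab":       {"Classroom": "A", "Office": "U"},
    "Office":    {"Classroom": "I", "Lab": "U"},
}
tcr = total_closeness_ratings(chart)
print(max(tcr, key=tcr.get))  # the department placed first in the layout
```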

  14. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts. [(computerized simulation)]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal control research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results between high and low order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.

  15. Diagnosis of Posttraumatic Stress Disorder in Preschool Children

    ERIC Educational Resources Information Center

    De Young, Alexandra C.; Kenardy, Justin A.; Cobham, Vanessa E.

    2011-01-01

    This study investigated the existing diagnostic algorithms for posttraumatic stress disorder (PTSD) to determine the most developmentally sensitive and valid approach for diagnosing this disorder in preschoolers. Participants were 130 parents of unintentionally burned children (1-6 years). Diagnostic interviews were conducted with parents to…

  16. Integrating Oil Debris and Vibration Measurements for Intelligent Machine Health Monitoring. Degree awarded by Toledo Univ., May 2002

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2003-01-01

    A diagnostic tool for detecting damage to gears was developed. Two different measurement technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual measurement technologies. This diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig. An oil debris sensor and the two vibration algorithms were adapted as the diagnostic tools. An inductance type oil debris sensor was selected for the oil analysis measurement technology. Gear damage data for this type of sensor was limited to data collected in the NASA Glenn test rigs. For this reason, this analysis included development of a parameter for detecting gear pitting damage using this type of sensor. The vibration data was used to calculate two previously available gear vibration diagnostic algorithms. The two vibration algorithms were selected based on their maturity and published success in detecting damage to gears. Oil debris and vibration features were then developed using fuzzy logic analysis techniques and input into a multi-sensor data fusion process. Results show combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spur gears. As a result of this research, this new diagnostic tool has significantly improved detection of gear damage in the NASA Glenn Spur Gear Fatigue Rigs. This research also resulted in several other findings that will improve the development of future health monitoring systems. Oil debris analysis was found to be more reliable than vibration analysis for detecting pitting fatigue failure of gears and is capable of indicating damage progression. Also, some vibration algorithms are as sensitive to operational effects as they are to damage. Another finding was that clear threshold limits must be established for diagnostic tools. Based on additional experimental data obtained from the NASA Glenn Spiral Bevel Gear Fatigue Rig, the methodology developed in this study can be successfully implemented on other geared systems.

  17. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
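
    The greedy heuristic studied in the paper follows the classic SET COVER pattern: repeatedly pick the subset with the best cost per newly-covered element. A sketch of that skeleton (WEIGHTED DIAGNOSTIC COVER itself adds diagnostic-power weights not modeled here, and the paper's "directional greedy" variant is also omitted):

```python
def greedy_cover(universe, subsets, costs):
    """Greedy heuristic for weighted set cover.

    universe -- set of elements to cover
    subsets  -- dict name -> set of elements covered by that choice
    costs    -- dict name -> cost of that choice
    Assumes the universe is coverable by the given subsets.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset with the lowest cost per newly-covered element.
        best = min(
            (s for s in subsets if subsets[s] & uncovered),
            key=lambda s: costs[s] / len(subsets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

A local-search pass like the one the authors describe would then try swaps on `chosen` until no swap improves the total cost.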

  18. Recurrent Pneumonia in Children: A Reasoned Diagnostic Approach and a Single Centre Experience.

    PubMed

    Montella, Silvia; Corcione, Adele; Santamaria, Francesca

    2017-01-29

    Recurrent pneumonia (RP), i.e., at least two episodes of pneumonia in one year or three episodes ever with intercritical radiographic clearing of densities, occurs in 7.7%-9% of children with community-acquired pneumonia. In RP, the challenge is to discriminate between children with self-limiting or minor problems, that do not require a diagnostic work-up, and those with an underlying disease. The aim of the current review is to discuss a reasoned diagnostic approach to RP in childhood. Particular emphasis has been placed on which children should undergo a diagnostic work-up and which tests should be performed. A pediatric case series is also presented, in order to document a single centre experience of RP. A management algorithm for the approach to children with RP, based on the evidence from a literature review, is proposed. Like all algorithms, it is not meant to replace clinical judgment, but it should drive physicians to adopt a systematic approach to pediatric RP and provide a useful guide to the clinician.

  19. Integrated Multi-Point Space Plasma Measurements With Four Ionospheric Satellites

    NASA Astrophysics Data System (ADS)

    Siefring, C. L.; Bernhardt, P. A.; Selcher, C.; Wilkens, M. R.; McHarg, M. G.; Krause, L.; Chun, F.; Enloe, L.; Panholzer, R.; Sakoda, D.; Phelps, R.; Roussel-Dupre, D.; Colestock, P.; Close, S.

    2006-12-01

    The STP-1 launch scheduled for late 2006 will place four satellites with ionospheric plasma diagnostics into the same nearly circular orbit with an altitude of 560 km and inclination of 35.4°. The satellites will allow for unique multipoint measurements of ionospheric scintillations and their causes. Both the radio and in-situ diagnostics will provide coverage of low- and mid-latitudes. The four satellites (STPSat1, NPSat1, FalconSat3, and CFE) will follow the same ground-track, but because of drag and mass differences their relative velocities will differ and vary during the lifetime of the satellites. The four satellites will start close together, separate over a few months, and come back together with near conjunctions at six and eight months. Two-satellite conjunctions between NPSat1 and STPSat1 will occur most often, approximately one month apart at the end of the mission. STPSat1 is equipped with CITRIS (sCintillation and TEC Receiver In Space), which will measure scintillations in the VHF, UHF and L-band along with measuring Total Electron Content (TEC) along the propagation path. NPSat1 will carry a three-frequency CERTO (Coherent Electromagnetic Radio TOmography) Beacon which broadcasts phase-coherent signals at 150.012 MHz, 400.032 MHz, and 1066.752 MHz. CITRIS will be able to measure TEC and scintillations along the orbital path (propagation path from NPSat1 to STPSat1) as well as between CITRIS and the ground. NPSat1 carries electron and ion saturation Langmuir Probes, while FalconSat3 carries the FLAPS (FLAt Plasma Spectrometer) and PLANE (Plasma Local Anomalous Noise Environment). The in-situ diagnostics complement the CITRIS/CERTO radio techniques in many ways. The CIBOLA Flight Experiment (CFE) contains a wide band receiver covering 100 to 500 MHz. The CFE data can be processed to show distortion of wide-band modulations by ionospheric irregularities. CFE and CITRIS can record ground transmissions from the French DORIS beacons which radiate at 401.25 and 2036.25 MHz. The multi-point techniques provide redundant measurements of radio scintillations and other ionospheric distortions. The causative density irregularities will be imaged using computerized ionospheric tomographic and inverse-diffraction algorithms. The STP-1 sensors in low-earth-orbit will relate electron and ion density fluctuations and radio scintillation effects over a wide range of frequencies. This research was supported at NRL by ONR.

  20. Cost-effectiveness analysis of microscopic observation drug susceptibility test versus Xpert MTB/Rif test for diagnosis of pulmonary tuberculosis in HIV patients in Uganda.

    PubMed

    Walusimbi, Simon; Kwesiga, Brendan; Rodrigues, Rashmi; Haile, Melles; de Costa, Ayesha; Bogg, Lennart; Katamba, Achilles

    2016-10-10

    Microscopic Observation Drug Susceptibility (MODS) and Xpert MTB/Rif (Xpert) are highly sensitive tests for diagnosis of pulmonary tuberculosis (PTB). This study evaluated the cost effectiveness of utilizing MODS versus Xpert for diagnosis of active pulmonary TB in HIV infected patients in Uganda. A decision analysis model comparing MODS versus Xpert for TB diagnosis was used. Costs were estimated by measuring and valuing relevant resources required to perform the MODS and Xpert tests. Diagnostic accuracy data of the tests were obtained from systematic reviews involving HIV infected patients. We calculated base values for unit costs and varied several assumptions to obtain the range estimates. Cost effectiveness was expressed as cost per TB patient diagnosed for each of the two diagnostic strategies. Base case analysis was performed using the base estimates for unit cost and diagnostic accuracy of the tests. Sensitivity analysis was performed using a range of value estimates for resources, prevalence, number of tests and diagnostic accuracy. The unit cost of MODS was US$ 6.53 versus US$ 12.41 of Xpert. Consumables accounted for 59 % (US$ 3.84 of 6.53) of the unit cost for MODS and 84 % (US$ 10.37 of 12.41) of the unit cost for Xpert. The cost-effectiveness ratio of the algorithm using MODS was US$ 34 per TB patient diagnosed compared to US$ 71 of the algorithm using Xpert. The algorithm using MODS was more cost-effective compared to the algorithm using Xpert for a wide range of different values of accuracy, cost and TB prevalence. The cost (threshold value) at which the algorithm using Xpert becomes optimal over the algorithm using MODS was US$ 5.92. MODS was more cost-effective than Xpert for the diagnosis of PTB among HIV patients in our setting. Efforts to scale-up MODS therefore need to be explored. However, since other non-economic factors may still favour the use of Xpert, the current cost of the Xpert cartridge still needs to be reduced further by more than half, in order to make it economically competitive with MODS.
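
    The headline ratio is the total testing cost divided by the number of TB patients the algorithm detects. A sketch; the cohort size, prevalence, and sensitivity below are hypothetical inputs chosen to echo the reported MODS ratio, not values taken from the paper's decision model:

```python
def cost_per_patient_diagnosed(unit_cost, n_tested, prevalence, sensitivity):
    """Cost-effectiveness ratio of a diagnostic algorithm:
    total testing cost / number of true TB cases detected."""
    total_cost = unit_cost * n_tested
    detected = n_tested * prevalence * sensitivity
    return total_cost / detected

# Hypothetical cohort: 1000 HIV patients tested, 20% TB prevalence,
# 96% sensitivity, at the MODS unit cost of US$ 6.53.
print(round(cost_per_patient_diagnosed(6.53, 1000, 0.20, 0.96), 2))
```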

  1. Development of CPR security using impact analysis.

    PubMed Central

    Salazar-Kish, J.; Tate, D.; Hall, P. D.; Homa, K.

    2000-01-01

    The HIPAA regulations will require that institutions ensure the prevention of unauthorized access to electronically stored or transmitted patient records. This paper discusses a process for analyzing the impact of security mechanisms on users of computerized patient records through "behind the scenes" electronic access audits. In this way, those impacts can be assessed and refined to an acceptable standard prior to implementation. Through an iterative process of design and evaluation, we develop security algorithms that will protect electronic health information from improper access, alteration or loss, while minimally affecting the flow of work of the user population as a whole. PMID:11079984

  2. Smart Computer-Assisted Markets

    NASA Astrophysics Data System (ADS)

    McCabe, Kevin A.; Rassenti, Stephen J.; Smith, Vernon L.

    1991-10-01

    The deregulation movement has motivated the experimental study of auction markets designed for interdependent network industries such as natural gas pipelines or electric power systems. Decentralized agents submit bids to buy commodity and offers to sell transportation and commodity to a computerized dispatch center. Computer algorithms determine prices and allocations that maximize the gains from exchange in the system relative to the submitted bids and offers. The problem is important, because traditionally the scale and coordination economies in such industries were thought to require regulation. Laboratory experiments are used to study feasibility, limitations, incentives, and performance of proposed market designs for deregulation, providing motivation for new theory.
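A toy version of such a surplus-maximizing clearing rule can be written as a plain uniform-price double auction; this is only a structural sketch, not the dispatch-center algorithm itself, which must additionally respect network and transportation constraints:

```python
def clear_market(bids, offers):
    """Match highest bids with lowest offers while each pairing is
    profitable; return the trade count and a uniform price that
    splits the marginal bid-offer pair."""
    bids = sorted(bids, reverse=True)
    offers = sorted(offers)
    trades = 0
    while trades < min(len(bids), len(offers)) and bids[trades] >= offers[trades]:
        trades += 1
    price = (bids[trades - 1] + offers[trades - 1]) / 2 if trades else None
    return trades, price
```

For example, `clear_market([10, 8, 6], [5, 7, 9])` clears two trades at a uniform price of 7.5, and the excluded bid of 6 cannot profitably trade against the remaining offer of 9.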

  3. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computed tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without any pre-processing requirement. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and improve user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
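As a rough illustration of the landmark idea only (hypothetical, not the paper's actual method): treat pixels deeper than a safety threshold as the safe region, and steer toward the deepest such pixel as the target:

```python
import numpy as np

def next_view_target(depth_image, min_safe_depth):
    """Return the (row, col) of the deepest pixel inside the safe region,
    i.e. among pixels farther from the camera than min_safe_depth.
    Returns None when no pixel is safe (caller should stop or back up)."""
    safe = depth_image > min_safe_depth
    if not safe.any():
        return None
    masked = np.where(safe, depth_image, -np.inf)  # exclude unsafe pixels
    return np.unravel_index(np.argmax(masked), depth_image.shape)
```

In a real navigator this per-frame pick would be smoothed over time to avoid jerky camera motion; the sketch shows only the landmark extraction step.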

  4. The F-SCAN system of foot pressure analysis.

    PubMed

    Young, C R

    1993-07-01

    The age of computerized gait analysis is here. There are several systems available to meet the needs of the podiatric practitioner. This author believes that the F-SCAN technology system makes a significant contribution to the practice of podiatric medicine. The system is user friendly, accurate, reproducible, and affordable. Its graphic display capabilities are colorfully attractive and easily understood. The primary focus of the F-SCAN system is peak pressure distribution over time. Vertical plantar pressure dispersion across the plantar surface of the foot is recorded, processed, and graphically displayed in terms of sequential gait changes. The system further allows for the manipulation of the accumulated data to present it in a more comprehensive manner. Future updates of the F-SCAN software are already close at hand and are expected to further enhance the diagnostic capabilities of the system. The four primary areas of clinical application for F-SCAN have been identified and briefly discussed: the recognition of certain biomechanical abnormalities, monitoring preorthotic and postorthotic use, evaluation of the diabetic or neuropathic foot, and presurgical and postsurgical functional examinations. The F-SCAN system largely helps to remove some of the unavoidable guesswork from essential diagnostic and therapeutic procedures. As we increase our understanding of the pathomechanics of these clinical problems, so too will we improve our management of the associated complications. Years ago, when computerized gait analysis was being introduced to the podiatric profession, a frequently asked question was: What does it tell me that I don't already know or can't see by watching the patient walk? (Abstract truncated at 250 words.)

  5. Project Among African-Americans to Explore Risks for Schizophrenia (PAARTNERS): Evidence for Impairment and Heritability of Neurocognitive Functioning in Families of Schizophrenia Patients

    PubMed Central

    Calkins, Monica E.; Tepper, Ping; Gur, Ruben C.; Ragland, J. Daniel; Klei, Lambertus; Wiener, Howard W.; Richard, Jan; Savage, Robert M.; Allen, Trina B.; O'Jile, Judith; Devlin, Bernie; Kwentus, Joseph; Aliyu, Muktar H.; Bradford, L. DiAnne; Edwards, Neil; Lyons, Paul D.; Nimgaonkar, Vishwajit L.; Santos, Alberto B.; Go, Rodney C.P.; Gur, Raquel E.

    2015-01-01

    Objective Neurocognitive impairments in schizophrenia are well replicated and widely regarded as candidate endophenotypes that may facilitate understanding of schizophrenia genetics and pathophysiology. The Project Among African-Americans to Explore Risks for Schizophrenia (PAARTNERS) aims to identify genes underlying liability to schizophrenia. The unprecedented size of its study group (N=1,872), made possible through use of a computerized neurocognitive battery, can help further investigation of the genetics of neurocognition. The current analysis evaluated two characteristics not fully addressed in prior research: 1) heritability of neurocognition in African American families and 2) relationship between neurocognition and psychopathology in families of African American probands with schizophrenia or schizoaffective disorder. Method Across eight data collection sites, patients with schizophrenia or schizoaffective disorder (N=610), their biological relatives (N=928), and community comparison subjects (N=334) completed a standardized diagnostic evaluation and the computerized neurocognitive battery. Performance accuracy and response time (speed) were measured separately for 10 neurocognitive domains. Results The patients with schizophrenia or schizoaffective disorder exhibited less accuracy and speed in most neurocognitive domains than their relatives both with and without other psychiatric disorders, who in turn were more impaired than comparison subjects in most domains. Estimated trait heritability after inclusion of the mean effect of diagnostic status, age, and sex revealed significant heritabilities for most neurocognitive domains, with the highest for accuracy of abstraction/flexibility, verbal memory, face memory, spatial processing, and emotion processing and for speed of attention. Conclusions Neurocognitive functions in African American families are heritable and associated with schizophrenia. They show potential for gene-mapping studies. PMID:20194479

  6. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
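One common first-order way to adjust an observed incidence for imperfect case finding is to scale it by PPV/sensitivity: multiplying by PPV keeps only true positives, and dividing by sensitivity inflates for missed cases. With the abstract's figures this lands near, though not exactly at, the reported corrected estimate of 5.6/100,000, which the study derived by more complete methods:

```python
def corrected_incidence(observed, ppv, sensitivity):
    """First-order correction of an observed incidence rate:
    keep only true positives (x ppv), then account for cases
    the algorithm missed (/ sensitivity)."""
    return observed * ppv / sensitivity

# Point estimates from the abstract (per 100,000 person-years).
est = corrected_incidence(observed=6.8, ppv=0.422, sensitivity=0.556)
```

This simple scaling ignores the uncertainty in each input, which is why the study's interval estimate (2.6-10.3) is so wide relative to the point estimate.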

  7. Conversion-Integration of MSFC Nonlinear Signal Diagnostic Analysis Algorithms for Realtime Execution of MSFC's MPP Prototype System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1996-01-01

    NASA's advanced propulsion system, the Space Shuttle Main Engine/Alternate Turbopump Development (SSME/ATD), has been undergoing extensive flight certification and developmental testing, which involves large numbers of health monitoring measurements. To enhance engine safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess the engine's dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce the risk of catastrophic system failures, expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. During the development of the SSME, ASRI participated in the research and development of several advanced nonlinear signal diagnostic methods for health monitoring and failure prediction in turbomachinery components. However, due to the intensive computational requirements associated with such advanced analysis tasks, current SSME dynamic data analysis and diagnostic evaluation is performed off-line following flight or ground test, with a typical diagnostic turnaround time of one to two days. The objective of MSFC's MPP Prototype System is to eliminate such 'diagnostic lag time' by achieving signal processing and analysis in real time. Such an on-line diagnostic system can provide sufficient lead time to initiate corrective action and also enable efficient scheduling of inspection, maintenance and repair activities. The major objective of this project was to convert and implement a number of advanced nonlinear diagnostic DSP algorithms in a format consistent with that required for integration into the Vanderbilt Multigraph Architecture (MGA) Model Based Programming environment. This effort will allow the real-time execution of these algorithms using the MSFC MPP Prototype System.
ASRI has completed the software conversion and integration of a sequence of nonlinear signal analysis techniques specified in the SOW for real-time execution on MSFC's MPP Prototype. This report documents and summarizes the results of the contract tasks and provides the complete computer source code, including all FORTRAN/C utilities and all other supporting software libraries required for operation.

  8. Did death certificates and a death review process agree on lung cancer cause of death in the National Lung Screening Trial?

    PubMed

    Marcus, Pamela M; Doria-Rose, Vincent Paul; Gareen, Ilana F; Brewer, Brenda; Clingan, Kathy; Keating, Kristen; Rosenbaum, Jennifer; Rozjabek, Heather M; Rathmell, Joshua; Sicks, JoRean; Miller, Anthony B

    2016-08-01

    Randomized controlled trials frequently use death review committees to assign a cause of death rather than relying on cause of death information from death certificates. The National Lung Screening Trial, a randomized controlled trial of lung cancer screening with low-dose computed tomography versus chest X-ray for heavy and/or long-term smokers ages 55-74 years at enrollment, used a committee blinded to arm assignment for a subset of deaths to determine whether cause of death was due to lung cancer. Deaths were selected for review using a pre-determined computerized algorithm. The algorithm, which considered cancers diagnosed during the trial, causes and significant conditions listed on the death certificate, and the underlying cause of death derived from death certificate information by trained nosologists, selected deaths that were most likely to represent a death due to lung cancer (either directly or indirectly) and deaths that might have been erroneously assigned lung cancer as the cause of death. The algorithm also selected deaths that might be due to adverse events of diagnostic evaluation for lung cancer. Using the review cause of death as the gold standard and lung cancer cause of death as the outcome of interest (dichotomized as lung cancer versus not lung cancer), we calculated performance measures of the death certificate cause of death. We also recalculated the trial primary endpoint using the death certificate cause of death. In all, 1642 deaths were reviewed and assigned a cause of death (42% of the 3877 National Lung Screening Trial deaths). Sensitivity of death certificate cause of death was 91%; specificity, 97%; positive predictive value, 98%; and negative predictive value, 89%. About 40% of the deaths reclassified to lung cancer cause of death had a death certificate cause of death of a neoplasm other than lung. 
Using the death certificate cause of death, the lung cancer mortality reduction was 18% (95% confidence interval: 4.2-25.0), as compared with the published finding of 20% (95% confidence interval: 6.7-26.7). Death review may not be necessary for primary-outcome analyses in lung cancer screening trials. If deemed necessary, researchers should strive to streamline the death review process as much as possible. © The Author(s) 2016.
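The four performance measures quoted above come directly from a 2x2 table of death-certificate versus review-committee cause of death; the counts in the example below are hypothetical, chosen only to show the arithmetic:

```python
def dx_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion
    table, taking the review cause of death as the gold standard.
    tp/fp/fn/tn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # reviewed lung-cancer deaths the certificate caught
        "specificity": tn / (tn + fp),  # non-lung-cancer deaths the certificate agreed on
        "ppv": tp / (tp + fp),          # certificate lung-cancer calls that were confirmed
        "npv": tn / (tn + fn),          # certificate non-lung-cancer calls that were confirmed
    }
```

Given the real reviewed counts, the same function would reproduce the reported 91%/97%/98%/89% figures.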

  9. Concept-match medical data scrubbing. How pathology text can be used in research.

    PubMed

    Berman, Jules J

    2003-06-01

    In the normal course of activity, pathologists create and archive immense data sets of scientifically valuable information. Researchers need pathology-based data sets, annotated with clinical information and linked to archived tissues, to discover and validate new diagnostic tests and therapies. Pathology records can be used for research purposes (without obtaining informed patient consent for each use of each record), provided the data are rendered harmless. Large data sets can be made harmless through 3 computational steps: (1) deidentification, the removal or modification of data fields that can be used to identify a patient (name, social security number, etc); (2) rendering the data ambiguous, ensuring that every data record in a public data set has a nonunique set of characterizing data; and (3) data scrubbing, the removal or transformation of words in free text that can be used to identify persons or that contain information that is incriminating or otherwise private. This article addresses the problem of data scrubbing. To design and implement a general algorithm that scrubs pathology free text, removing all identifying or private information. The Concept-Match algorithm steps through confidential text. When a medical term matching a standard nomenclature term is encountered, the term is replaced by a nomenclature code and a synonym for the original term. When a high-frequency "stop" word, such as a, an, the, or for, is encountered, it is left in place. When any other word is encountered, it is blocked and replaced by asterisks. This produces a scrubbed text. An open-source implementation of the algorithm is freely available. The Concept-Match scrub method transformed pathology free text into scrubbed output that preserved the sense of the original sentences, while it blocked terms that did not match terms found in the Unified Medical Language System (UMLS). 
The scrubbed product is safe, in the restricted sense that the output retains only standard medical terms. The software implementation scrubbed more than half a million surgical pathology report phrases in less than an hour. Computerized scrubbing can render the textual portion of a pathology report harmless for research purposes. Scrubbing and deidentification methods allow pathologists to create and use large pathology databases to conduct medical research.
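The three-way decision the abstract describes (keep stop words, replace nomenclature matches with a code plus synonym, block everything else) can be sketched in a few lines. The stop-word list and mini-nomenclature below are invented stand-ins for the UMLS-based mapping the real implementation uses:

```python
STOP_WORDS = {"a", "an", "the", "for", "of", "and", "with", "in"}

# Hypothetical mini-nomenclature: term -> (code, replacement synonym).
NOMENCLATURE = {"adenocarcinoma": ("C0001418", "glandular malignancy")}

def concept_match_scrub(text):
    """Keep high-frequency stop words in place, swap nomenclature terms
    for a code plus synonym, and block every other word with asterisks."""
    scrubbed = []
    for word in text.lower().split():
        if word in STOP_WORDS:
            scrubbed.append(word)
        elif word in NOMENCLATURE:
            code, synonym = NOMENCLATURE[word]
            scrubbed.append(f"{code} {synonym}")
        else:
            scrubbed.append("*" * len(word))
    return " ".join(scrubbed)
```

Because only nomenclature terms and stop words survive, anything not in the controlled vocabulary, including names and other identifiers, is blocked by construction.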

  10. Different CT perfusion algorithms in the detection of delayed cerebral ischemia after aneurysmal subarachnoid hemorrhage.

    PubMed

    Cremers, Charlotte H P; Dankbaar, Jan Willem; Vergouwen, Mervyn D I; Vos, Pieter C; Bennink, Edwin; Rinkel, Gabriel J E; Velthuis, Birgitta K; van der Schaaf, Irene C

    2015-05-01

    Tracer delay-sensitive perfusion algorithms in CT perfusion (CTP) result in an overestimation of the extent of ischemia in thromboembolic stroke. In diagnosing delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage (aSAH), delayed arrival of contrast due to vasospasm may also overestimate the extent of ischemia. We investigated the diagnostic accuracy of tracer delay-sensitive and tracer delay-insensitive algorithms for detecting DCI. From a prospectively collected series of aSAH patients admitted between 2007 and 2011, we included patients with any clinical deterioration other than rebleeding within 21 days after SAH who underwent NCCT/CTP/CTA imaging. Causes of clinical deterioration were categorized into DCI and no DCI. CTP maps were calculated with tracer delay-sensitive and tracer delay-insensitive algorithms and were visually assessed for the presence of perfusion deficits by two independent observers with different levels of experience. The diagnostic value of both algorithms was calculated for both observers. Seventy-one patients were included. For the experienced observer, the positive predictive values (PPVs) were 0.67 for the delay-sensitive and 0.66 for the delay-insensitive algorithm, and the negative predictive values (NPVs) were 0.73 and 0.74. For the less experienced observer, PPVs were 0.60 for both algorithms, and NPVs were 0.66 for the delay-sensitive and 0.63 for the delay-insensitive algorithm. Test characteristics are comparable for tracer delay-sensitive and tracer delay-insensitive algorithms for the visual assessment of CTP in diagnosing DCI. This indicates that both algorithms can be used for this purpose.

  11. Effects of Child Characteristics on the Autism Diagnostic Interview-Revised: Implications for Use of Scores as a Measure of ASD Severity

    ERIC Educational Resources Information Center

    Hus, Vanessa; Lord, Catherine

    2013-01-01

    The Autism Diagnostic Interview-Revised (ADI-R) is commonly used to inform diagnoses of autism spectrum disorders (ASD). Considering the time dedicated to using the ADI-R, it is of interest to expand the ways in which information obtained from this interview is used. The current study examines how algorithm totals reflecting past (ADI-Diagnostic)…

  12. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    PubMed

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
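Structurally, a lead selection algorithm of this kind can be framed as a search over candidate predictor subsets, scoring each by how well it reconstructs the full lead set. The sketch below uses generic `reconstruct` and `similarity` callables; the paper's actual score is wavelet-domain and diagnosis-aware, so this shows only the search skeleton:

```python
from itertools import combinations
import numpy as np

def select_predictor_leads(leads, reconstruct, similarity, n_predictors=3):
    """Exhaustively score every subset of predictor leads and keep the
    subset whose reconstruction of the full lead set scores highest.
    `leads` is (n_leads, n_samples); `reconstruct` maps a lead subset to
    a full-lead estimate; `similarity` scores estimate against original."""
    best_subset, best_score = None, float("-inf")
    for subset in combinations(range(leads.shape[0]), n_predictors):
        estimate = reconstruct(leads[list(subset)])
        score = similarity(leads, estimate)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```

Exhaustive search is feasible here because choosing 3 predictors from the candidate leads yields only a small number of subsets per patient.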

  13. Development of a novel diagnostic algorithm to predict NASH in HCV-positive patients.

    PubMed

    Gallotta, Andrea; Paneghetti, Laura; Mrázová, Viera; Bednárová, Adriana; Kružlicová, Dáša; Frecer, Vladimir; Miertus, Stanislav; Biasiolo, Alessandra; Martini, Andrea; Pontisso, Patrizia; Fassina, Giorgio

    2018-05-01

    Non-alcoholic steatohepatitis (NASH) is a severe disease characterised by liver inflammation and progressive hepatic fibrosis, which may progress to cirrhosis and hepatocellular carcinoma. Clinical evidence suggests that in hepatitis C virus patients, steatosis and NASH are associated with faster fibrosis progression and hepatocellular carcinoma. A safe and reliable non-invasive diagnostic method to detect NASH at its early stages is still needed to prevent progression of the disease. We prospectively enrolled 91 hepatitis C virus-positive patients with histologically proven chronic liver disease: 77 patients were included in our study; of these, 10 had NASH. For each patient, various clinical and serological variables were collected. Different algorithms combining squamous cell carcinoma antigen-immunoglobulin-M (SCCA-IgM) levels with other common clinical data were created to provide the probability of having NASH. Our analysis revealed a statistically significant correlation between the histological presence of NASH and SCCA-IgM, insulin, homeostasis model assessment, haemoglobin, high-density lipoprotein and ferritin levels, and smoking. Compared to the use of a single marker, algorithms that combined four, six or seven variables identified NASH with higher accuracy. The best diagnostic performance was obtained with the logistic regression combination, which included all seven variables correlated with NASH. The combination of SCCA-IgM with common clinical data shows promising diagnostic performance for the detection of NASH in hepatitis C virus patients.
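A "logistic regression combination" of this kind amounts to a weighted sum of the variables pushed through a sigmoid to yield a probability of NASH. The weights and intercept below are hypothetical placeholders, since the fitted coefficients are not given in the abstract:

```python
import math

def nash_probability(features, weights, intercept):
    """Logistic combination: P(NASH) = sigmoid(intercept + w . x),
    where x holds the patient's variable values (e.g. SCCA-IgM,
    insulin, HOMA, haemoglobin, HDL, ferritin, smoking status)."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the output would be thresholded (or read off against a calibration curve) to classify a patient as likely NASH or not.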

  14. Diagnosing Sexual Dysfunction in Men and Women: Sexual History Taking and the Role of Symptom Scales and Questionnaires.

    PubMed

    Hatzichristou, Dimitris; Kirana, Paraskevi-Sofia; Banner, Linda; Althof, Stanley E; Lonnee-Hoffmann, Risa A M; Dennerstein, Lorraine; Rosen, Raymond C

    2016-08-01

    A detailed sexual history is the cornerstone for all sexual problem assessments and sexual dysfunction diagnoses. Diagnostic evaluation is based on an in-depth sexual history, including sexual and gender identity and orientation, sexual activity and function, current level of sexual function, overall health and comorbidities, partner relationship and interpersonal factors, and the role of cultural and personal expectations and attitudes. To propose key steps in the diagnostic evaluation of sexual dysfunctions, with special focus on the use of symptom scales and questionnaires. Critical assessment of the current literature by the International Consultation on Sexual Medicine committee. A revised algorithm for the management of sexual dysfunctions, level of evidence, and recommendation for scales and questionnaires. The International Consultation on Sexual Medicine proposes an updated algorithm for diagnostic evaluation of sexual dysfunction in men and women, with specific recommendations for sexual history taking and diagnostic evaluation. Standardized scales, checklists, and validated questionnaires are additional adjuncts that should be used routinely in sexual problem evaluation. Scales developed for specific patient groups are included. Results of this evaluation are presented with recommendations for clinical and research uses. Defined principles, an algorithm and a range of scales may provide coherent and evidence based management for sexual dysfunctions. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  15. Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential

    PubMed Central

    Doi, Kunio

    2007-01-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. In this article, the motivation and philosophy for early development of CAD schemes are presented together with the current status and future potential of CAD in a PACS environment. With CAD, radiologists use the computer output as a “second opinion” and make the final decisions. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance by computers does not have to be comparable to or better than that by physicians, but needs to be complementary to that by physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral chest images has the potential to improve the overall performance in the detection of lung nodules when combined with another CAD scheme for PA chest images. Because vertebral fractures can be detected reliably by computer on lateral chest radiographs, radiologists’ accuracy in the detection of vertebral fractures would be improved by the use of CAD, and thus early diagnosis of osteoporosis would become possible. In MRA, a CAD system has been developed for assisting radiologists in the detection of intracranial aneurysms. On successive bone scan images, a CAD scheme for detection of interval changes has been developed by use of temporal subtraction images. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. 
For example, the package for chest CAD may include the computerized detection of lung nodules, interstitial opacities, cardiomegaly, vertebral fractures, and interval changes in chest radiographs as well as the computerized classification of benign and malignant nodules and the differential diagnosis of interstitial lung diseases. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with known pathology, which would be very similar to a new unknown case, from PACS when a reliable and useful method has been developed for quantifying the similarity of a pair of images for visual comparison by radiologists. PMID:17349778

  16. A preface on advances in diagnostics for infectious and parasitic diseases: detecting parasites of medical and veterinary importance.

    PubMed

    Stothard, J Russell; Adams, Emily

    2014-12-01

    There are many reasons why detection of parasites of medical and veterinary importance is vital and why novel diagnostic and surveillance tools are required. From a medical perspective alone, these originate from a desire for better clinical management and rational use of medications. Diagnosis can occur at the individual level, in near-patient settings when testing a clinical suspicion, or at the community level, perhaps in front of a computer screen, in the classification of endemic areas and the design of appropriate control interventions. Thus, diagnostics for parasitic diseases has a broad remit, as parasites are tied not only to their definitive hosts but, in some cases, also to their vectors/intermediate hosts. Application of current diagnostic tools and decision algorithms in sustaining control programmes, or in elimination settings, can be problematic and even ill-fitting. For example, in resource-limited settings, are current diagnostic tools sufficiently robust for operational use at scale, or are they confounded by on-the-ground realities? Are the diagnostic algorithms underlying public health interventions always understood and well received within the communities targeted for control? Within this Special Issue (SI), covering a variety of diseases and diagnostic settings, some answers are forthcoming. An important theme throughout the SI, however, is to acknowledge that cross-talk and continuous feedback between the development and application of diagnostic tests is crucial if they are to be used effectively and appropriately.

  17. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  18. Diagnosis and treatment of gastroesophageal reflux disease complicated by Barrett's esophagus.

    PubMed

    Stasyshyn, Andriy

    2017-08-31

    The aim of the study was to evaluate the effectiveness of a diagnostic and therapeutic algorithm for gastroesophageal reflux disease complicated by Barrett's esophagus in 46 patients. A diagnostic and therapeutic algorithm for complicated GERD was developed. To describe the changes in the esophagus with reflux esophagitis, the Los Angeles classification was used. Intestinal metaplasia of the epithelium in the lower third of the esophagus was assessed using videoendoscopy, chromoscopy, and biopsy. Quality of life was assessed with the Gastro-Intestinal Quality of Life Index. The methods used were modeling, clinical, analytical, comparative, standardized, and questionnaire-based. Results and discussion: Among the complications of GERD, Barrett's esophagus was diagnosed in 9 (19.6%), peptic ulcer of the esophagus in 10 (21.7%), peptic stricture of the esophagus in 4 (8.7%), and esophageal-gastric bleeding in 23 (50.0%), including Mallory-Weiss syndrome in 18 and erosive ulcerous bleeding in 5 people. Hiatal hernia was diagnosed in 171 (87.7%) patients (sliding in 157 (91.8%), paraesophageal hernia in 2 (1.2%), and mixed hernia in 12 (7.0%) cases). One hundred ninety-five patients underwent laparoscopic surgery. Nissen fundoplication was conducted in 176 (90.2%) patients, Toupet fundoplication in 14 (7.2%), and Dor fundoplication in 5 (2.6%). It was established that the use of the diagnostic and treatment algorithm promoted systematization and objectification of changes in complicated GERD, contributed to early diagnosis, helped in choosing treatment, and improved quality of life. Argon coagulation and use of PPIs for 8-12 weeks before surgery led to the regeneration of the mucous membrane of the esophagus. The developed diagnostic and therapeutic algorithm facilitated systematization and objectification of changes in complicated GERD, contributed to early diagnosis, helped in choosing treatment, and improved quality of life.

  19. Validation of chronic obstructive pulmonary disease recording in the Clinical Practice Research Datalink (CPRD-GOLD)

    PubMed Central

    Quint, Jennifer K; Müllerova, Hana; DiSantostefano, Rachael L; Forbes, Harriet; Eaton, Susan; Hurst, John R; Davis, Kourtney; Smeeth, Liam

    2014-01-01

Objectives The optimal method of identifying people with chronic obstructive pulmonary disease (COPD) from electronic primary care records is not known. We assessed the accuracy of different approaches using the Clinical Practice Research Datalink (CPRD), a UK electronic health record database. Setting 951 participants registered with a CPRD practice in the UK between 1 January 2004 and 31 December 2012. Individuals were selected by ≥1 of 8 algorithms designed to identify people with COPD. General practitioners were sent a brief questionnaire, and additional evidence to support a COPD diagnosis was requested. All information received was reviewed independently by two respiratory physicians, whose opinion was taken as the gold standard. Primary outcome measure The primary measure of accuracy was the positive predictive value (PPV): the proportion of people identified by each algorithm for whom COPD was confirmed. Results 951 questionnaires were sent and 738 (78%) returned. After quality control, 696 (73.2%) patients were included in the final analysis. All four algorithms that included a specific COPD diagnostic code performed well. Using a diagnostic code alone, the PPV was 86.5% (77.5-92.3%); requiring a diagnosis plus spirometry plus specific medication gave a slightly higher PPV of 89.4% (80.7-94.5%) but reduced case numbers by 10%. Algorithms without specific diagnostic codes had low PPVs (range 12.2-44.4%). Conclusions Patients with COPD can be accurately identified from UK primary care records using specific diagnostic codes. Requiring spirometry or COPD medications only marginally improved accuracy. The high accuracy applies since the introduction of an incentivised disease register for COPD as part of the Quality and Outcomes Framework in 2004. PMID:25056980
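The PPV here is simply the share of algorithm-identified patients whose COPD diagnosis the physician panel confirmed. A minimal sketch of that calculation, assuming a Wilson score interval for the quoted confidence bounds (the abstract does not state the interval method or the raw counts, so the numbers below are purely illustrative):

```python
import math

def ppv_with_wilson_ci(confirmed, identified, z=1.96):
    """Positive predictive value with a 95% Wilson score interval."""
    p = confirmed / identified
    n = identified
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Hypothetical counts for illustration: 90 of 104 flagged patients confirmed.
ppv, lo, hi = ppv_with_wilson_ci(90, 104)
print(f"PPV = {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The same function applies to each of the 8 algorithms; only the confirmed/identified counts change.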

  20. A simple and robust classification tree for differentiation between benign and malignant lesions in MR-mammography.

    PubMed

    Baltzer, Pascal A T; Dietzel, Matthias; Kaiser, Werner A

    2013-08-01

In the face of the multiple diagnostic criteria available in MR-mammography (MRM), a practical algorithm for lesion classification is needed. Such an algorithm should be as simple as possible and include only important, independent lesion features for differentiating benign from malignant lesions. This investigation aimed to develop a simple classification tree for differential diagnosis in MRM. A total of 1,084 lesions in standardised MRM with subsequent histological verification (648 malignant, 436 benign) were investigated. Seventeen lesion criteria were assessed by 2 readers in consensus. Classification analysis was performed using the chi-squared automatic interaction detection (CHAID) method; the results include the probability of malignancy for every descriptor combination in the classification tree. A classification tree incorporating 5 lesion descriptors with a depth of 3 ramifications (1, root sign; 2, delayed enhancement pattern; 3, border, internal enhancement and oedema) was calculated. Of the 1,084 lesions, 262 of the 648 malignant (40.4%) and 106 of the 436 benign (24.3%) could be classified as malignant and benign, respectively, with an accuracy above 95%. Overall diagnostic accuracy was 88.4%. The classification algorithm reduced the number of categorical descriptors from 17 to 5 (29.4%), resulting in high classification accuracy. More than one third of all lesions could be classified with an accuracy above 95%. • A practical algorithm has been developed to classify lesions found in MR-mammography. • A simple decision tree consisting of five criteria reaches a high accuracy of 88.4%. • Unique to this approach, each classification is associated with a diagnostic certainty. • Diagnostic certainty of greater than 95% is achieved in 34% of all cases.

  1. Fourier analysis algorithm for the posterior corneal keratometric data: clinical usefulness in keratoconus.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios

    2017-07-01

To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of subclinical keratoconus (SKC) and keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group and 55 eyes the SKC group, while 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis of the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher-order irregularities, both for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves, and logistic regression was attempted to identify a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher-order irregularities among the groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher-order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was achieved with diagnostic models combining the asymmetry, regular astigmatism and higher-order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, sensitivity: 91.7%, specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC. © 2017 The Authors. Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
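The paper's implementation (Visual Basic for Excel) is not given, but the decomposition it describes is the standard corneal Fourier analysis: curvature samples around each ring are expanded into harmonics, where the zeroth term is the spherical component, the first harmonic the asymmetry, the second the regular astigmatism, and the residual harmonics the higher-order irregularity. A NumPy sketch on a synthetic ring (all values illustrative, not from the study):

```python
import numpy as np

def fourier_ring_indices(curvatures):
    """Decompose sagittal curvature samples around one ring (evenly spaced
    meridians over 360 degrees) into the standard Fourier indices."""
    n = len(curvatures)
    c = np.fft.rfft(curvatures) / n      # normalised harmonic coefficients
    spherical = c[0].real                # mean curvature (spherical component)
    asymmetry = 2 * abs(c[1])            # 1st harmonic: tilt/decentration
    regular_astig = 2 * abs(c[2])        # 2nd harmonic: regular astigmatism
    higher_order = 2 * sum(abs(c[3:]))   # residual harmonics: irregularity
    return spherical, asymmetry, regular_astig, higher_order

# Synthetic ring: 6.2 D sphere + 0.25 D astigmatic amplitude + slight asymmetry
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ring = 6.2 + 0.25 * np.cos(2 * theta) + 0.1 * np.cos(theta)
sph, asym, astig, hoi = fourier_ring_indices(ring)
```

Running the same decomposition per ring (1-4 mm) and on the averaged central zone reproduces the structure of the parameters evaluated in the study.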

  2. Diagnostic algorithm for relapsing acquired demyelinating syndromes in children.

    PubMed

    Hacohen, Yael; Mankad, Kshitij; Chong, W K; Barkhof, Frederik; Vincent, Angela; Lim, Ming; Wassmer, Evangeline; Ciccarelli, Olga; Hemingway, Cheryl

    2017-07-18

    To establish whether children with relapsing acquired demyelinating syndromes (RDS) and myelin oligodendrocyte glycoprotein antibodies (MOG-Ab) show distinctive clinical and radiologic features and to generate a diagnostic algorithm for the main RDS for clinical use. A panel reviewed the clinical characteristics, MOG-Ab and aquaporin-4 (AQP4) Ab, intrathecal oligoclonal bands, and Epstein-Barr virus serology results of 110 children with RDS. A neuroradiologist blinded to the diagnosis scored the MRI scans. Clinical, radiologic, and serologic tests results were compared. The findings showed that 56.4% of children were diagnosed with multiple sclerosis (MS), 25.4% with neuromyelitis optica spectrum disorder (NMOSD), 12.7% with multiphasic disseminated encephalomyelitis (MDEM), and 5.5% with relapsing optic neuritis (RON). Blinded analysis defined baseline MRI as typical of MS in 93.5% of children with MS. Acute disseminated encephalomyelitis presentation was seen only in the non-MS group. Of NMOSD cases, 30.7% were AQP4-Ab positive. MOG-Ab were found in 83.3% of AQP4-Ab-negative NMOSD, 100% of MDEM, and 33.3% of RON. Children with MOG-Ab were younger, were less likely to present with area postrema syndrome, and had lower disability, longer time to relapse, and more cerebellar peduncle lesions than children with AQP4-Ab NMOSD. A diagnostic algorithm applicable to any episode of CNS demyelination leads to 4 main phenotypes: MS, AQP4-Ab NMOSD, MOG-Ab-associated disease, and antibody-negative RDS. Children with MS and AQP4-Ab NMOSD showed features typical of adult cases. Because MOG-Ab-positive children showed notable and distinctive clinical and MRI features, they were grouped into a unified phenotype (MOG-Ab-associated disease), included in a new diagnostic algorithm. © 2017 American Academy of Neurology.

  3. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that currently makes the use of helical CT for mass screening impractical. Moreover, there is a shortage of doctors who can read such medical images in Japan. To overcome these problems, we have provided diagnostic assistance to medical screening specialists by developing, for helical CT lung cancer mass screening, a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a web medical image conference system with improved security for image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication at the telemedicine site enables file encryption and verified login, so patients' private information is protected. The screen of the web medical image conference system can be shared by two or more conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our film-free radiological information system, combining the computer-aided diagnosis workstation and the telemedicine network, can increase diagnostic speed and accuracy and improve the security of medical information.

  4. Diagnostic value of 3D time-of-flight MRA in trigeminal neuralgia.

    PubMed

    Cai, Jing; Xin, Zhen-Xue; Zhang, Yu-Qiang; Sun, Jie; Lu, Ji-Liang; Xie, Feng

    2015-08-01

The aim of this meta-analysis was to evaluate the diagnostic value of 3D time-of-flight magnetic resonance angiography (3D-TOF-MRA) in trigeminal neuralgia (TN). Relevant studies were identified by computerized database searches supplemented by manual search strategies, and were included in accordance with stringent inclusion and exclusion criteria. Following a multistep screening process, high-quality studies related to the diagnostic value of 3D-TOF-MRA in TN were selected for meta-analysis. Statistical analyses were conducted using SAS (version 8.2; SAS Institute, Cary, NC, USA) and Meta-DiSc (version 1.4; Unit of Clinical Biostatistics, Ramón y Cajal Hospital, Madrid, Spain). We initially retrieved 95 studies from the database searches; 13 studies were eventually enrolled, containing a combined total of 1084 TN patients. The meta-analysis showed that the sensitivity and specificity of 3D-TOF-MRA in TN were 95% (95% confidence interval [CI] 0.93-0.96) and 77% (95% CI 0.66-0.86), respectively. The pooled positive and negative likelihood ratios were 2.72 (95% CI 1.81-4.09) and 0.08 (95% CI 0.06-0.12), respectively. The pooled diagnostic odds ratio was 52.92 (95% CI 26.39-106.11), and the corresponding area under the summary receiver operating characteristic curve was 0.9695 (standard error 0.0165). Our results suggest that 3D-TOF-MRA has excellent sensitivity and specificity as a diagnostic tool for TN and can accurately identify neurovascular compression in TN patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
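The likelihood ratios and diagnostic odds ratio are simple functions of sensitivity and specificity. Note that the pooled values quoted above (2.72, 0.08, 52.92) come from study-level pooling and therefore need not equal the ratios implied by the summary sensitivity and specificity alone; the sketch below only shows the definitional relationships:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    implied by a single sensitivity/specificity pair."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive test raises odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative test lowers odds
    dor = lr_pos / lr_neg                      # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Summary sensitivity/specificity reported in the meta-analysis
lr_pos, lr_neg, dor = likelihood_ratios(0.95, 0.77)
```

With sensitivity 0.95 and specificity 0.77 this gives LR+ ≈ 4.1 and LR− ≈ 0.06, illustrating why pooled per-study estimates can differ from a naive back-calculation.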

  5. Diagnostic Abilities of Variable and Enhanced Corneal Compensation Algorithms of GDx in Different Severities of Glaucoma.

    PubMed

    Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Garudadri, Chandra S; Rao, Harsha L

    2016-02-01

    To compare the abilities of retinal nerve fiber layer (RNFL) parameters of variable corneal compensation (VCC) and enhanced corneal compensation (ECC) algorithms of scanning laser polarimetry (GDx) in detecting various severities of glaucoma. Two hundred and eighty-five eyes of 194 subjects from the Longitudinal Glaucoma Evaluation Study who underwent GDx VCC and ECC imaging were evaluated. Abilities of RNFL parameters of GDx VCC and ECC to diagnose glaucoma were compared using area under receiver operating characteristic curves (AUC), sensitivities at fixed specificities, and likelihood ratios. After excluding 5 eyes that failed to satisfy manufacturer-recommended quality parameters with ECC and 68 with VCC, 56 eyes of 41 normal subjects and 161 eyes of 121 glaucoma patients [36 eyes with preperimetric glaucoma, 52 eyes with early (MD>-6 dB), 34 with moderate (MD between -6 and -12 dB), and 39 with severe glaucoma (MD<-12 dB)] were included for the analysis. Inferior RNFL, average RNFL, and nerve fiber indicator parameters showed the best AUCs and sensitivities both with GDx VCC and ECC in diagnosing all severities of glaucoma. AUCs and sensitivities of all RNFL parameters were comparable between the VCC and ECC algorithms (P>0.20 for all comparisons). Likelihood ratios associated with the diagnostic categorization of RNFL parameters were comparable between the VCC and ECC algorithms. In scans satisfying the manufacturer-recommended quality parameters, which were significantly greater with ECC than VCC algorithm, diagnostic abilities of GDx ECC and VCC in glaucoma were similar.

  6. A computerized, self-administered questionnaire to evaluate posttraumatic stress among firefighters after the World Trade Center collapse.

    PubMed

    Corrigan, Malachy; McWilliams, Rita; Kelly, Kerry J; Niles, Justin; Cammarata, Claire; Jones, Kristina; Wartenberg, Daniel; Hallman, William K; Kipen, Howard M; Glass, Lara; Schorr, John K; Feirstein, Ira; Prezant, David J

    2009-11-01

    We sought to determine the frequency of psychological symptoms and elevated posttraumatic stress disorder (PTSD) risk among New York City firefighters after the World Trade Center (WTC) attack and whether these measures were associated with Counseling Services Unit (CSU) use or mental health-related medical leave over the first 2.5 years after the attack. Shortly after the WTC attack, a computerized, binary-response screening questionnaire was administered. Exposure assessment included WTC arrival time and "loss of a co-worker while working at the collapse." We determined elevated PTSD risk using thresholds derived from Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision, and a sensitivity-specificity analysis. Of 8487 participants, 76% reported at least 1 symptom, 1016 (12%) met criteria for elevated PTSD risk, and 2389 (28%) self-referred to the CSU, a 5-fold increase from before the attack. Higher scores were associated with CSU use, functional job impairment, and mental health-related medical leave. Exposure-response gradients were significant for all outcomes. This screening tool effectively identified elevated PTSD risk, higher CSU use, and functional impairment among firefighters and therefore may be useful in allocating scarce postdisaster mental health resources.

  7. Objective research on tongue manifestation of patients with eczema.

    PubMed

    Yu, Zhifeng; Zhang, Haifang; Fu, Linjie; Lu, Xiaozuo

    2017-07-20

Tongue observation often depends on subjective judgment, so it is necessary to establish an objective and quantifiable standard for it. The aims were to characterize the tongue manifestations of patients suffering from different types of eczema and to reveal the clinical significance of the tongue images. Two hundred patients with eczema were recruited and divided into three groups according to the diagnostic criteria: the acute group had 47 patients, the subacute group 82, and the chronic group 71. A computerized tongue image digital analysis device was used to measure tongue parameters, and the L*a*b* color model was applied to classify them quantitatively. For parameters such as tongue color, tongue shape, color of the tongue coating, and thickness of the tongue coating, there was a significant difference among the acute, subacute, and chronic groups (P < 0.05). For the L*a*b* values of both the tongue and the tongue coating, the differences among these types of eczema were also statistically significant (P < 0.05). Tongue images can thus reflect some features of eczema, different types of eczema may be related to changes in the tongue images, and the computerized tongue image digital analysis device can objectively capture the tongue characteristics of patients with eczema.

  8. Combining technologies: a computerized occlusal analysis system synchronized with a computerized electromyography system.

    PubMed

    Kerstein, Robert B

    2004-04-01

Current advances in computer technology have given dentists precise ways to examine occlusal contacts and muscle function. Recently, two separate computer technologies, the T-Scan II Occlusal Analysis System and the Biopak Electromyography Recording System, have been synchronized so that an operator can record their separate diagnostic data simultaneously. The simultaneous recording and playback capacity of these two systems allows the operator to analyze and correlate specific occlusal moments with the specific electromyographic changes that result from them. This synchronization provides unparalleled evidence of the effect that occlusal contact arrangement has on muscle function. The occlusal condition of an inserted dental prosthesis, or the occlusal scheme of the natural teeth before and after corrective occlusal adjustments, can therefore be readily evaluated, documented, and quantified, both for the quality of the occlusal parameters and for the muscle-activity responses to that occlusal condition. This article describes the synchronization of the two systems and illustrates their use in performing precision occlusal adjustment procedures on two patients: one who demonstrated occlusal disharmony with the signs and symptoms of chronic myofascial pain dysfunction syndrome, and one who had undergone extensive restorative work but exhibited occlusal discomfort postoperatively.

  9. Lessons learned: a "homeless shelter intervention" by a medical student.

    PubMed

    Owusu, Yasmin; Kunik, Mark; Coverdale, John; Shah, Asim; Primm, Annelle; Harris, Toi

    2012-05-01

The authors explored the process of implementing a medical student-initiated program designed to provide computerized mental health screening, referral, and education in a homeless shelter. An educational program was designed to teach homeless shelter staff about psychiatric disorders and culturally informed treatment strategies. Pre- and post-questionnaires were obtained in conjunction with the educational program from seven volunteer shelter staff. A computerized mental health screening tool, the Quick Psycho-Diagnostics Panel (QPD), was used to screen for the presence of nine psychiatric disorders in 19 volunteer homeless shelter residents. Shelter staff's overall fund of knowledge improved by an average of 23% on the basis of the pre-/post-questionnaires (p = 0.005). Of the individuals who participated in the mental health screening, 68% screened positive for at least one psychiatric disorder and were referred for further mental health care. At the 3-month follow-up, 46% of those referred had accessed their referral services as recommended. Medical student-initiated psychiatric outreach programs in the homeless community have the potential to reduce mental health disparities both by increasing access to mental health services and by providing education. The authors discuss the educational challenges and benefits for the medical students involved in this project.

  10. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya.

    PubMed

    Huerga, Helena; Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

The Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB), in a prospective observational cohort of ambulatory (either severely ill, with CD4 <200 cells/μl, or with body mass index <17 kg/m²) and hospitalized symptomatic HIV-positive adults in Kenya. The incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Among the 474 patients included, 44.1% were severely ill, 69.6% had CD4 <200 cells/μl, 59.9% had initiated ART, and 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive; of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95% CI: 39.4-55.6%) to 84.0% (95% CI: 77.3-89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95% CI: 54.1-69.8%) to 82.1% (95% CI: 75.1-87.7%), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95% CI: 66.8-81.0%) to 87.8% (95% CI: 81.6-92.5%), when using clinical signs and Xpert. LAM-positive patients had an increased risk of 2-month mortality (aOR: 2.7; 95% CI: 1.5-4.9). LAM should be included in TB diagnostic algorithms, in parallel with microscopy or Xpert requests, for HIV-positive patients who are either ambulatory (severely ill or CD4 <200 cells/μl) or hospitalized. LAM allows same-day treatment initiation in patients at higher risk of death and in those unable to produce sputum.
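The incremental yield figures follow directly from the definition given in the abstract: the difference in the share of the 156 confirmed TB patients captured with versus without LAM. A sketch with counts back-calculated from the reported percentages for the clinical-signs-plus-X-ray algorithm (74/156 ≈ 47.4%, 131/156 ≈ 84.0%; the abstract reports proportions, not raw counts, so these counts are an assumption):

```python
def incremental_yield(diagnosed_without, diagnosed_with, confirmed_total):
    """Incremental diagnostic yield of adding LAM: difference in the share of
    confirmed TB patients captured by the algorithm with vs without LAM."""
    without = diagnosed_without / confirmed_total
    with_lam = diagnosed_with / confirmed_total
    return with_lam - without

# Clinical signs + X-ray algorithm: 47.4% -> 84.0% of 156 confirmed TB cases
gain = incremental_yield(74, 131, 156)
```

The result, about 0.365, recovers the reported ≈36.6 percentage-point gain up to rounding of the published proportions.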

  11. Feeding Disorders in Children with Developmental Disabilities.

    ERIC Educational Resources Information Center

    Schwarz, Steven M.

    2003-01-01

    This article describes an approach to evaluating and managing feeding disorders in children with developmental disabilities and examines effects of these management strategies on growth and clinical outcomes. A structured approach is stressed and a diagnostic and treatment algorithm is presented. Use with 79 children found that diagnostic-specific…

  12. Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications: Recommendations of the International RDC/TMD Consortium Network* and Orofacial Pain Special Interest Group†

    PubMed Central

    Schiffman, Eric; Ohrbach, Richard; Truelove, Edmond; Look, John; Anderson, Gary; Goulet, Jean-Paul; List, Thomas; Svensson, Peter; Gonzalez, Yoly; Lobbezoo, Frank; Michelotti, Ambra; Brooks, Sharon L.; Ceusters, Werner; Drangsholt, Mark; Ettlin, Dominik; Gaul, Charly; Goldberg, Louis J.; Haythornthwaite, Jennifer A.; Hollender, Lars; Jensen, Rigmor; John, Mike T.; De Laat, Antoon; de Leeuw, Reny; Maixner, William; van der Meulen, Marylee; Murray, Greg M.; Nixdorf, Donald R.; Palla, Sandro; Petersson, Arne; Pionchon, Paul; Smith, Barry; Visscher, Corine M.; Zakrzewska, Joanna; Dworkin, Samuel F.

    2015-01-01

    Aims The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project—the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings. 
Results The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD as well as valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments’ 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions. Conclusion The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. These validated instruments allow for identification of patients with a range of simple to complex TMD presentations. PMID:24482784

  13. Influenza detection and prediction algorithms: comparative accuracy trial in Östergötland county, Sweden, 2008-2012.

    PubMed

    Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T

    2017-07-01

Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best-performing prediction method. We conclude that the available algorithms for influenza detection and prediction perform satisfactorily when applied to local diagnostic data during winter influenza seasons, but not consistently when applied to local syndromic data. Further evaluation and research on combining methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
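The Serfling method mentioned above fits a cyclic regression baseline (secular trend plus annual harmonics) to weekly counts and flags weeks that exceed the baseline by a chosen number of residual standard deviations. A minimal NumPy sketch on synthetic weekly counts (a full implementation would fit only non-epidemic periods, and the paper's exact calibration is not given):

```python
import numpy as np

def serfling_baseline(week, counts, z=1.96):
    """Fit a Serfling-style baseline (linear trend + annual sine/cosine
    harmonics) and return the fitted baseline and an epidemic alert threshold."""
    t = np.asarray(week, dtype=float)
    y = np.asarray(counts, dtype=float)
    X = np.column_stack([
        np.ones_like(t), t,                      # intercept and secular trend
        np.sin(2 * np.pi * t / 52),              # annual harmonic (sine)
        np.cos(2 * np.pi * t / 52),              # annual harmonic (cosine)
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid_sd = np.std(y - fitted)
    return fitted, fitted + z * resid_sd         # baseline and alert threshold

# Synthetic weekly counts over three years with a winter seasonal pattern
rng = np.random.default_rng(0)
weeks = np.arange(156)
counts = 50 + 20 * np.cos(2 * np.pi * weeks / 52) + rng.normal(0, 3, 156)
baseline, threshold = serfling_baseline(weeks, counts)
alarms = counts > threshold
```

Weeks where `alarms` is true would trigger an epidemic alert; with purely seasonal synthetic data only a few noise-driven weeks exceed the threshold.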

  14. Comparison of Traditional and Reverse Syphilis Screening Algorithms in Medical Health Checkups.

    PubMed

    Nah, Eun Hee; Cho, Seon; Kim, Suyoung; Cho, Han Ik; Chai, Jong Yil

    2017-11-01

Syphilis diagnostic algorithms vary significantly between countries depending on the local syphilis epidemiology and other considerations, including the expected workload, the need for laboratory automation, and budget factors. This study investigated the efficacy of the traditional and reverse syphilis diagnostic algorithms during general health checkups. In total, 1,000 blood specimens were obtained from 908 men and 92 women during their regular health checkups. Traditional screening and reverse screening were applied to the same specimens using automated rapid plasma reagin (RPR) and Treponema pallidum latex agglutination (TPLA) tests, respectively. Specimens that were reactive in the reverse algorithm (TPLA) were subjected to a second treponemal test, the chemiluminescent microparticle immunoassay (CMIA). Of the 1,000 specimens tested, 68 (6.8%) were reactive by reverse screening (TPLA) compared with 11 (1.1%) by traditional screening (RPR). The traditional algorithm failed to detect 48 specimens [TPLA(+)/RPR(-)/CMIA(+)]. The median TPLA cutoff index (COI) was higher in CMIA-reactive than in CMIA-nonreactive cases (90.5 vs 12.5 U). The reverse screening algorithm thus detected subjects with possible latent syphilis who were missed by the traditional algorithm; these individuals can be offered evaluation for syphilis during their health checkups. The COI value of the initial TPLA test may help exclude false-positive TPLA results in the reverse algorithm. © The Korean Society for Laboratory Medicine
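The reverse algorithm's branching can be made explicit. The sketch below follows the standard reverse-screening pattern reflected in the TPLA(+)/RPR(-)/CMIA(+) discordance discussed above (treponemal test first, nontreponemal reflex, second treponemal tiebreaker); the interpretive labels are illustrative, not the study's wording:

```python
def reverse_screen(tpla_reactive, rpr_reactive, cmia_reactive):
    """Reverse syphilis screening decision logic (simplified sketch)."""
    if not tpla_reactive:
        return "TPLA(-): nonreactive, no further testing"
    if rpr_reactive:
        return "TPLA(+)/RPR(+): consistent with syphilis -- evaluate and stage"
    if cmia_reactive:
        return "TPLA(+)/RPR(-)/CMIA(+): possible latent or previously treated syphilis"
    return "TPLA(+)/RPR(-)/CMIA(-): likely false-positive TPLA"

# The 48 specimens missed by traditional RPR-first screening fall in this branch:
missed = reverse_screen(True, False, True)
```

Traditional screening, by contrast, stops at the first branch whenever RPR is nonreactive, which is exactly how the 48 discordant specimens were missed.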

  15. Aerophagia in children: characterization of a functional gastrointestinal disorder.

    PubMed

Chitkara, D K; Bredenoord, A J; Wang, M; Rucker, M J; Talley, N J

    2005-08-01

    The purpose of this study was to describe presenting symptoms, diagnostic testing, treatments and outcomes in a group of children with a diagnosis of aerophagia. A computerized diagnostic index was used to identify all children between the age of 1 and 17 years diagnosed with aerophagia at a tertiary care medical centre between 1975 and 2003. Individual medical charts were abstracted for information on the demographics, clinical features, co-morbid diagnoses, diagnostic work up and treatment of children with aerophagia. Information on presenting symptoms was also collected for a group of children who were retrospectively classified as having functional dyspepsia for comparison (n = 40). Forty-five children had a diagnosis of aerophagia. The mean duration of symptoms in children with aerophagia was 16 +/- 5 months. The most common gastrointestinal symptoms were abdominal pain, distention and frequent belching. Children with functional dyspepsia had a higher prevalence of nausea, vomiting, abdominal pain and unintentional weight loss compared to children with aerophagia (all P < 0.05). In conclusion, aerophagia is a disorder that is diagnosed in neurologically normal males and females, who can experience prolonged symptoms. Although many children with aerophagia present with upper gastrointestinal symptoms, the disorder appears to be distinct from functional dyspepsia.

  16. Validation of an integrated software for the detection of rapid eye movement sleep behavior disorder.

    PubMed

    Frauscher, Birgit; Gabelia, David; Biermayr, Marlene; Stefani, Ambra; Hackner, Heinz; Mitterling, Thomas; Poewe, Werner; Högl, Birgit

    2014-10-01

    Rapid eye movement sleep without atonia (RWA) is the polysomnographic hallmark of REM sleep behavior disorder (RBD). To partially overcome the disadvantages of manual RWA scoring, which is time consuming but essential for the accurate diagnosis of RBD, we aimed to validate software specifically developed and integrated with polysomnography for RWA detection against the gold standard of manual RWA quantification. Academic referral center sleep laboratory. Polysomnographic recordings of 20 patients with RBD and 60 healthy volunteers were analyzed. Motor activity during REM sleep was quantified manually and computer assisted (with and without artifact detection) according to Sleep Innsbruck Barcelona (SINBAR) criteria for the mentalis ("any," phasic, tonic electromyographic [EMG] activity) and the flexor digitorum superficialis (FDS) muscle (phasic EMG activity). Computer-derived indices (with and without artifact correction) for "any," phasic, tonic mentalis EMG activity, phasic FDS EMG activity, and the SINBAR index ("any" mentalis + phasic FDS) correlated well with the manually derived indices (all Spearman rhos 0.66-0.98). In contrast with computerized scoring alone, computerized scoring plus manual artifact correction (median duration 5.4 min) led to a significant reduction of false positives for "any" mentalis (40%), phasic mentalis (40.6%), and the SINBAR index (41.2%). Quantification of tonic mentalis and phasic FDS EMG activity was not influenced by artifact correction. The computer algorithm used here appears to be a promising tool for REM sleep behavior disorder detection in both research and clinical routine. A short check for plausibility of automatic detection should be a basic prerequisite for this and all other available computer algorithms. © 2014 Associated Professional Sleep Societies, LLC.

  17. Incidence and Prevalence of Juvenile Idiopathic Arthritis Among Children in a Managed Care Population, 1996–2009

    PubMed Central

    Harrold, Leslie R.; Salman, Craig; Shoor, Stanford; Curtis, Jeffrey R.; Asgari, Maryam M.; Gelfand, Joel M.; Wu, Jashin J.; Herrinton, Lisa J.

    2017-01-01

    Objective Few studies based in well-defined North American populations have examined the occurrence of juvenile idiopathic arthritis (JIA), and none has been based in an ethnically diverse population. We used computerized healthcare information from the Kaiser Permanente Northern California membership to validate JIA diagnoses and estimate the incidence and prevalence of the disease in this well-characterized population. Methods We identified children aged ≤ 15 years with ≥ 1 relevant International Classification of Diseases, 9th edition, diagnosis code of 696.0, 714, or 720 in computerized clinical encounter data during 1996–2009. In a random sample, we then reviewed the medical records to confirm the diagnosis and diagnosis date and to identify the best-performing case-finding algorithms. Finally, we used the case-finding algorithms to estimate the incidence rate and point prevalence of JIA. Results A diagnosis of JIA was confirmed in 69% of individuals with at least 1 relevant code. Forty-five percent were newly diagnosed during the study period. The age- and sex-standardized incidence rate of JIA per 100,000 person-years was 11.9 (95% CI 10.9–12.9). It was 16.4 (95% CI 14.6–18.1) in girls and 7.7 (95% CI 6.5–8.9) in boys. The peak incidence rate occurred in children aged 11–15 years. The prevalence of JIA per 100,000 persons was 44.7 (95% CI 39.1–50.2) on December 31, 2009. Conclusion The incidence rate of JIA observed in the Kaiser Permanente population, 1996–2009, was similar to that reported in Rochester, Minnesota, USA, but 2 to 3 times higher than Canadian estimates. PMID:23588938
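The incidence rates quoted above are straightforward to compute once case counts and person-time are known. Below is a minimal sketch of the crude (unstandardized) calculation under a Poisson assumption; the study itself reports age- and sex-standardized rates, which additionally require reference-population weights:

```python
import math

def incidence_per_100k(cases: int, person_years: float):
    """Crude incidence rate per 100,000 person-years with an approximate
    95% CI computed on the log scale (Poisson assumption)."""
    rate = cases / person_years * 100_000
    se_log = 1.0 / math.sqrt(cases)        # SE of log(rate) for a Poisson count
    lower = rate * math.exp(-1.96 * se_log)
    upper = rate * math.exp(1.96 * se_log)
    return rate, lower, upper
```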

  18. Indication-Based Ordering: A New Paradigm for Glycemic Control in Hospitalized Inpatients

    PubMed Central

    Lee, Joshua; Clay, Brian; Zelazny, Ziband; Maynard, Gregory

    2008-01-01

    Background Inpatient glycemic control is a constant challenge. Institutional insulin management protocols and structured order sets are commonly advocated but poorly studied. Effective and validated methods to integrate algorithmic protocol guidance into the insulin ordering process are needed. Methods We introduced a basic structured set of computerized insulin orders (Version 1), and later introduced a paper insulin management protocol, to assist users with the order set. Metrics were devised to assess the impact of the protocol on insulin use, glycemic control, and hypoglycemia using pharmacy data and point of care glucose tests. When incremental improvement was seen (as described in the results), Version 2 of the insulin orders was created to further streamline the process. Results The percentage of regimens containing basal insulin improved with Version 1. The percentage of patient days with hypoglycemia improved from 3.68% at baseline to 2.59% with Version 1 plus the paper insulin management protocol, representing a relative risk for hypoglycemic day of 0.70 [confidence interval (CI) 0.62, 0.80]. The relative risk of an uncontrolled (mean glucose over 180 mg/dl) patient stay was reduced to 0.84 (CI 0.77, 0.91) with Version 1 and was reduced further to 0.73 (CI 0.66, 0.81) with the paper protocol. Version 2 used clinician-entered patient parameters to guide protocol-based insulin ordering and simultaneously improved the flexibility and ease of ordering over Version 1. Conclusion Patient parameter and protocol-based clinical decision support, added to computerized provider order entry, has a track record of improving glycemic control indices. This justifies the incorporation of these algorithms into online order management. PMID:19885198

  19. PCA-based artifact removal algorithm for stroke detection using UWB radar imaging.

    PubMed

    Ricci, Elisa; di Domenico, Simone; Cianca, Ernestina; Rossi, Tommaso; Diomedi, Marina

    2017-06-01

    Stroke patients should be dispatched to the highest level of care available in the shortest time. In this context, a transportable system in specialized ambulances, able to evaluate the presence of an acute brain lesion in a short time interval (i.e., a few minutes), could shorten the delay of treatment. UWB radar imaging is an emerging diagnostic branch that has great potential for the implementation of a transportable and low-cost device. Transportability, low cost and short response time pose challenges to the signal processing algorithms of the backscattered signals, as they should guarantee good performance with a reasonably low number of antennas and low computational complexity, tightly related to the response time of the device. The paper shows that a PCA-based preprocessing algorithm can: (1) achieve good performance already with a computationally simple beamforming algorithm; (2) outperform state-of-the-art preprocessing algorithms; (3) enable a further improvement in performance (and/or a decrease in the number of antennas) by using a multistatic approach with just a modest increase in computational complexity. This is an important result toward the implementation of such a diagnostic device, which could play an important role in emergency scenarios.
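The core idea of PCA-based artifact removal is that clutter (e.g., antenna coupling and skin reflection) dominates every channel, so it concentrates in the leading principal component, which can be estimated and projected out. Below is a minimal pure-Python sketch under that assumption, removing only the first component via power iteration; the paper's actual preprocessing and beamforming chain is more involved:

```python
def remove_dominant_component(signals, iters=200):
    """Subtract the leading principal component shared across channels.

    `signals` is a list of equal-length channel recordings. The dominant
    common pattern is estimated by power iteration (v <- X^T X v, with X
    the channels stacked as rows) and projected out of every channel.
    """
    import random
    n, m = len(signals), len(signals[0])
    # center each channel
    centered = []
    for ch in signals:
        mu = sum(ch) / m
        centered.append([x - mu for x in ch])
    # power iteration for the top right-singular vector of X
    v = [random.random() for _ in range(m)]
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(m)) for row in centered]        # w = X v
        v = [sum(w[i] * centered[i][j] for i in range(n)) for j in range(m)]  # v = X^T w
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    # remove each channel's projection onto the dominant direction
    cleaned = []
    for row in centered:
        coef = sum(row[j] * v[j] for j in range(m))
        cleaned.append([row[j] - coef * v[j] for j in range(m)])
    return cleaned
```

When every channel contains the same clutter pattern, the cleaned output is (numerically) zero, which is exactly the behavior a clutter-removal preprocessing step aims for.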

  20. Mental Health Risk Adjustment with Clinical Categories and Machine Learning.

    PubMed

    Shrestha, Akritee; Bergquist, Savannah; Montz, Ellen; Rose, Sherri

    2017-12-15

    To propose nonparametric ensemble machine learning for mental health and substance use disorders (MHSUD) spending risk adjustment formulas, including considering Clinical Classification Software (CCS) categories as diagnostic covariates over the commonly used Hierarchical Condition Category (HCC) system. 2012-2013 Truven MarketScan database. We implement 21 algorithms to predict MHSUD spending, as well as a weighted combination of these algorithms called super learning. The algorithm collection included seven unique algorithms that were supplied with three differing sets of MHSUD-related predictors alongside demographic covariates: HCC, CCS, and HCC + CCS diagnostic variables. Performance was evaluated based on cross-validated R2 and predictive ratios. Results show that super learning had the best performance based on both metrics. The top single algorithm was random forests, which improved on ordinary least squares regression by 10 percent with respect to relative efficiency. CCS categories-based formulas were generally more predictive of MHSUD spending compared to HCC-based formulas. Literature supports the potential benefit of implementing a separate MHSUD spending risk adjustment formula. Our results suggest there is an incentive to explore machine learning for MHSUD-specific risk adjustment, as well as considering CCS categories over HCCs. © Health Research and Educational Trust.
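Super learning forms a weighted combination of base learners, with weights chosen to minimize cross-validated prediction error. The following is a toy two-learner version of that weight search, using a simple grid over convex weights; the actual super learner uses full cross-validation over the 21 algorithms:

```python
def best_convex_weight(preds_a, preds_b, y, grid=101):
    """Return the weight w in [0, 1] on learner A (1 - w on learner B)
    that minimizes squared error of the blended prediction on held-out data."""
    best_w, best_err = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        err = sum((w * a + (1.0 - w) * b - t) ** 2
                  for a, b, t in zip(preds_a, preds_b, y))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```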

  1. Neurocognitive findings in compulsive sexual behavior: a preliminary study.

    PubMed

    Derbyshire, Katherine L; Grant, Jon E

    2015-06-01

    BACKGROUND AND AIMS: Compulsive sexual behavior (CSB) is a common behavior affecting 3-6% of the population, characterized by repetitive and intrusive sexual urges or behaviors that typically cause negative social and emotional consequences. For this small pilot study of neurocognitive data, we compared 13 individuals with CSB and gender-matched healthy controls on diagnostic assessments and computerized neurocognitive testing. No significant differences were found between the groups. These data contradict a common hypothesis that individuals with CSB differ cognitively from those without psychiatric comorbidities, as well as previous research on impulse control disorders and alcohol dependence. Further research is needed to better understand and classify CSB based on these findings.

  2. Course and outcome of somatoform disorders in non-referred adolescents.

    PubMed

    Essau, Cecilia A

    2007-01-01

    The author examined the course of somatoform disorders in non-referred adolescents. Somatoform disorders were coded from DSM-IV criteria, using the computerized Munich (Germany) version of the Composite International Diagnostic Interview. About 35.9% of the adolescents with somatoform disorders at the index investigation continued to have the same disorders at the follow-up investigation: 26.7% had anxiety, 17.1% had depression, 22% had substance-use disorders, and 53.7% had no psychiatric disorders. Factors related to the chronicity of somatoform disorders included gender, comorbid depressive disorders, parental psychiatric disorders, and negative life events. Somatoform disorders showed a heterogeneous pattern of course.

  3. [Imaging of temporo-mandibular disorders].

    PubMed

    Felizardo, Rufino; Foucart, Jean-Michel; Pizelle, Christophe

    2012-03-01

    Dominated for years by standard films (tomographic open-mouth and closed-mouth X-rays) and MRI, radiographic studies of the TMJ have progressively lost their usefulness to diagnosticians, who increasingly rely on well-codified clinical examinations, which suffice in the great majority of cases. The indications for and diagnostic worth of radiological studies, and the impact they have on the management of TMJ disorders, are today quite low, especially when the high cost of procedures like MRI, computerized tomography, and CBCT is taken into account. In this article we discuss the various maladies that dentists might encounter and the situations in which radiological examinations are still indicated. © EDP Sciences, SFODF, 2012.

  4. Practical measures of cognitive function and promotion of their performance in the context of research.

    PubMed

    Gujski, Mariusz; Juńczyk, Tomasz; Pinkas, Jaroslaw; Owoc, Alfred; Bojar, Iwona

    2016-09-01

    The aging of the population generates a number of very interesting research questions in the fields of medicine, psychology, sociology, demography, and many others. One of the issues subject to both intensive research by scientists and exploration by practitioners is associated with cognitive functions. The article presents current knowledge regarding practical actions in the field of promoting cognitive function using diagnostic programmes and training based on modern technologies. An important aspect presented in this study concerns the well-being benefits of maintaining or improving cognitive function. Information and communication technologies will contribute to the dissemination of computerized cognitive training, including personalized training.

  5. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    PubMed

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). 
The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilisation. The methodology can also be applied to other areas of clinical care. Published by the BMJ Publishing Group Limited.
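The validation metrics reported in this record all derive from the 2x2 confusion table of algorithm labels against the expert "gold standard" classifications. A minimal sketch of the computation:

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and F-measure from confusion counts
    (algorithm labels vs. expert-classified 'gold standard' records)."""
    sensitivity = tp / (tp + fn)          # recall of truly positive records
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value (precision)
    f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
    return sensitivity, specificity, ppv, f_measure
```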

  6. MO-AB-210-00: Diagnostic Ultrasound Imaging Quality Control and High Intensity Focused Ultrasound Therapy Hands-On Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The goal of this ultrasound hands-on workshop is to demonstrate advancements in high intensity focused ultrasound (HIFU) and to demonstrate quality control (QC) testing in diagnostic ultrasound. HIFU is a therapeutic modality that uses ultrasound waves as carriers of energy. HIFU is used to focus a beam of ultrasound energy into a small volume at specific target locations within the body. The focused beam causes localized high temperatures and produces well-defined regions of necrosis. This completely non-invasive technology has great potential for tumor ablation and targeted drug delivery. At the workshop, attendees will see configurations, applications, and hands-on demonstrations with on-site instructors at separate stations. The involvement of medical physicists in diagnostic ultrasound imaging service is increasing due to QC and accreditation requirements. At the workshop, an array of ultrasound testing phantoms and ultrasound scanners will be provided for attendees to learn diagnostic ultrasound QC in a hands-on environment with live demonstrations of the techniques. Target audience: Medical physicists and other medical professionals in diagnostic imaging and radiation oncology with an interest in high-intensity focused ultrasound and in diagnostic ultrasound QC. Learning Objectives: Learn ultrasound physics and safety for HIFU applications through live demonstrations. Get an overview of the state-of-the-art in HIFU technologies and equipment. Gain familiarity with common elements of a quality control program for diagnostic ultrasound imaging. Identify QC tools available for testing diagnostic ultrasound systems and learn how to use these tools. Supporting vendors for the HIFU and diagnostic ultrasound QC hands-on workshop: Philips Healthcare; Alpinion Medical Systems; Verasonics, Inc; Zonare Medical Systems, Inc; Computerized Imaging Reference Systems (CIRS), Inc.; GAMMEX, Inc.; Cablon Medical BV. Steffen Sammet: NIH/NCI grant 5R25CA132822, NIH/NINDS grant 5R25NS080949 and Philips Healthcare research grant C32.

  7. Evaluation of the Effect of Diagnostic Molecular Testing on the Surgical Decision-Making Process for Patients With Thyroid Nodules.

    PubMed

    Noureldine, Salem I; Najafian, Alireza; Aragon Han, Patricia; Olson, Matthew T; Genther, Dane J; Schneider, Eric B; Prescott, Jason D; Agrawal, Nishant; Mathur, Aarti; Zeiger, Martha A; Tufano, Ralph P

    2016-07-01

    Diagnostic molecular testing is used in the workup of thyroid nodules. While these tests appear to be promising in more definitively assigning a risk of malignancy, their effect on surgical decision making has yet to be demonstrated. To investigate the effect of diagnostic molecular profiling of thyroid nodules on the surgical decision-making process. A surgical management algorithm was developed and published after peer review that incorporated individual Bethesda System for Reporting Thyroid Cytopathology classifications with clinical, laboratory, and radiological results. This algorithm was created to formalize the decision-making process selected herein in managing patients with thyroid nodules. Between April 1, 2014, and March 31, 2015, a prospective study of patients who had undergone diagnostic molecular testing of a thyroid nodule before being seen for surgical consultation was performed. The recommended management undertaken by the surgeon was then prospectively compared with the corresponding one in the algorithm. Patients with thyroid nodules who did not undergo molecular testing and were seen for surgical consultation during the same period served as a control group. All pertinent treatment options were presented to each patient, and any deviation from the algorithm was recorded prospectively. To evaluate the appropriateness of any change (deviation) in management, the surgical histopathology diagnosis was correlated with the surgery performed. The study cohort comprised 140 patients who underwent molecular testing. Their mean (SD) age was 50.3 (14.6) years, and 75.0% (105 of 140) were female. Over a 1-year period, 20.3% (140 of 688) had undergone diagnostic molecular testing before surgical consultation, and 79.7% (548 of 688) had not undergone molecular testing. The surgical management deviated from the treatment algorithm in 12.9% (18 of 140) with molecular testing and in 10.2% (56 of 548) without molecular testing (P = .37). 
In the group with molecular testing, the surgical management plan of only 7.9% (11 of 140) was altered as a result of the molecular test. All but 1 of those patients were found to be overtreated relative to the surgical histopathology analysis. Molecular testing did not significantly affect the surgical decision-making process in this study. Among patients whose treatment was altered based on these markers, there was evidence of overtreatment.

  8. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS)

    PubMed Central

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal situation of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique. PMID:29293593
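In the CS imaging stage, ℓ1-minimization recovers a sparse solution from the reduced set of path measurements. Below is a toy dense-matrix sketch using the classic iterative shrinkage-thresholding algorithm (ISTA); the paper's own formulation, measurement operator, and solver details may differ:

```python
def soft_threshold(x, t):
    """Proximal operator of t * |x| (elementwise shrinkage)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(A, y, lam=0.1, iters=500):
    """Solve min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 by ISTA.

    A is a dense n x m list-of-lists; the step size uses the safe (but
    loose) bound L <= ||A||_F^2 on the gradient's Lipschitz constant.
    """
    n, m = len(A), len(A[0])
    step = 1.0 / sum(A[i][j] ** 2 for i in range(n) for j in range(m))
    x = [0.0] * m
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(m)) - y[i] for i in range(n)]  # residual A x - y
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(m)]         # gradient A^T r
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(m)]
    return x
```

For an identity measurement matrix the solution reduces to soft-thresholding of the data, which gives a quick sanity check of the implementation.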

  9. Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.

    PubMed

    Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro

    2018-03-09

    Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammer sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and the model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrated that the proposed scheme achieved favorable assessment accuracy with high efficiency and low computation load.
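The sequential ("online") model-updating scheme can be illustrated with the simplest online learner, a mistake-driven perceptron that revises itself after each expert-labeled response. This is a sketch of the general idea only; the paper evaluates several state-of-the-art online algorithms on extracted acoustic features, and the +1/-1 label convention below is an assumption:

```python
class OnlinePerceptron:
    """Mistake-driven online linear classifier: the model is updated
    incrementally as each labeled example (feature vector, +1/-1) arrives."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score >= 0 else -1     # e.g., +1 = healthy, -1 = defective

    def update(self, x, y):
        if self.predict(x) != y:           # learn only from mistakes
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y
```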

  10. Effectiveness of Computerized Decision Support Systems Linked to Electronic Health Records: A Systematic Review and Meta-Analysis

    PubMed Central

    Kwag, Koren H.; Lytras, Theodore; Bertizzolo, Lorenzo; Brandt, Linn; Pecoraro, Valentina; Rigon, Giulio; Vaona, Alberto; Ruggiero, Francesca; Mangia, Massimo; Iorio, Alfonso; Kunnamo, Ilkka; Bonovas, Stefanos

    2014-01-01

    We systematically reviewed randomized controlled trials (RCTs) assessing the effectiveness of computerized decision support systems (CDSSs) featuring rule- or algorithm-based software integrated with electronic health records (EHRs) and evidence-based knowledge. We searched MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and Cochrane Database of Abstracts of Reviews of Effects. Information on system design, capabilities, acquisition, implementation context, and effects on mortality, morbidity, and economic outcomes were extracted. Twenty-eight RCTs were included. CDSS use did not affect mortality (16 trials, 37395 patients; 2282 deaths; risk ratio [RR] = 0.96; 95% confidence interval [CI] = 0.85, 1.08; I2 = 41%). A statistically significant effect was evident in the prevention of morbidity, any disease (9 RCTs; 13868 patients; RR = 0.82; 95% CI = 0.68, 0.99; I2 = 64%), but selective outcome reporting or publication bias cannot be excluded. We observed differences for costs and health service utilization, although these were often small in magnitude. Across clinical settings, new generation CDSSs integrated with EHRs do not affect mortality and might moderately improve morbidity outcomes. PMID:25322302
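The pooled effect measures in this review are risk ratios with 95% confidence intervals. For a single two-arm trial the calculation is simple, as sketched below; the meta-analytic pooling across trials additionally weights the per-trial log risk ratios:

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Risk ratio and 95% CI for one two-arm trial, with the standard
    error of log(RR) from the usual delta-method formula."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper
```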

  11. Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model

    PubMed Central

    Seo, Dong Gi; Weiss, David J.

    2015-01-01

    Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848

  12. Extraction of skin lesions from non-dermoscopic images for surgical excision of melanoma.

    PubMed

    Jafari, M Hossein; Nasr-Esfahani, Ebrahim; Karimi, Nader; Soroushmehr, S M Reza; Samavi, Shadrokh; Najarian, Kayvan

    2017-06-01

    Computerized prescreening of suspicious moles and lesions for malignancy is of great importance for assessing the need and the priority of the removal surgery. Detection can be done by images captured by standard cameras, which are preferable due to their low cost and availability. One important step in computerized evaluation is accurate detection of the lesion's region, i.e., segmentation of an image into two regions as lesion and normal skin. In this paper, a new method based on deep neural networks is proposed for accurate extraction of a lesion region. The input image is preprocessed, and then, its patches are fed to a convolutional neural network. Local texture and global structure of the patches are processed in order to assign pixels to lesion or normal classes. A method for effective selection of training patches is proposed for more accurate detection of a lesion's border. Our results indicate that the proposed method could reach the accuracy of 98.7% and the sensitivity of 95.2% in segmentation of lesion regions over the dataset of clinical images. The experimental results of qualitative and quantitative evaluations demonstrate that our method can outperform other state-of-the-art algorithms existing in the literature.

  13. Automation of orbit determination functions for National Aeronautics and Space Administration (NASA)-supported satellite missions

    NASA Technical Reports Server (NTRS)

    Mardirossian, H.; Beri, A. C.; Doll, C. E.

    1990-01-01

    The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.

  14. Automation of orbit determination functions for National Aeronautics and Space Administration (NASA)-supported satellite missions

    NASA Technical Reports Server (NTRS)

    Mardirossian, H.; Heuerman, K.; Beri, A.; Samii, M. V.; Doll, C. E.

    1989-01-01

    The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.

  15. Post-hoc simulation study to adopt a computerized adaptive testing (CAT) for a Korean Medical License Examination.

    PubMed

    Seo, Dong Gi; Choi, Jeongwook

    2018-05-17

    Computerized adaptive testing (CAT) has been adopted in license examinations due to its test efficiency and accuracy. Much research on CAT has been published to demonstrate the efficiency and accuracy of measurement. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean medical license examination (KMLE). This study used a post-hoc (real data) simulation design. The item bank used in this study was designed with all items in a 2017 KMLE. All CAT algorithms for this study were implemented with the 'catR' package in the R program. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model. Modal a posteriori (MAP) or expected a posteriori (EAP) estimation provided more accurate estimates than MLE and WLE. Furthermore, maximum posterior weighted information (MPWI) or minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model was recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT, and based on such a study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
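The EAP scoring examined in this record is the posterior mean of ability given the response pattern. Below is a minimal sketch for the Rasch model with a standard-normal prior, using plain grid quadrature; operational implementations such as the 'catR' package use proper quadrature schemes and support richer models:

```python
import math

def eap_theta(responses, difficulties, n_quad=61):
    """Expected a posteriori (EAP) ability estimate for the Rasch model.

    `responses` are 0/1 item scores and `difficulties` the matching item
    b-parameters; the prior is N(0, 1), integrated on a grid over [-4, 4].
    """
    grid = [-4.0 + 8.0 * k / (n_quad - 1) for k in range(n_quad)]
    num = den = 0.0
    for t in grid:
        like = 1.0
        for u, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(t - b)))   # Rasch correct-response probability
            like *= p if u == 1 else 1.0 - p
        w = like * math.exp(-t * t / 2.0)          # unnormalized N(0,1) prior weight
        num += t * w
        den += w
    return num / den
```

A symmetric response pattern on items of equal difficulty should give an estimate at the prior mean, which makes a convenient sanity check.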

  16. Computational Planning in Facial Surgery.

    PubMed

    Zachow, Stefan

    2015-10-01

    This article reflects the research of the last two decades in computational planning for cranio-maxillofacial surgery. Model-guided and computer-assisted surgery planning has developed tremendously owing to ever-increasing computational capabilities. Simulators for education, planning, and training of surgery are often compared with flight simulators, where maneuvers are likewise rehearsed to reduce the risk of failure. Meanwhile, digital patient models can be derived from medical image data with astonishing accuracy and can thus serve for model surgery to derive a surgical template model that represents the envisaged result. Computerized surgical planning approaches, however, are often still explorative, meaning that a surgeon tries to find a therapeutic concept based on his or her expertise using computational tools that mimic real procedures. A future perspective for improved computerized planning is that surgical objectives will be generated algorithmically by employing mathematical modeling, simulation, and optimization techniques, so that planning systems act as intelligent decision support systems. Surgeons could still use the existing tools to vary the proposed approach, but would mainly focus on how to transfer objectives into reality. Such a development may result in a paradigm shift for future surgery planning. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  17. Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.

    2009-01-01

    To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled and that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX. It is concluded that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
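
For a tissue sampled by target points, the mass-averaged exposure quantity reduces to a mass-weighted mean over those points. The sketch below assumes per-point values and masses are already available; the names are illustrative, not taken from HZETRN:

```python
def mass_averaged(point_values, point_masses):
    """Mass-weighted average of an exposure quantity (e.g., dose)
    sampled at target points distributed through a tissue."""
    total_mass = sum(point_masses)
    return sum(v * m for v, m in zip(point_values, point_masses)) / total_mass

# Two points: equal masses give the plain mean; unequal masses
# weight the average toward the heavier point.
equal = mass_averaged([1.0, 3.0], [1.0, 1.0])
skewed = mass_averaged([1.0, 3.0], [3.0, 1.0])
```

Under-sampling, the issue identified for the earlier CAM/CAF point sets, corresponds to too few points for this weighted mean to converge on the true organ average.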

  18. Introducing minimum Fisher regularisation tomography to AXUV and soft x-ray diagnostic systems of the COMPASS tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mlynar, J.; Weinzettl, V.; Imrisek, M.

    2012-10-15

    The contribution focuses on plasma tomography via the minimum Fisher regularisation (MFR) algorithm applied to data from the recently commissioned tomographic diagnostics on the COMPASS tokamak. The MFR expertise is based on previous applications at the Joint European Torus (JET), as exemplified in a new case study of plasma position analyses based on JET soft x-ray (SXR) tomographic reconstruction. Subsequent application of the MFR algorithm to COMPASS data from cameras with absolute extreme ultraviolet (AXUV) photodiodes disclosed a peaked radiating region near the limiter. Moreover, its time evolution indicates transient plasma edge cooling following a radial plasma shift. In the SXR data, MFR demonstrated that high-resolution plasma positioning independent of the magnetic diagnostics would be possible provided that a proper calibration of the cameras on an x-ray source is undertaken.
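
At its core, regularised tomography of this kind solves a penalised least-squares problem. The sketch below shows one linearised step with a generic smoothing matrix R standing in for the Fisher-information weighting that MFR rebuilds from the current solution on each iteration; the matrix names and test geometry are assumptions:

```python
import numpy as np

def regularised_step(T, f, alpha, R):
    """One linearised step of regularised tomographic inversion:
    minimise ||T g - f||^2 + alpha * g^T R g, which yields the
    normal equations (T^T T + alpha R) g = T^T f. In minimum Fisher
    regularisation, R is recomputed from the current emissivity
    estimate and the step is iterated to convergence."""
    return np.linalg.solve(T.T @ T + alpha * R, T.T @ f)

# Sanity check: with a well-conditioned geometry and a tiny penalty,
# one step essentially reproduces the true emissivity.
T = np.eye(3)
f = np.array([1.0, 2.0, 3.0])
g = regularised_step(T, f, 1e-9, np.eye(3))
```

In a real tokamak setting T encodes the chord geometry of the cameras, which is why the camera calibration mentioned above directly limits reconstruction quality.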

  19. A wave model of refraction of laser beams with a discrete change in intensity in their cross section and their application for diagnostics of extended nonstationary phase objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskovskaya, I L

    2015-08-31

    A beam model with a discrete change in the cross-sectional intensity is proposed to describe refraction of laser beams formed on the basis of diffractive optical elements. In calculating the wave field of beams of this class under conditions of strong refraction, in contrast to the traditional asymptotics of geometric optics, which assumes a transition to infinite limits of integration and an analytical solution, it is proposed to calculate the integral in the vicinity of stationary points. This approach allows the development of a fast algorithm for correct calculation of the wave field of the laser beams that are employed in probing and diagnostics of extended optically inhomogeneous media. Examples of the algorithm's application to diagnostics of extended nonstationary objects in liquid are presented.

  20. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: methods of a decision-maker-researcher partnership systematic review.

    PubMed

    Haynes, R Brian; Wilczynski, Nancy L

    2010-02-05

    Computerized clinical decision support systems are information technology-based systems designed to improve clinical decision-making. As with any healthcare intervention with claims to improve process of care or patient outcomes, decision support systems should be rigorously evaluated before widespread dissemination into clinical practice. Engaging healthcare providers and managers in the review process may facilitate knowledge translation and uptake. The objective of this research was to form a partnership of healthcare providers, managers, and researchers to review randomized controlled trials assessing the effects of computerized decision support for six clinical application areas: primary preventive care, therapeutic drug monitoring and dosing, drug prescribing, chronic disease management, diagnostic test ordering and interpretation, and acute care management; and to identify study characteristics that predict benefit. The review was undertaken by the Health Information Research Unit, McMaster University, in partnership with Hamilton Health Sciences, the Hamilton, Niagara, Haldimand, and Brant Local Health Integration Network, and pertinent healthcare service teams. Following agreement on information needs and interests with decision-makers, our earlier systematic review was updated by searching Medline, EMBASE, EBM Review databases, and Inspec, and reviewing reference lists through 6 January 2010. Data extraction items were expanded according to input from decision-makers. Authors of primary studies were contacted to confirm data and to provide additional information. Eligible trials were organized according to clinical area of application. We included randomized controlled trials that evaluated the effect on practitioner performance or patient outcomes of patient care provided with a computerized clinical decision support system compared with patient care without such a system. 
Data will be summarized using descriptive summary measures, including proportions for categorical variables and means for continuous variables. Univariable and multivariable logistic regression models will be used to investigate associations between outcomes of interest and study specific covariates. When reporting results from individual studies, we will cite the measures of association and p-values reported in the studies. If appropriate for groups of studies with similar features, we will conduct meta-analyses. A decision-maker-researcher partnership provides a model for systematic reviews that may foster knowledge translation and uptake.

  1. Algorithms for the diagnosis and treatment of restless legs syndrome in primary care

    PubMed Central

    2011-01-01

    Background Restless legs syndrome (RLS) is a neurological disorder with a lifetime prevalence of 3-10% in European studies. However, the diagnosis of RLS in primary care remains low and mistreatment is common. Methods The current article reports on the considerations of RLS diagnosis and management that were made during a European Restless Legs Syndrome Study Group (EURLSSG)-sponsored task force consisting of experts and primary care practitioners. The task force sought to develop a better understanding of barriers to diagnosis in primary care practice and to overcome these barriers with diagnostic and treatment algorithms. Results The barriers to diagnosis identified by the task force include the presentation of symptoms, the language used to describe them, the actual term "restless legs syndrome" and difficulties in the differential diagnosis of RLS. Conclusion The EURLSSG task force reached a consensus and agreed on the diagnostic and treatment algorithms published here. PMID:21352569

  2. The Manchester Uveitis Clinic: the first 3000 patients--epidemiology and casemix.

    PubMed

    Jones, Nicholas P

    2015-04-01

    To demonstrate the demography, anatomical, and diagnostic classification of patients with uveitis attending the Manchester Uveitis Clinic (MUC), a specialist uveitis clinic in the northwest of England, UK. Retrospective retrieval of data on a computerized database incorporating all new referrals to MUC from 1991 to 2013. A total of 3000 new patients with uveitis were seen during a 22-year period. The anatomical types seen were anterior 46%; intermediate 11.1%; posterior 21.8%; and panuveitis 21.1%. The most common diagnoses were Fuchs heterochromic uveitis (11.5% of total), sarcoidosis-related uveitis (9.7%), idiopathic intermediate uveitis (7.9%), idiopathic acute anterior uveitis (7.0%), and toxoplasmosis (6.9%). Syphilis and tuberculosis-associated uveitis increased markedly in frequency during the study period. The uveitis casemix in this region reflects a predominantly white Caucasian population in a temperate country, but with changing characteristics owing to increasing immigration, enhanced diagnostic techniques, changes in referral pattern, and some real changes in disease incidence.

  3. Case-Deletion Diagnostics for Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Lu, Bin

    2003-01-01

    In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. A one-step pseudo-approximation is proposed to reduce the…

  4. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
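
The information-theoretic flavour of such test sequencing can be illustrated with a crisp single-fault simplification: greedily pick the test that splits the current suspect set most evenly, then prune the set on the observed outcome. The data structures and the halving heuristic here are illustrative only; the paper's multiple-fault algorithms are considerably more involved:

```python
def choose_test(suspects, tests):
    """Pick the test whose pass/fail outcome splits the suspect set
    most evenly -- a one-step information (halving) heuristic."""
    def imbalance(t):
        detected = sum(1 for s in suspects if s in tests[t])
        return abs(2 * detected - len(suspects))   # 0 = perfect halving
    return min(tests, key=imbalance)

def isolate(suspects, tests, test_fails):
    """Apply tests greedily until the suspect set stops shrinking."""
    suspects, remaining = set(suspects), dict(tests)
    while len(suspects) > 1 and remaining:
        t = choose_test(suspects, remaining)
        covered = remaining.pop(t)
        if test_fails(t):          # a failing test implicates its coverage
            suspects &= covered
        else:                      # a passing test exonerates its coverage
            suspects -= covered
    return suspects

# Toy system: three tests with known fault coverage; the true fault is 3.
coverage = {'t1': {1, 2}, 't2': {1, 3}, 't3': {2, 3, 4}}
diagnosed = isolate({1, 2, 3, 4}, coverage, lambda t: 3 in coverage[t])
```

Encoding the strategy as a digraph rather than a tree, as the paper proposes, lets identical suspect sets reached along different outcome paths share a single node, which is where the storage savings come from.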

  5. When the bell tolls on Bell's palsy: finding occult malignancy in acute-onset facial paralysis.

    PubMed

    Quesnel, Alicia M; Lindsay, Robin W; Hadlock, Tessa A

    2010-01-01

    This study reports 4 cases of occult parotid malignancy presenting with sudden-onset facial paralysis to demonstrate that failure to regain tone 6 months after onset distinguishes these patients from Bell's palsy patients with delayed recovery and to propose a diagnostic algorithm for this subset of patients. A case series of 4 patients with occult parotid malignancies presenting with acute-onset unilateral facial paralysis is reported. Initial imaging on all 4 patients did not demonstrate a parotid mass. Diagnostic delays ranged from 7 to 36 months from time of onset of facial paralysis to time of diagnosis of parotid malignancy. Additional physical examination findings, especially failure to regain tone, as well as properly protocolled radiologic studies reviewed with dedicated head and neck radiologists, were helpful in arriving at the diagnosis. An algorithm to minimize diagnostic delays in this subset of acute facial paralysis patients is presented. Careful attention to facial tone, in addition to movement, is important in the diagnostic evaluation of acute-onset facial paralysis. Copyright 2010 Elsevier Inc. All rights reserved.

  6. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
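
The crisp LVQ1 rule that these soft, ordered-weighted variants generalize moves the winning prototype toward inputs of its own class and away from inputs of other classes. The data, learning rate, and epoch count below are illustrative assumptions:

```python
import numpy as np

def lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """Crisp LVQ1 update: attract the winning prototype toward inputs
    of its own class, repel it from inputs of other classes. (The soft
    algorithms in the paper replace this winner-take-all step with
    ordered weighted aggregation over all prototypes.)"""
    P = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for x, label in zip(np.asarray(X, dtype=float), y):
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # winner
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (x - P[j])
    return P

# Two well-separated 1-D classes; the prototypes drift toward the
# class clusters they represent.
X = [[0.0], [0.2], [4.8], [5.0]]
y = [0, 0, 1, 1]
P = lvq1(X, y, prototypes=[[1.0], [4.0]], proto_labels=[0, 1])
```

In the MR segmentation application, each prototype plays the role of a tissue-class centroid in feature space.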

  7. Rapid and Accurate Behavioral Health Diagnostic Screening: Initial Validation Study of a Web-Based, Self-Report Tool (the SAGE-SR).

    PubMed

    Brodey, Benjamin; Purcell, Susan E; Rhea, Karen; Maier, Philip; First, Michael; Zweede, Lisa; Sinisterra, Manuela; Nunn, M Brad; Austin, Marie-Paule; Brodey, Inger S

    2018-03-23

    The Structured Clinical Interview for DSM (SCID) is considered the gold standard assessment for accurate, reliable psychiatric diagnoses; however, because of its length, complexity, and training required, the SCID is rarely used outside of research. This paper aims to describe the development and initial validation of a Web-based, self-report screening instrument (the Screening Assessment for Guiding Evaluation-Self-Report, SAGE-SR) based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the SCID-5-Clinician Version (CV) intended to make accurate, broad-based behavioral health diagnostic screening more accessible within clinical care. First, study staff drafted approximately 1200 self-report items representing individual granular symptoms in the diagnostic criteria for the 8 primary SCID-CV modules. An expert panel iteratively reviewed, critiqued, and revised items. The resulting items were iteratively administered and revised through 3 rounds of cognitive interviewing with community mental health center participants. In the first 2 rounds, the SCID was also administered to participants to directly compare their Likert self-report and SCID responses. A second expert panel evaluated the final pool of items from cognitive interviewing and criteria in the DSM-5 to construct the SAGE-SR, a computerized adaptive instrument that uses branching logic from a screener section to administer appropriate follow-up questions to refine the differential diagnoses. The SAGE-SR was administered to healthy controls and outpatient mental health clinic clients to assess test duration and test-retest reliability. Cutoff scores for screening into follow-up diagnostic sections and criteria for inclusion of diagnoses in the differential diagnosis were evaluated. 
The expert panel reduced the initial 1200 test items to 664 items that panel members agreed collectively represented the SCID items from the 8 targeted modules and DSM criteria for the covered diagnoses. These 664 items were iteratively submitted to 3 rounds of cognitive interviewing with 50 community mental health center participants; the expert panel reviewed session summaries and agreed on a final set of 661 clear and concise self-report items representing the desired criteria in the DSM-5. The SAGE-SR constructed from this item pool took an average of 14 min to complete in a nonclinical sample versus 24 min in a clinical sample. Responses to individual items can be combined to generate DSM criteria endorsements and differential diagnoses, as well as provide indices of individual symptom severity. Preliminary measures of test-retest reliability in a small, nonclinical sample were promising, with good to excellent reliability for screener items in 11 of 13 diagnostic screening modules (intraclass correlation coefficient [ICC] or kappa coefficients ranging from .60 to .90), with mania achieving fair test-retest reliability (ICC=.50) and other substance use endorsed too infrequently for analysis. The SAGE-SR is a computerized adaptive self-report instrument designed to provide rigorous differential diagnostic information to clinicians. ©Benjamin Brodey, Susan E Purcell, Karen Rhea, Philip Maier, Michael First, Lisa Zweede, Manuela Sinisterra, M Brad Nunn, Marie-Paule Austin, Inger S Brodey. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 23.03.2018.

  8. A novel computer-assisted image analysis of [123I]β-CIT SPECT images improves the diagnostic accuracy of parkinsonian disorders.

    PubMed

    Goebel, Georg; Seppi, Klaus; Donnemiller, Eveline; Warwitz, Boris; Wenning, Gregor K; Virgolini, Irene; Poewe, Werner; Scherfler, Christoph

    2011-04-01

    The purpose of this study was to develop an observer-independent algorithm for the correct classification of dopamine transporter SPECT images as Parkinson's disease (PD), multiple system atrophy parkinson variant (MSA-P), progressive supranuclear palsy (PSP) or normal. A total of 60 subjects with clinically probable PD (n = 15), MSA-P (n = 15) and PSP (n = 15), and 15 age-matched healthy volunteers, were studied with the dopamine transporter ligand [(123)I]β-CIT. Parametric images of the specific-to-nondisplaceable equilibrium partition coefficient (BP(ND)) were generated. Following a voxel-wise ANOVA, cut-off values were calculated from the voxel values of the resulting six post-hoc t-test maps. The percentages of the volume of an individual BP(ND) image remaining below and above the cut-off values were determined. The higher percentage of image volume from all six cut-off matrices was used to classify an individual's image. For validation, the algorithm was compared to a conventional region of interest analysis. The predictive diagnostic accuracy of the algorithm in the correct assignment of a [(123)I]β-CIT SPECT image was 83.3% and increased to 93.3% on merging the MSA-P and PSP groups. In contrast the multinomial logistic regression of mean region of interest values of the caudate, putamen and midbrain revealed a diagnostic accuracy of 71.7%. In contrast to a rater-driven approach, this novel method was superior in classifying [(123)I]β-CIT-SPECT images as one of four diagnostic entities. In combination with the investigator-driven visual assessment of SPECT images, this clinical decision support tool would help to improve the diagnostic yield of [(123)I]β-CIT SPECT in patients presenting with parkinsonism at their initial visit.

  9. Item response theory - A first approach

    NASA Astrophysics Data System (ADS)

    Nunes, Sandra; Oliveira, Teresa; Oliveira, Amílcar

    2017-07-01

    The Item Response Theory (IRT) has become one of the most popular scoring frameworks for measurement data, frequently used in computerized adaptive testing, cognitively diagnostic assessment and test equating. According to Andrade et al. (2000), IRT can be defined as a set of mathematical models (Item Response Models - IRM) constructed to represent the probability of an individual giving the right answer to an item of a particular test. The number of Item Response Models available for measurement analysis has increased considerably in the last fifteen years due to increasing computer power and due to a demand for accuracy and more meaningful inferences grounded in complex data. The developments in modeling with Item Response Theory were related to developments in estimation theory, most remarkably Bayesian estimation with Markov chain Monte Carlo algorithms (Patz & Junker, 1999). The popularity of Item Response Theory has also prompted numerous overviews in books and journals, and many connections between IRT and other statistical estimation procedures, such as factor analysis and structural equation modeling, have been made repeatedly (van der Linden & Hambleton, 1997). As stated before, Item Response Theory covers a variety of measurement models, ranging from basic one-dimensional models for dichotomously and polytomously scored items and their multidimensional analogues to models that incorporate information about cognitive sub-processes which influence the overall item response process. The aim of this work is to introduce the main concepts associated with one-dimensional models of Item Response Theory, to specify the logistic models with one, two and three parameters, to discuss some properties of these models and to present the main estimation procedures.
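
The one-, two-, and three-parameter logistic models mentioned above nest inside a single item response function; a minimal sketch, with parameters named after the usual a (discrimination), b (difficulty), c (guessing) convention:

```python
import math

def p_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Three-parameter logistic (3PL) item response function:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))).
    Setting c = 0 gives the 2PL; a = 1 and c = 0 give the 1PL
    (Rasch) model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability sits midway between the guessing floor c and 1, and as theta decreases the curve approaches c rather than 0, which is the 3PL's distinguishing feature.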

  10. Telemedicine in Cardiology - Perspectives in Bosnia and Herzegovina

    PubMed Central

    Naser, Nabil; Tandir, Salih; Begic, Edin

    2017-01-01

    Introduction: The aim of this article was to present the perspectives of telemedicine in the field of cardiology in Bosnia and Herzegovina. Material and methods: The article is descriptive in character and presents a review of the literature. Results: Information technology can be applied in the education of students, from the basic medical sciences up to the clinical subjects. Information technologies are used for ECG analysis and 24-h ECG Holter monitoring, which detects different rhythm disorders. Software packages for electrocardiogram analysis allow recordings to be shared and interpreted via mobile phone, enabling consultations between the ambulance team, specialists, and experienced subspecialists across different illnesses and cities. Image segmentation algorithms are significant for the quantization and diagnosis of anatomical and pathological structures, and 3D representation has an important role in education, topographic and clinical anatomy, radiology, and pathology, as well as in clinical cardiology itself, especially in the identification of coronary arteries in multislice computerized coronary angiography. Interactive video consultations with subspecialists from the country and the region in adult cardiology, adult interventional cardiology, cardiovascular surgery, and pediatric invasive and non-invasive cardiology enable better access to heart specialists and subspecialists, accurate diagnosis, better treatment, reduced mortality, and a significant reduction in costs. Conclusion: Telemedicine is entering Bosnia and Herzegovina in slow steps, but the potential exists. It is necessary to educate the medical staff and to provide an attractive environment for software engineers. Investing in infrastructure and equipment is imperative, as is a positive climate for its implementation. PMID:29284918

  11. Drug induced liver injury with analysis of alternative causes as confounding variables.

    PubMed

    Teschke, Rolf; Danan, Gaby

    2018-04-01

    Drug-induced liver injury (DILI) is rare compared to the worldwide frequent acute or chronic liver diseases. Therefore, patients included in series of suspected DILI are at high risk of not having DILI, whereby alternative causes may confound the DILI diagnosis. The aim of this review is to evaluate published case series of DILI for alternative causes. Relevant studies were identified using a computerized search of the Medline database for publications from 1993 through 30 October 2017. We used the following terms: drug hepatotoxicity, drug induced liver injury, hepatotoxic drugs combined with diagnosis, causality assessment and alternative causes. Alternative causes as variables confounding the DILI diagnosis emerged in 22 published DILI case series, ranging from 4 to 47%. Among 13 335 cases of suspected DILI, alternative causes were found to be more likely in 4555 patients (34.2%), suggesting that the suspected DILI was probably not DILI. Biliary diseases such as biliary obstruction, cholangitis, choledocholithiasis, primary biliary cholangitis and primary sclerosing cholangitis were among the most missed diagnoses. Alternative causes included hepatitis B, C and E, cytomegalovirus, Epstein-Barr virus, ischemic hepatitis, cardiac hepatopathy, autoimmune hepatitis, nonalcoholic fatty liver disease, nonalcoholic steatohepatitis, and alcoholic liver disease. In more than one-third of published global DILI case series, alternative causes as published in these reports confounded the DILI diagnosis. In the future, published DILI case series should include only patients with secured DILI diagnosis, preferentially established by prospective use of scored items provided by robust diagnostic algorithms such as the updated Roussel Uclaf causality assessment method. © 2018 The British Pharmacological Society.

  12. Computational Algorithmization: Limitations in Problem Solving Skills in Computational Sciences Majors at University of Oriente

    ERIC Educational Resources Information Center

    Castillo, Antonio S.; Berenguer, Isabel A.; Sánchez, Alexander G.; Álvarez, Tomás R. R.

    2017-01-01

    This paper analyzes the results of a diagnostic study carried out with second year students of the computational sciences majors at University of Oriente, Cuba, to determine the limitations that they present in computational algorithmization. An exploratory research was developed using quantitative and qualitative methods. The results allowed…

  13. Diagnostic Accuracy of the Veteran Affairs' Traumatic Brain Injury Screen.

    PubMed

    Louise Bender Pape, Theresa; Smith, Bridget; Babcock-Parziale, Judith; Evans, Charlesnika T; Herrold, Amy A; Phipps Maieritsch, Kelly; High, Walter M

    2018-01-31

    To comprehensively estimate the diagnostic accuracy and reliability of the Department of Veterans Affairs (VA) Traumatic Brain Injury (TBI) Clinical Reminder Screen (TCRS). Cross-sectional, prospective, observational study using the Standards for Reporting of Diagnostic Accuracy criteria. Three VA Polytrauma Network Sites. Operation Iraqi Freedom, Operation Enduring Freedom veterans (N=433). TCRS, Comprehensive TBI Evaluation, Structured TBI Diagnostic Interview, Symptom Attribution and Classification Algorithm, and Clinician-Administered Posttraumatic Stress Disorder (PTSD) Scale. Forty-five percent of veterans screened positive on the TCRS for TBI. For detecting occurrence of historical TBI, the TCRS had a sensitivity of .56 to .74, a specificity of .63 to .93, a positive predictive value (PPV) of 25% to 45%, a negative predictive value (NPV) of 91% to 94%, and a diagnostic odds ratio (DOR) of 4 to 13. For accuracy of attributing active symptoms to the TBI, the TCRS had a sensitivity of .64 to .87, a specificity of .59 to .89, a PPV of 26% to 32%, an NPV of 92% to 95%, and a DOR of 6 to 9. The sensitivity was higher for veterans with PTSD (.80-.86) relative to veterans without PTSD (.57-.82). The specificity, however, was higher among veterans without PTSD (.75-.81) relative to veterans with PTSD (.36-.49). All indices of diagnostic accuracy changed when participants with questionably valid (QV) test profiles were eliminated from analyses. The utility of the TCRS to screen for mild TBI (mTBI) depends on the stringency of the diagnostic reference standard to which it is being compared, the presence/absence of PTSD, and QV test profiles. Further development, validation, and use of reproducible diagnostic algorithms for symptom attribution after possible mTBI would improve diagnostic accuracy. Published by Elsevier Inc.
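
The indices reported above (sensitivity, specificity, PPV, NPV, DOR) all derive from a 2x2 table of screen result versus reference standard; a sketch with hypothetical counts, not the study's data:

```python
def screen_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and diagnostic
    odds ratio (DOR) from a 2x2 screen-versus-reference table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "dor": (tp * tn) / (fp * fn),    # diagnostic odds ratio
    }

# Hypothetical counts, chosen only to show the arithmetic.
m = screen_metrics(tp=70, fp=30, fn=30, tn=270)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence in the screened sample, which is one reason the TCRS shows high NPV but modest PPV.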

  14. Towards a Standard Psychometric Diagnostic Interview for Subjects at Ultra High Risk of Psychosis: CAARMS versus SIPS

    PubMed Central

    Fusar-Poli, P.; Cappucciati, M.; Rutigliano, G.; Lee, T. Y.; Beverly, Q.; Bonoldi, I.; Lelli, J.; Kaar, S. J.; Gago, E.; Rocchetti, M.; Patel, R.; Bhavsar, V.; Tognin, S.; Badger, S.; Calem, M.; Lim, K.; Kwon, J. S.; Perez, J.; McGuire, P.

    2016-01-01

    Background. Several psychometric instruments are available for the diagnostic interview of subjects at ultra high risk (UHR) of psychosis. Their diagnostic comparability is unknown. Methods. All referrals to the OASIS (London) or CAMEO (Cambridgeshire) UHR services from May 13 to Dec 14 were interviewed for a UHR state using both the CAARMS 12/2006 and the SIPS 5.0. Percent overall agreement, kappa, the McNemar-Bowker χ² test, equipercentile methods, and residual analyses were used to investigate diagnostic outcomes and symptom severity or frequency. A conversion algorithm (CONVERT) was validated in an independent UHR sample from the Seoul Youth Clinic (Seoul). Results. There was overall substantial CAARMS-versus-SIPS agreement in the identification of UHR subjects (n = 212, percent overall agreement = 86%; kappa = 0.781, 95% CI from 0.684 to 0.878; McNemar-Bowker test = 0.069), with the exception of the brief limited intermittent psychotic symptoms (BLIPS) subgroup. An equipercentile-linking table linked symptom severity and frequency across the CAARMS and SIPS. The conversion algorithm was validated in 93 UHR subjects, showing excellent diagnostic accuracy (CAARMS to SIPS: ROC area 0.929; SIPS to CAARMS: ROC area 0.903). Conclusions. This study provides initial comparability data between CAARMS and SIPS and will inform ongoing multicentre studies and clinical guidelines for the UHR psychometric diagnostic interview. PMID:27314005
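
The agreement statistics used here, percent overall agreement and Cohen's kappa, can be computed directly from two raters' categorical judgments; a sketch with illustrative data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters corrected for the
    agreement expected by chance given each rater's marginal rates."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n)        # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Perfect agreement gives kappa = 1; agreement exactly at chance
# level gives kappa = 0.
k_perfect = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0])
k_chance = cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0])
```

This chance correction is why a kappa of 0.781 is reported as "substantial" even though raw overall agreement is 86%.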

  15. [Cost-effectiveness of the deep vein thrombosis diagnosis process in primary care].

    PubMed

    Fuentes Camps, Eva; Luis del Val García, José; Bellmunt Montoya, Sergi; Hmimina Hmimina, Sara; Gómez Jabalera, Efren; Muñoz Pérez, Miguel Ángel

    2016-04-01

    To analyse the cost-effectiveness of applying diagnostic algorithms in patients with a first episode of suspected deep vein thrombosis (DVT) in Primary Care, compared with systematic referral to specialised centres. Observational, cross-sectional, analytical study. Patients from hospital emergency rooms referred from Primary Care to complete clinical evaluation and diagnosis. A total of 138 patients with symptoms of a first episode of DVT were recruited; 22 were excluded (no Primary Care report, symptoms for more than 30 days, anticoagulant treatment, and previous DVT). Of the 116 patients finally included, 61% were women, and the mean age was 71 years. Variables included the Wells and Oudega clinical probability scales, D-dimer (portable and hospital), Doppler ultrasound, and the direct costs generated by the three algorithms analysed: systematic referral of all patients, referral according to the Wells scale, and referral according to the Oudega scale. DVT was confirmed in 18.9%. The two clinical probability scales showed a sensitivity of 100% (95% CI: 85.1 to 100) and a specificity of about 40%. With the application of the scales, one third of all referrals to hospital emergency rooms could have been avoided (P<.001). The diagnostic cost could have been reduced by € 8,620 according to Oudega and € 9,741 according to Wells, per 100 patients visited. The application of diagnostic algorithms when DVT is suspected could lead to better diagnostic management by physicians and a more cost-effective process. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  16. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review

    PubMed Central

    van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M

    2015-01-01

    Objective Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. 
If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative. PMID:26316651
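
    The review's finding that positive predictive values are "modest at best" follows partly from arithmetic: at the low prevalence typical of HAI, even a fairly specific algorithm produces mostly false positives. A minimal sketch with illustrative numbers (not drawn from any included study):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from test characteristics and prevalence."""
    tp = sensitivity * prevalence              # true positives per patient screened
    fp = (1 - specificity) * (1 - prevalence)  # false positives per patient screened
    return tp / (tp + fp)

# Plausible-looking algorithm, 2% HAI prevalence: most flags are still false alarms
print(round(ppv(0.80, 0.95, 0.02), 3))
```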

  17. The diagnosis of urinary tract infections in young children (DUTY): protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness.

    PubMed

    Downing, Harriet; Thomas-Jones, Emma; Gal, Micaela; Waldron, Cherry-Ann; Sterne, Jonathan; Hollingworth, William; Hood, Kerenza; Delaney, Brendan; Little, Paul; Howe, Robin; Wootton, Mandy; Macgowan, Alastair; Butler, Christopher C; Hay, Alastair D

    2012-07-19

    Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10³, but usually >10⁵ CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result.
We will then use economic evaluation to compare the cost effectiveness of the candidate prediction rules. This study will provide novel, clinically important information on the diagnostic features of childhood UTI and the cost effectiveness of a validated prediction rule, to help primary care clinicians improve the efficiency of their diagnostic strategy for UTI in young children.
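
    The derivation step described above, logistic regression on candidate clinical predictors, can be sketched in miniature. The predictors and data below are invented for illustration and are not drawn from the DUTY dataset; the fit uses plain stochastic gradient descent rather than any particular statistics package.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Each row: hypothetical binary predictors (fever, dysuria, nitrite-positive dipstick);
# label: urine culture positive. Invented toy data, not the DUTY dataset.
X = [(1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 1),
     (1, 0, 0), (0, 1, 0), (0, 0, 0), (1, 1, 0)]
y = [1, 1, 1, 1, 0, 0, 0, 0]

w = [0.0, 0.0, 0.0]  # coefficients for the three predictors
b = 0.0              # intercept
lr = 0.5
for _ in range(2000):  # stochastic gradient descent on the log-loss
    for xi, yi in zip(X, y):
        p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
        b += lr * (yi - p)
        w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]

# In this toy data the dipstick result separates the outcomes,
# so its coefficient dominates the fitted model.
print(w[2] > max(w[0], w[1]))  # True
```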

  18. The diagnosis of urinary tract infections in young children (DUTY): protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness

    PubMed Central

    2012-01-01

    Background Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. Methods/design DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10³, but usually >10⁵ CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result.
We will then use economic evaluation to compare the cost effectiveness of the candidate prediction rules. Discussion This study will provide novel, clinically important information on the diagnostic features of childhood UTI and the cost effectiveness of a validated prediction rule, to help primary care clinicians improve the efficiency of their diagnostic strategy for UTI in young children. PMID:22812651

  19. Deformable structure registration of bladder through surface mapping.

    PubMed

    Xiong, Li; Viswanathan, Akila; Stewart, Alexandra J; Haker, Steven; Tempany, Clare M; Chin, Lee M; Cormack, Robert A

    2006-06-01

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. 
We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.
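
    The affine baseline against which the deformable registration is compared can be illustrated in two dimensions, where three landmark pairs determine an affine map exactly. This is a simplified sketch with made-up coordinates (the study registers 3D bladder surfaces from CT, which this does not reproduce):

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system (no pivoting safeguards)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[k][3] for k in range(3)]

def fit_affine_2d(src, dst):
    """Affine map x' = a*x + b*y + c, y' = d*x + e*y + f from three landmark pairs."""
    A = [[x, y, 1.0] for x, y in src]
    coef_x = solve3(A, [p[0] for p in dst])
    coef_y = solve3(A, [p[1] for p in dst])
    return coef_x, coef_y

def apply_affine(params, pt):
    (a, b, c), (d, e, f) = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)

# Hypothetical control-landmark pairs between two treatment fractions (not patient data)
src = [(1.0, 1.0), (10.0, 2.0), (3.0, 8.0)]
dst = [(1.2, 0.9), (11.0, 2.5), (3.4, 9.1)]
params = fit_affine_2d(src, dst)
# The fitted map reproduces the landmarks exactly; accuracy is then judged
# on independent verification points, as in the study.
print(tuple(round(v, 6) for v in apply_affine(params, src[2])))
```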

  20. Deformable structure registration of bladder through surface mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Xiong; Viswanathan, Akila; Stewart, Alexandra J.

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration.
We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.
