Olsen, Nikki S; Shorrock, Steven T
2010-03-01
This article evaluates an adaptation of the human factors analysis and classification system (HFACS) adopted by the Australian Defence Force (ADF) to classify factors that contribute to incidents. Three field studies were undertaken to assess the reliability of HFACS-ADF in the context of a particular ADF air traffic control (ATC) unit. Study one assessed inter-coder consensus among many coders for two incident reports. Study two assessed inter-coder consensus between one participant and the original analysts for a large set of incident reports. Study three tested intra-coder consistency for four participants over many months. In all studies, agreement was low at the level of both fine-level HFACS-ADF descriptors and high-level HFACS-type categories. A survey of participants suggested that they were not confident that HFACS-ADF could be used consistently. The three field studies therefore suggest that the ADF adaptation of HFACS is unreliable for incident analysis at the ATC unit level, and may be invalid in this context. Several reasons for these results are proposed, associated with the underlying HFACS model and categories, the HFACS-ADF adaptations, the context of use, and the conduct of the studies.
Inter-Coder Agreement in One-to-Many Classification: Fuzzy Kappa.
Kirilenko, Andrei P; Stepchenkova, Svetlana
2016-01-01
Content analysis involves classification of textual, visual, or audio data. Inter-coder agreement is estimated by having two or more coders classify the same data units and then comparing their results. Existing methods of agreement estimation, e.g., Cohen's kappa, require that coders place each unit of content into one and only one category (one-to-one coding) from a pre-established set of categories. However, in certain data domains (e.g., maps, photographs, databases of texts and images), this requirement is overly restrictive. The restriction could be lifted, provided there is a measure of inter-coder agreement for the one-to-many protocol. Building on existing approaches to one-to-many coding in geography and biomedicine, such a measure, fuzzy kappa, an extension of Cohen's kappa, is proposed. It is argued that the measure is especially suited to domains where the holistic reasoning of human coders is used to describe the data and assess the meaning of communication.
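The kappa statistic discussed in this abstract can be sketched briefly. Below is a minimal illustration of Cohen's kappa for the one-to-one protocol, plus a per-unit Jaccard overlap for the one-to-many protocol; the overlap measure is an intuition-building stand-in only, not the chance-corrected fuzzy kappa the authors derive.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders under the one-to-one protocol."""
    n = len(codes_a)
    # observed agreement: share of units both coders placed in the same category
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # expected chance agreement from each coder's marginal category frequencies
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (po - pe) / (1 - pe)

def mean_set_overlap(sets_a, sets_b):
    """Mean per-unit Jaccard overlap for one-to-many coding.
    NOTE: a simple stand-in, not the paper's chance-corrected fuzzy kappa."""
    return sum(len(a & b) / len(a | b) for a, b in zip(sets_a, sets_b)) / len(sets_a)
```

For example, two coders agreeing on 3 of 4 units, with half of that agreement attributable to chance, yields kappa = 0.5.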
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and many contributions have aimed at increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
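The Golomb-Rice stage mentioned above (BFC+GR) can be illustrated with a minimal sketch; the bit-string representation and the fixed parameter k are simplifications for clarity, not the paper's implementation.

```python
def rice_encode(n, k):
    """Golomb-Rice code for non-negative integer n with parameter k:
    quotient q = n >> k in unary ('1' * q then a '0'), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Invert rice_encode for a single codeword."""
    q = bits.index("0")                          # unary part ends at the first '0'
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For instance, n = 9 with k = 2 splits into quotient 2 and remainder 1, giving the codeword "11001".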
A Subband Coding Method for HDTV
NASA Technical Reports Server (NTRS)
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
Reliability in Cross-National Content Analysis.
ERIC Educational Resources Information Center
Peter, Jochen; Lauf, Edmund
2002-01-01
Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…
Reliability of cause of death coding: an international comparison.
Antini, Carmen; Rajs, Danuta; Muñoz-Quezada, María Teresa; Mondaca, Boris Andrés Lucero; Heiss, Gerardo
2015-07-01
This study evaluates the agreement of nosologic coding of cardiovascular causes of death between a Chilean coder and one in the United States, in a stratified random sample of death certificates of persons aged ≥ 60, issued in 2008 in the Valparaíso and Metropolitan regions, Chile. All causes of death were converted to ICD-10 codes in parallel by both coders. Concordance was analyzed with inter-coder agreement and Cohen's kappa coefficient, by level of specification of the ICD-10 code, for the underlying cause and for all causes of death. Inter-coder agreement was 76.4% for all causes of death and 80.6% for the underlying cause (agreement at the four-digit level), with differences by level of specification of the ICD-10 code, by line of the death certificate, and by number of causes of death per certificate. Cohen's kappa coefficient was 0.76 (95%CI: 0.68-0.84) for the underlying cause and 0.75 (95%CI: 0.74-0.77) for all causes of death. In conclusion, cause of death coding and inter-coder agreement for cardiovascular diseases in two regions of Chile are comparable to an external benchmark and to reports from other countries.
Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C
2014-04-01
To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and non-clinical coder. Assessment of intercoder agreement. Community Australian football. 1082 standardised injury surveillance records. Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (biomechanist). The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-digit OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% were because one coder assigned a non-specific 'X' code. The sports physician code was deemed correct in 53% of cases and the epidemiologist in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Sports injury research requires accurate identification and classification of specific injuries and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights the fact that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise.
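The character-by-character agreement reported for the hierarchical OSICS-10 codes (95% on the first character, 86% on the first two, 46% on all four) can be computed with a short sketch; the code strings in the example are made up for illustration, not real OSICS-10 codes.

```python
def prefix_agreement(codes_a, codes_b, k):
    """Share of records whose hierarchical codes match on the first k characters.

    Agreement typically falls as k grows, since deeper characters encode
    finer diagnostic distinctions that are harder to assign consistently."""
    pairs = list(zip(codes_a, codes_b))
    return sum(a[:k] == b[:k] for a, b in pairs) / len(pairs)
```

Usage on three hypothetical record pairs: agreement is 2/3 at one character but only 1/3 at the full four characters.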
ERIC Educational Resources Information Center
Kang, Namjun
If content analysis is to satisfy the requirement of objectivity, measures and procedures must be reliable. Reliability is usually measured by the proportion of agreement of all categories identically coded by different coders. For such data to be empirically meaningful, a high degree of inter-coder reliability must be demonstrated. Researchers in…
Developing a digital photography-based method for dietary analysis in self-serve dining settings.
Christoph, Mary J; Loman, Brett R; Ellison, Brenna
2017-07-01
Current population-based methods for assessing dietary intake, including food frequency questionnaires, food diaries, and 24-h dietary recall, are limited in their ability to objectively measure food intake. Digital photography has been identified as a promising addition to these techniques but has rarely been assessed in self-serve settings. We utilized digital photography to examine university students' food choices and consumption in a self-serve dining hall setting. Research assistants took pre- and post-photos of students' plates during lunch and dinner to assess selection (presence), servings, and consumption of MyPlate food groups. Four coders rated the same set of approximately 180 meals for inter-rater reliability analyses; approximately 50 additional meals were coded twice by each coder to assess intra-rater agreement. Inter-rater agreement on the selection, servings, and consumption of food groups was high at 93.5%; intra-rater agreement was similarly high with an average of 95.6% agreement. Coders achieved the highest rates of agreement in assessing if a food group was present on the plate (95-99% inter-rater agreement, depending on food group) and estimating the servings of food selected (81-98% inter-rater agreement). Estimating consumption, particularly for items such as beans and cheese that were often in mixed dishes, was more challenging (77-94% inter-rater agreement). Results suggest that the digital photography method presented is feasible for large studies in real-world environments and can provide an objective measure of food selection, servings, and consumption with a high degree of agreement between coders; however, to make accurate claims about the state of dietary intake in all-you-can-eat, self-serve settings, researchers will need to account for the possibility of diners taking multiple trips through the serving line.
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
The challenge of mapping between two medical coding systems.
Wojcik, Barbara E; Stein, Catherine R; Devore, Raymond B; Hassell, L Harrison
2006-11-01
Deployable medical systems patient conditions (PCs) designate groups of patients with similar medical conditions and, therefore, similar treatment requirements. PCs are used by the U.S. military to estimate field medical resources needed in combat operations. Information associated with each of the 389 PCs is based on subject matter expert opinion, instead of direct derivation from standard medical codes. Currently, no mechanisms exist to tie current or historical medical data to PCs. Our study objective was to determine whether reliable conversion between PC codes and International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes is possible. Data were analyzed for three professional coders assigning all applicable ICD-9-CM diagnosis codes to each PC code. Inter-rater reliability was measured by using Cohen's K statistic and percent agreement. Methods were developed to calculate kappa statistics when multiple responses could be selected from many possible categories. Overall, we found moderate support for the possibility of reliable conversion between PCs and ICD-9-CM diagnoses (mean kappa = 0.61). Current PCs should be modified into a system that is verifiable with real data.
From Novice to Expert: Problem Solving in ICD-10-PCS Procedural Coding
Rousse, Justin Thomas
2013-01-01
The benefits of converting to ICD-10-CM/PCS have been well documented in recent years. One of the greatest challenges in the conversion, however, is how to train the workforce in the code sets. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) has been described as a language requiring higher-level reasoning skills because of the system's increased granularity. Training and problem-solving strategies required for correct procedural coding are unclear. The objective of this article is to propose that the acquisition of rule-based logic will need to be augmented with self-evaluative and critical thinking. Awareness of how this process works is helpful for established coders as well as for a new generation of coders who will master the complexities of the system.
Apeldoorn, Adri T.; van Helvoirt, Hans; Ostelo, Raymond W.; Meihuizen, Hanneke; Kamper, Steven J.; van Tulder, Maurits W.; de Vet, Henrica C. W.
2016-01-01
Study design Observational inter-rater reliability study. Objectives To examine: (1) the inter-rater reliability of a modified version of Delitto et al.'s classification-based algorithm for patients with low back pain; (2) the influence of different levels of familiarity with the system; and (3) the inter-rater reliability of algorithm decisions in patients who clearly fit into a subgroup (clear classifications) and those who do not (unclear classifications). Methods Patients were examined twice on the same day by two of three participating physical therapists with different levels of familiarity with the system. Patients were classified into one of four classification groups. Raters were blind to the others' classification decision. In order to quantify the inter-rater reliability, percentages of agreement and Cohen's kappa were calculated. Results A total of 36 patients were included (clear classification n = 23; unclear classification n = 13). The overall rate of agreement was 53% and the kappa value was 0.34 [95% confidence interval (CI): 0.11-0.57], which indicated only fair inter-rater reliability. Inter-rater reliability for patients with a clear classification (agreement 52%, kappa value 0.29) was not higher than for patients with an unclear classification (agreement 54%, kappa value 0.33). Familiarity with the system (i.e. trained with written instructions and previous research experience with the algorithm) did not improve the inter-rater reliability. Conclusion Our pilot study challenges the inter-rater reliability of the classification procedure in clinical practice. Therefore, more knowledge is needed about factors that affect the inter-rater reliability, in order to improve the clinical applicability of the classification scheme.
Michie, Susan; Hyder, Natasha; Walia, Asha; West, Robert
2011-04-01
Individual behavioural support for smoking cessation is effective, but little is known about the 'active ingredients'. As a first step to establishing this, it is essential to have a consistent terminology for specifying intervention content. This study aimed to develop, for the first time, a reliable taxonomy of behaviour change techniques (BCTs) used within individual behavioural support for smoking cessation. Two source documents describing recommended practice were identified and analysed by two coders into component BCTs. The resulting taxonomy of BCTs was applied to 43 treatment manuals obtained from the English Stop Smoking Services (SSSs). In the first 28 of these, pairs of coders applied the taxonomy independently and inter-coder reliability was assessed. The BCTs were also categorised by two coders according to their main function, and inter-coder reliability for this was assessed. Forty-three BCTs were identified, which could be classified into four functions: 1) directly addressing motivation, e.g. providing rewards contingent on abstinence; 2) maximising self-regulatory capacity or skills, e.g. facilitating barrier identification and problem solving; 3) promoting adjuvant activities, e.g. advising on stop-smoking medication; and 4) supporting other BCTs, e.g. building general rapport. Percentage agreement in identifying BCTs and in categorising BCTs into their functions ranged from 86% to 95%, and discrepancies were readily resolved through discussion. It is possible to develop a reliable taxonomy of BCTs used in behavioural support for smoking cessation. This taxonomy provides a starting point for investigating the association between intervention content and outcome, and a basis for determining the competences required of a stop-smoking specialist.
An evaluation of computer assisted clinical classification algorithms.
Chute, C G; Yang, Y; Buntrock, J
1994-01-01
The Mayo Clinic has a long tradition of indexing patient records in high resolution and volume. Several algorithms have been developed which promise to help human coders in the classification process. We evaluate variations on code browsers and free text indexing systems with respect to their speed and error rates in our production environment. The more sophisticated indexing systems save measurable time in the coding process, but suffer from incompleteness which requires a back-up system or human verification. Expert Network does the best job of rank ordering clinical text, potentially enabling the creation of thresholds for the pass through of computer coded data without human review.
Meddings, Jennifer; Saint, Sanjay; McMahon, Laurence F
2010-06-01
To evaluate whether hospital-acquired catheter-associated urinary tract infections (CA-UTIs) are accurately documented in discharge records with the use of International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes so that nonpayment is triggered, as mandated by the Centers for Medicare and Medicaid Services (CMS) Hospital-Acquired Conditions Initiative. We conducted a retrospective medical record review of 80 randomly selected adult discharges from May 2006 through September 2007 from the University of Michigan Health System (UMHS) with secondary-diagnosis urinary tract infections (UTIs). One physician-abstractor reviewed each record to categorize UTIs as catheter associated and/or hospital acquired; these results (considered "gold standard") were compared with diagnosis codes assigned by hospital coders. Annual use of the catheter association code (996.64) by UMHS coders was compared with state and US rates by using Healthcare Cost and Utilization Project data. Patient mean age was 58 years; 56 (70%) were women; median length of hospital stay was 6 days; 50 patients (62%) used urinary catheters during hospitalization. Hospital coders had listed 20 secondary-diagnosis UTIs (25%) as hospital acquired, whereas physician-abstractors indicated that 37 (46%) were hospital acquired. Hospital coders had identified no CA-UTIs (code 996.64 was never used), whereas physician-abstractors identified 36 CA-UTIs (45%; 28 hospital acquired and 8 present on admission). Catheter use often was evident only from nursing notes, which, unlike physician notes, cannot be used by coders to assign discharge codes. State and US annual rates of 996.64 coding (approximately 1% of secondary-diagnosis UTIs) were similar to those at UMHS. Hospital coders rarely use the catheter association code needed to identify CA-UTI among secondary-diagnosis UTIs. 
Coders often listed a UTI as present on admission, although the medical record indicated that it was hospital acquired. Because coding of hospital-acquired CA-UTI seems to be fraught with error, nonpayment according to CMS policy may not reliably occur.
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
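The idea of iteratively adjusting an auto-coder's confidence scores with code co-occurrence propensities can be sketched as follows; the blending rule, default values, and parameters here are illustrative assumptions, not the paper's actual algorithm.

```python
def rescore(base_scores, cooccur, rounds=5, alpha=0.5):
    """Illustrative sketch: iteratively blend each code's base confidence with
    support from co-occurring codes.

    base_scores: {code: confidence} from a primary auto-coder.
    cooccur:     {(i, j): P(i assigned | j assigned)} estimated from coded records."""
    scores = dict(base_scores)
    for _ in range(rounds):
        new = {}
        for i, base in base_scores.items():
            # Expected support for code i, weighted by current belief in other codes.
            # When no co-occurrence estimate exists, fall back to i's own base score.
            support = sum(cooccur.get((i, j), base) * p for j, p in scores.items() if j != i)
            weight = sum(p for j, p in scores.items() if j != i)
            new[i] = (1 - alpha) * base + alpha * support / weight if weight else base
        scores = new
    return scores
```

In the test, a code that rarely co-occurs with a confidently assigned one sees its score pulled down, while the confident code is unaffected.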
Mastroleo, Nadine R; Mallett, Kimberly A; Turrisi, Rob; Ray, Anne E
2009-09-01
Despite the expanding use of undergraduate student peer counseling interventions aimed at reducing college student drinking, few programs evaluate peer counselors' competency to conduct these interventions. The present research describes the development and psychometric assessments of the Peer Proficiency Assessment (PEPA), a new tool for examining Motivational Interviewing adherence in undergraduate student peer delivered interventions. Twenty peer delivered sessions were evaluated by master and undergraduate student coders using a cross-validation design to examine peer based alcohol intervention sessions. Assessments revealed high inter-rater reliability between student and master coders and good correlations between previously established fidelity tools. Findings lend support for the use of the PEPA to examine peer counselor competency. The PEPA, training for use, inter-rater reliability information, construct and predictive validity, and tool usefulness are described.
The Simple Video Coder: A free tool for efficiently coding social video data.
Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C
2017-08-01
Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
Using FDA reports to inform a classification for health information technology safety problems
Ong, Mei-Sing; Runciman, William; Coiera, Enrico
2011-01-01
Objective To expand an emerging classification for problems with health information technology (HIT) using reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database. Design HIT events submitted to MAUDE were retrieved using a standardized search strategy. Using an emerging classification with 32 categories of HIT problems, a subset of relevant events was iteratively analyzed to identify new categories. Two coders then independently classified the remaining events into one or more categories. Free-text descriptions were analyzed to identify the consequences of events. Measurements Descriptive statistics by number of reported problems per category and by consequence; inter-rater reliability analysis using the κ statistic for the major categories and consequences. Results A search of 899 768 reports from January 2008 to July 2010 yielded 1100 reports about HIT. After removing duplicate and unrelated reports, 678 reports describing 436 events remained. The authors identified four new categories, describing problems with software functionality, system configuration, interface with devices, and network configuration, expanding the classification to 36 categories. Examination of the 436 events revealed 712 problems; 96% were machine-related and 4% were problems at the human–computer interface. Almost half (46%) of the events related to hazardous circumstances. Of the 46 events (11%) associated with patient harm, four deaths were linked to HIT problems (0.9% of 436 events). Conclusions Only 0.1% of the MAUDE reports searched were related to HIT. Nevertheless, Food and Drug Administration reports did prove to be a useful new source of information about the nature of software problems and their safety implications, with potential to inform strategies for safe design and implementation.
Pianta, R C; Longmaid, K; Ferguson, J E
1999-06-01
Investigated an attachment-based theoretical framework and classification system, introduced by Kaplan and Main (1986), for interpreting children's family drawings. This study concentrated on the psychometric properties of the system and the relation between drawings classified using this system and teacher ratings of classroom social-emotional and behavioral functioning, controlling for child age, ethnic status, intelligence, and fine motor skills. This nonclinical sample consisted of 200 kindergarten children of diverse racial and socioeconomic status (SES). Limited support for reliability of this classification system was obtained. Kappas for overall classifications of drawings (e.g., secure) exceeded .80 and mean kappa for discrete drawing features (e.g., figures with smiles) was .82. Coders' endorsement of the presence of certain discrete drawing features predicted their overall classification at 82.5% accuracy. Drawing classification was related to teacher ratings of classroom functioning independent of child age, sex, race, SES, intelligence, and fine motor skills (with p values for the multivariate effects ranging from .043-.001). Results are discussed in terms of the psychometric properties of this system for classifying children's representations of family and the limitations of family drawing techniques for young children.
Regression periods in infancy: a case study from Catalonia.
Sadurní, Marta; Rostan, Carlos
2002-05-01
Based on Rijt-Plooij and Plooij's (1992) research on the emergence of regression periods in the first two years of life, the presence of such periods in a group of 18 babies (10 boys and 8 girls, aged between 3 weeks and 14 months) from a Catalonian population was analyzed. The measurements were a questionnaire filled in by the infants' mothers, a semi-structured weekly tape-recorded interview, and observations in their homes. The procedure and the instruments used in the project follow those proposed by Rijt-Plooij and Plooij. Our results confirm the existence of regression periods in the first year of children's life. Inter-coder agreement for trained coders was 78.2% and within-coder agreement was 90.1%. In the discussion, the possible meaning and relevance of regression periods for understanding development within a psychobiological and social framework are considered.
Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S
2012-03-01
Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, an automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and the Standardized Occupation and Industry Coding software program. We calculated agreement between coding methods of classification into major Census occupational groups. Automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding. However, some manual coding is required to code incomplete information. We found substantial variability between coders in the assignment of occupations although not as large for major groups.
Serial turbo trellis coded modulation using a serially concatenated coder
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)
2010-01-01
Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
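The encoder chain described above (outer coder, interleaver, recursive inner coder, mapping element) can be illustrated with a deliberately simplified sketch. The rate-1/2 repetition outer code, the fixed permutation, the accumulator inner code, and the QPSK mapping below are stand-ins chosen for brevity, not the codes claimed in the patent:

```python
def outer_code(bits):
    # Toy outer coder: rate-1/2 repetition code.
    return [b for bit in bits for b in (bit, bit)]

def interleave(bits, perm):
    # Permute the outer-coded bits with a fixed interleaver.
    return [bits[i] for i in perm]

def inner_code(bits):
    # Recursive (accumulator) inner coder: running XOR of the input,
    # a rate-1 recursive code often used in serial concatenation.
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def map_qpsk(bits):
    # Mapping element: bit pairs to QPSK constellation points.
    table = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}
    return [table[(bits[i], bits[i+1])] for i in range(0, len(bits), 2)]

data = [1, 0, 1, 1]
coded = outer_code(data)            # 8 outer-coded bits
perm = [3, 6, 1, 4, 7, 0, 5, 2]     # fixed interleaver permutation
symbols = map_qpsk(inner_code(interleave(coded, perm)))
print(symbols)  # 4 QPSK symbols
```

The recursion in the inner coder (each output depends on all previous inputs) is what makes iterative decoding effective at the receiver; the patent's contribution lies in jointly choosing the inner code and mapping to maximize the effective free Euclidean distance, which this sketch does not attempt.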
Cho, Chul-Hyun; Oh, Joo Han; Jung, Gu-Hee; Moon, Gi-Hyuk; Rhyou, In Hyeok; Yoon, Jong Pil; Lee, Ho Min
2015-10-01
As there is substantial variation in the classification and diagnosis of lateral clavicle fractures, proper management can be challenging. Although the Neer classification system modified by Craig has been widely used, no study has assessed its validity through inter- and intrarater agreement. To determine the inter- and intrarater agreement of the modified Neer classification system and associated treatment choice for lateral clavicle fractures and to assess whether 3-dimensional computed tomography (3D CT) improves the level of agreement. Cohort study (diagnosis); Level of evidence, 3. Nine experienced shoulder specialists and 9 orthopaedic fellows evaluated 52 patients with lateral clavicle fractures, completing fracture typing according to the modified Neer classification system and selecting a treatment choice for each case. Web-based assessment was performed using plain radiographs only, followed by the addition of 3D CT images 2 weeks later. This procedure was repeated 4 weeks later. Fleiss κ values were calculated to estimate the inter- and intrarater agreement. Based on plain radiographs only, the inter- and intrarater agreement of the modified Neer classification system was regarded as fair (κ = 0.344) and moderate (κ = 0.496), respectively; the inter- and intrarater agreement of treatment choice were both regarded as moderate (κ = 0.465 and 0.555, respectively). Based on the plain radiographs and 3D CT images, the inter- and intrarater agreement of the classification system was regarded as fair (κ = 0.317) and moderate (κ = 0.508), respectively; the inter- and intrarater agreement of treatment choice were regarded as moderate (κ = 0.463) and substantial (κ = 0.623), respectively. There were no significant differences in the level of agreement between the plain radiographs only and the plain radiographs plus 3D CT images for any κ values (all P > .05).
The level of interrater agreement of the modified Neer classification system for lateral clavicle fractures was fair. Additional 3D CT did not improve the overall level of interrater or intrarater agreement of the modified Neer classification system or associated treatment choice. To eliminate a common source of disagreement among surgeons, a new classification system to focus on unclassifiable fracture types is needed. © 2015 The Author(s).
Zhou, Yuefang; Black, Rolf; Freeman, Ruth; Herron, Daniel; Humphris, Gerry; Menzies, Rachel; Quinn, Sandra; Scott, Lesley; Waller, Annalu
2014-11-01
The VR-CoDES has previously been applied in the dental context. However, we know little about how dental patients with intellectual disabilities (ID) and complex communication needs express their emotional distress during dental visits. This is the first study to explore the applicability of the VR-CoDES to a dental context involving patients with ID. Fourteen dental consultations were video recorded and coded using the VR-CoDES, assisted by the additional guidelines for the VR-CoDES in a dental context. Both inter- and intra-coder reliabilities were checked on the seven consultations where cues were observed. Sixteen cues (eight non-verbal) were identified within seven of the 14 consultations. Twenty responses were observed (12 reducing space), with four multiple responses. Cohen's kappa was 0.76 (inter-coder) and 0.88 (intra-coder). With the additional guidelines, cues and responses were reliably identified. The predominance of non-verbal cue expression is consistent with reports in the literature of non-verbal expression of emotion by people with ID. Further guidance is needed to improve coding accuracy for multiple providers' responses and to investigate the potential impact of conflicting responses on patients. The findings provide a useful initial step towards an ongoing exploration of how healthcare providers identify and manage the emotional distress of patients with ID. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Reliability of SNOMED-CT Coding by Three Physicians using Two Terminology Browsers
Chiang, Michael F.; Hwang, John C.; Yu, Alexander C.; Casper, Daniel S.; Cimino, James J.; Starren, Justin
2006-01-01
SNOMED-CT has been promoted as a reference terminology for electronic health record (EHR) systems. Many important EHR functions are based on the assumption that medical concepts will be coded consistently by different users. This study is designed to measure agreement among three physicians using two SNOMED-CT terminology browsers to encode 242 concepts from five ophthalmology case presentations in a publicly-available clinical journal. Inter-coder reliability, based on exact coding match by each physician, was 44% using one browser and 53% using the other. Intra-coder reliability testing revealed that a different SNOMED-CT code was obtained up to 55% of the time when the two browsers were used by one user to encode the same concept. These results suggest that the reliability of SNOMED-CT coding is imperfect, and may be a function of browsing methodology. A combination of physician training, terminology refinement, and browser improvement may help increase the reproducibility of SNOMED-CT coding. PMID:17238317
Coder Drift: A Reliability Problem for Teacher Observations.
ERIC Educational Resources Information Center
Marston, Paul T.; And Others
The results of two experiments support the hypothesis of "coder drift" which is defined as change that takes place while trained coders are using a system for a number of classroom observation sessions. The coding system used was a modification of the low-inference Flanders System of Interaction Analysis which calls for assigning…
Assessing Attachment in Psychotherapy: Validation of the Patient Attachment Coding System (PACS).
Talia, Alessandro; Miller-Bottome, Madeleine; Daniel, Sarah I F
2017-01-01
The authors present and validate the Patient Attachment Coding System (PACS), a transcript-based instrument that assesses clients' in-session attachment based on any session of psychotherapy, in multiple treatment modalities. One hundred and sixty clients in different types of psychotherapy (cognitive-behavioural, cognitive-behavioural-enhanced, psychodynamic, relational, supportive) and from three different countries were administered the Adult Attachment Interview (AAI) prior to treatment, and one session for each client was rated with the PACS by independent coders. Results indicate strong inter-rater reliability and high convergent validity of the PACS scales and classifications with the AAI. These results present the PACS as a practical alternative to the AAI in psychotherapy research and suggest that clinicians using the PACS can assess clients' attachment status on an ongoing basis by monitoring clients' verbal activity. They also provide information about the ways in which differences in attachment status play out in therapy sessions, and further the study of attachment in psychotherapy from a pre-treatment client factor to a process variable. Key practitioner messages: the PACS is a valid measure of attachment that can classify clients' attachment based on any single psychotherapy transcript, in many therapeutic modalities; client differences in attachment manifest in part independently of the therapist's contributions; and client adult attachment patterns are likely to affect psychotherapeutic processes. Copyright © 2015 John Wiley & Sons, Ltd.
Magrabi, Farah; Ong, Mei-Sing; Runciman, William; Coiera, Enrico
2010-01-01
To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify 'natural categories' for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available medical specialty, time of day and consequences were examined. Descriptive statistics; inter-rater reliability. A search of 42,616 incidents from 2003 to 2005 yielded 123 computer related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine related and 45% were attributed to human-computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine related problems (70%) whereas rework was a major consequence of human-computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Only 0.2% of all incidents reported were computer related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions.
Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.
2000-01-01
Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226
Patange Subba Rao, Sheethal Prasad; Lewis, James; Haddad, Ziad; Paringe, Vishal; Mohanty, Khitish
2014-10-01
The aim of the study was to evaluate inter-observer reliability and intra-observer reproducibility between the three-column classification and Schatzker classification systems using 2D and 3D CT models. Fifty-two consecutive patients with tibial plateau fractures were evaluated by five orthopaedic surgeons. All patients were classified into the Schatzker and three-column classification systems using x-rays and 2D and 3D CT images. Inter-observer reliability was evaluated in the first round and intra-observer reliability was determined during a second round 2 weeks later. The average intra-observer reproducibility for the three-column classification was substantial to excellent in all sub-classifications, compared with the Schatzker classification. The inter-observer kappa values increased from substantial to excellent for the three-column classification but only to moderate for the Schatzker classification. The average values for the three-column classification across all categories are as follows: (I-III) k2D = 0.718, 95% CI 0.554-0.864, p < 0.0001 and average k3D = 0.874, 95% CI 0.754-0.890, p < 0.0001. For the Schatzker classification system, the average values across all six categories are as follows: (I-VI) k2D = 0.536, 95% CI 0.365-0.685, p < 0.0001 and average k3D = 0.552, 95% CI 0.405-0.700, p < 0.0001. These values are statistically significant. Statistically significant inter-observer values in both rounds were noted with the three-column classification, indicating excellent agreement. Intra-observer reproducibility for the three-column classification improved compared with the Schatzker classification. The three-column classification seems to be an effective way to characterise and classify fractures of the tibial plateau.
Vikström, Anna; Skånér, Ylva; Strender, Lars-Erik; Nilsson, Gunnar H
2007-01-01
Background Terminologies and classifications are used for different purposes and have different structures and content. Linking or mapping terminologies and classifications has been pointed out as a possible way to achieve various aims as well as to attain additional advantages in describing and documenting health care data. The objectives of this study were: • to explore and develop rules to be used in a mapping process • to evaluate intercoder reliability and the assessed degree of concordance when the 'Swedish primary health care version of the International Classification of Diseases version 10' (ICD-10) is matched to the Systematized Nomenclature of Medicine, Clinical Terms (SNOMED CT) • to describe characteristics in the coding systems that are related to obstacles to high quality mapping. Methods Mapping (interpretation, matching, assessment and rule development) was done by two coders. The Swedish primary health care version of ICD-10 with 972 codes was randomly divided into an allotment of three sets of categories, used in three mapping sequences, A, B and C. Mapping was done independently by the coders and new rules were developed between the sequences. Intercoder reliability was measured by comparing the results after each set. The extent of matching was assessed as either 'partly' or 'completely concordant'. Results General principles for mapping were outlined before the first sequence, A. New mapping rules had significant impact on the results between sequences A - B (p < 0.01) and A - C (p < 0.001). The intercoder reliability in our study reached 83%. Obstacles to high quality mapping were mainly a lack of agreement by the coders due to structural and content factors in SNOMED CT and in the current ICD-10 version. The predominant reasons for this were difficulties in interpreting the meaning of the categories in the current ICD-10 version, and the presence of many related concepts in SNOMED CT.
Conclusion Mapping from ICD-10-categories to SNOMED CT needs clear and extensive rules. It is possible to reach high intercoder reliability in mapping from ICD-10-categories to SNOMED CT. However, several obstacles to high quality mapping remain due to structure and content characteristics in both coding systems. PMID:17472757
Ringdal, Kjetil G; Skaga, Nils Oddvar; Steen, Petter Andreas; Hestnes, Morten; Laake, Petter; Jones, J Mary; Lossius, Hans Morten
2013-01-01
Pre-injury comorbidities can influence the outcomes of severely injured patients. Pre-injury comorbidity status, graded according to the American Society of Anesthesiologists Physical Status (ASA-PS) classification system, is an independent predictor of survival in trauma patients and is recommended as a comorbidity score in the Utstein Trauma Template for Uniform Reporting of Data. Little is known about the reliability of pre-injury ASA-PS scores. The objective of this study was to examine whether the pre-injury ASA-PS system was a reliable scale for grading comorbidity in trauma patients. Nineteen Norwegian trauma registry coders were invited to participate in a reliability study in which 50 real but anonymised patient medical records were distributed. Reliability was analysed using quadratic weighted kappa (κ(w)) analysis with 95% CI as the primary outcome measure and unweighted kappa (κ) analysis, which included unknown values, as a secondary outcome measure. Fifteen of the invitees responded to the invitation, and ten participated. We found moderate (κ(w)=0.77 [95% CI: 0.64-0.87]) to substantial (κ(w)=0.95 [95% CI: 0.89-0.99]) rater-against-reference standard reliability using κ(w) and fair (κ=0.46 [95% CI: 0.29-0.64]) to substantial (κ=0.83 [95% CI: 0.68-0.94]) reliability using κ. The inter-rater reliability ranged from moderate (κ(w)=0.66 [95% CI: 0.45-0.81]) to substantial (κ(w)=0.96 [95% CI: 0.88-1.00]) for κ(w) and from slight (κ=0.36 [95% CI: 0.21-0.54]) to moderate (κ=0.75 [95% CI: 0.62-0.89]) for κ. The rater-against-reference standard reliability varied from moderate to substantial for the primary outcome measure and from fair to substantial for the secondary outcome measure. The study findings indicate that the pre-injury ASA-PS scale is a reliable score for classifying comorbidity in trauma patients. Copyright © 2012 Elsevier Ltd. All rights reserved.
Health information management: an introduction to disease classification and coding.
Mony, Prem Kumar; Nagaraj, C
2007-01-01
Morbidity and mortality data constitute an important component of a health information system, and their coding enables uniform data collation and analysis as well as meaningful comparisons between regions or countries. Strengthening the recording and reporting systems for health monitoring is a basic requirement for an efficient health information management system. Increased advocacy for and awareness of a uniform coding system, together with adequate capacity building of physicians, coders and other allied health and information technology personnel, would pave the way for a valid and reliable health information management system in India. The core requirements for the implementation of disease coding are: (i) support from national/institutional health administrators; (ii) widespread availability of the ICD-10 material for morbidity and mortality coding; (iii) enhanced human and financial resources; and (iv) optimal use of informatics. We describe the methodology of a disease classification and codification system, as well as its applications for developing and maintaining an effective health information management system for India.
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach users, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
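The lossy-plus-residual design can be sketched in a few lines. The coarse quantizer below stands in for the embedded wavelet coder, and the run-length coder is the simplest possible variant; neither is the authors' implementation, but the roundtrip shows why the scheme is exactly lossless:

```python
def lossy_stage(pixels, step=16):
    # Placeholder for the embedded wavelet coder: coarse quantization.
    return [(p // step) * step for p in pixels]

def run_length_encode(values):
    # Lossless run-length coder: (value, count) pairs.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def run_length_decode(runs):
    return [v for v, count in runs for _ in range(count)]

def compress(pixels):
    approx = lossy_stage(pixels)
    # The residual captures everything the lossy stage discarded.
    residual = [p - a for p, a in zip(pixels, approx)]
    return run_length_encode(approx), run_length_encode(residual)

def decompress(approx_runs, residual_runs):
    approx = run_length_decode(approx_runs)
    residual = run_length_decode(residual_runs)
    return [a + r for a, r in zip(approx, residual)]

image = [12, 12, 13, 13, 13, 200, 200, 201]
assert decompress(*compress(image)) == image  # exact, lossless reconstruction
```

Because the residual is stored losslessly, the reconstruction is bit-exact regardless of how aggressive the lossy stage is; the lossy stage merely shapes the residual so that it compresses well.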
The infant disorganised attachment classification: "Patterning within the disturbance of coherence".
Reijman, Sophie; Foster, Sarah; Duschinsky, Robbie
2018-03-01
Since its introduction by Main and Solomon in 1990, the infant disorganised attachment classification has functioned as a predictor of mental health in developmental psychology research. It has also been used by practitioners as an indicator of inadequate parenting and developmental risk, at times with greater confidence than research would support. Although attachment disorganisation takes many forms, it is generally understood to reflect a child's experience of being repeatedly alarmed by their parent's behaviour. In this paper we analyse how the infant disorganised attachment classification has been stabilised and interpreted, reporting results from archival study, ethnographic observations at four training institutes for coding disorganised attachment, interviews with researchers, certified coders and clinicians, and focus groups with child welfare practitioners. Our analysis points to the role of power/knowledge disjunctures in hindering communication between key groups: Main and Solomon and their readers; the oral culture of coders and the written culture of published papers; the research community and practitioners. We highlight how understandings of disorganised attachment have been magnetised by a simplified image of a child fearful of his or her own parent. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Maximum a posteriori joint source/channel coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Gibson, Jerry D.
1991-01-01
A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder that takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
The ICI classification for calcaneal injuries: a validation study.
Frima, Herman; Eshuis, Rienk; Mulder, Paul; Leenen, Luke
2012-06-01
The integral classification of injuries (ICI), by Zwipp et al., was developed as a classification system for injuries of the bones, joints, cartilage and ligaments of the foot. It follows the principles of the comprehensive classification of fractures by Müller et al. The ICI was developed for 'everyday use' and scientific purposes. Our aim was to perform a validation study of this classification system applied to calcaneal injuries. A panel of five experienced trauma and orthopaedic surgeons evaluated the ICI score in 20 calcaneal injuries. After 2 months, a second classification was performed in a different order. Inter- and intra-observer variability were evaluated by kappa statistics. Panel members were not able to evaluate capsular and ligamentous injuries based on X-ray and computed tomography (CT) films. Two injuries were excluded for logistical reasons. The inter-observer agreement based on 18 injuries of bone and joints was slight: kappa 0.14 (90% confidence interval (CI): 0.05-0.22). The intra-observer agreement was fair: kappa 0.31 (90% CI: 0.22-0.41). Overall, the panel rated the system as very complicated and not practical. The ICI is a complicated classification system with slight to fair inter- and intra-observer variability. It might not be a practical classification system for calcaneal injuries in 'everyday use' or for scientific purposes. Copyright © 2011 Elsevier Ltd. All rights reserved.
2014-01-01
Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in clinical practice. PMID:24981916
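The two-stage decision logic described above (VEB first, from a model over randomly projected features; then SVEB, from an RR-interval ratio threshold) can be sketched as follows. The random projection is genuine, but the nearest-centroid rule below stands in for the paper's SVM ensemble, and the prototypes and 0.85 threshold are invented for illustration:

```python
import random

def make_projection(in_dim, out_dim, seed=0):
    # Fixed random projection matrix with Gaussian entries.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(out_dim)]

def project(x, matrix):
    return [sum(w * v for w, v in zip(row, x)) for row in matrix]

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def classify_beat(features, rr_ratio, proj, veb_centroid, other_centroid,
                  rr_threshold=0.85):
    """Hierarchical rule: test for VEB first, then SVEB, else normal."""
    # Stage 1: VEB detection on the randomly projected morphology features
    # (a nearest-centroid rule stands in for the SVM ensemble).
    z = project(features, proj)
    if dist(z, veb_centroid) < dist(z, other_centroid):
        return "VEB"
    # Stage 2: SVEB detection by RR-interval ratio; a premature beat
    # shortens the current RR interval relative to its neighbours.
    if rr_ratio < rr_threshold:
        return "SVEB"
    return "Normal"

# Toy setup: 4 morphology features projected down to 2 dimensions.
proj = make_projection(in_dim=4, out_dim=2)
veb_centroid = project([5.0, 5.0, 5.0, 5.0], proj)    # made-up VEB prototype
other_centroid = project([0.0, 0.0, 0.0, 0.0], proj)  # made-up non-VEB prototype

print(classify_beat([5.0, 5.0, 5.0, 5.0], rr_ratio=1.0, proj=proj,
                    veb_centroid=veb_centroid,
                    other_centroid=other_centroid))  # → VEB
```

The point of the hierarchy is that each stage uses the features best suited to its class: waveform morphology separates VEB well, while SVEB is mainly distinguished by rhythm (the shortened RR interval), which is why a single shared feature set performs poorly on both.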
Urrutia, Julio; Zamora, Tomas; Yurac, Ratko; Campos, Mauricio; Palma, Joaquin; Mobarec, Sebastian; Prada, Carlos
2017-03-01
An agreement study. The aim of this study was to perform an independent interobserver and intraobserver agreement assessment of the AOSpine subaxial cervical spine injury classification system. The AOSpine subaxial cervical spine injury classification system was recently described. It showed substantial inter- and intraobserver agreement in the study describing it; however, an independent evaluation has not been performed. Anteroposterior and lateral radiographs, computed tomography scans, and magnetic resonance imaging of 65 patients with acute traumatic subaxial cervical spine injuries were selected and classified using the morphologic grading of the subaxial cervical spine injury classification system by 6 evaluators (3 spine surgeons and 3 orthopedic surgery residents). After a 6-week interval, the 65 cases were presented to the same evaluators in a random sequence for repeat evaluation. The kappa coefficient (κ) was used to determine the inter- and intraobserver agreement. The interobserver agreement was substantial when considering the fracture main types (A, B, C, or F), with κ = 0.61 (0.57-0.64), but moderate when considering the subtypes: κ = 0.57 (0.54-0.60). The intraobserver agreement was substantial considering the fracture types, with κ = 0.68 (0.62-0.74) and considering subtypes, κ = 0.62 (0.57-0.66). No significant differences were observed between spine surgeons and orthopedic residents in the overall inter- and intraobserver agreement, or in the inter- and intraobserver agreement of specific A, B, C, or F types of injury. This classification allows adequate agreement among different observers and by the same observer on separate occasions. Future prospective studies should determine whether this classification allows surgeons to decide the best treatment for patients with subaxial cervical spine injuries. Level of evidence: 3.
ERIC Educational Resources Information Center
Hidecker, Mary Jo Cooley; Ho, Nhan Thi; Dodge, Nancy; Hurvitz, Edward A.; Slaughter, Jaime; Workinger, Marilyn Seif; Kent, Ray D.; Rosenbaum, Peter; Lenski, Madeleine; Messaros, Bridget M.; Vanderbeek, Suzette B.; Deroos, Steven; Paneth, Nigel
2012-01-01
Aim: To investigate the relationships among the Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) in children with cerebral palsy (CP). Method: Using questionnaires describing each scale, mothers reported GMFCS, MACS, and CFCS levels in 222…
2013-01-01
Background The Parent-Infant Relationship Global Assessment Scale (PIR-GAS) signifies a conceptually relevant development in the multi-axial, developmentally sensitive classification system DC:0-3R for preschool children. However, information about the reliability and validity of the PIR-GAS is rare. A review of the available empirical studies suggests that in research, PIR-GAS ratings can be based on a ten-minute videotaped interaction sequence. The qualification of raters may be very heterogeneous across studies. Methods To test whether the use of the PIR-GAS still allows for a reliable assessment of the parent-infant relationship, our study compared PIR-GAS ratings based on a full-information procedure across multiple settings with ratings based on a ten-minute video by two doctoral candidates in medicine. For each mother-child dyad at a family day hospital (N = 48), we obtained two video ratings and one full-information rating at admission to therapy and at discharge. This pre-post design allowed for a replication of our findings across the two measurement points. We focused on the inter-rater reliability between the video coders, as well as between the video and full-information procedures, including mean differences and correlations between the raters. Additionally, we examined aspects of the validity of video and full-information ratings based on their correlation with measures of child and maternal psychopathology. Results Our results showed that ten-minute video ratings and full-information PIR-GAS ratings were not interchangeable. Most results at admission could be replicated by the data obtained at discharge. We concluded that a higher degree of standardization of the assessment procedure should increase the reliability of the PIR-GAS, and a more thorough theoretical foundation of the manual should increase its validity. PMID:23705962
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction, based upon subpixel-accurate MCTF-based wavelet video coding. The first approach uses a two-stage encoding strategy in which a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of this approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, the freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that, thanks to the lifting implementation, we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
Biocoder: A programming language for standardizing and automating biology protocols
2010-01-01
Background Published descriptions of biology protocols are often ambiguous and incomplete, making them difficult to replicate in other laboratories. However, there is increasing benefit to formalizing the descriptions of protocols, as laboratory automation systems (such as microfluidic chips) are becoming increasingly capable of executing them. Our goal in this paper is to improve both the reproducibility and automation of biology experiments by using a programming language to express the precise series of steps taken. Results We have developed BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists. We have implemented over 65 protocols in BioCoder; the most complex of these was successfully executed by a biologist in the laboratory using BioCoder as the only reference. We argue that BioCoder exposes and resolves ambiguities in existing protocols, and could provide the software foundations for future automation platforms. BioCoder is freely available for download at http://research.microsoft.com/en-us/um/india/projects/biocoder/. Conclusions BioCoder represents the first practical programming system for standardizing and automating biology protocols. Our vision is to change the way that experimental methods are communicated: rather than publishing a written account of the protocols used, researchers will simply publish the code. Our experience suggests that this practice is tractable and offers many benefits. We invite other researchers to leverage BioCoder to improve the precision and completeness of their protocols, and also to adapt and extend BioCoder to new domains. PMID:21059251
A new DWT/MC/DPCM video compression framework based on EBCOT
NASA Astrophysics Data System (ADS)
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and investigation is still ongoing. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to video. Secondly, this framework offers a good interface for a Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate versus distortion.
ERIC Educational Resources Information Center
Fox, Edward A.
1987-01-01
Discusses the CODER system, which was developed to investigate the application of artificial intelligence methods to increase the effectiveness of information retrieval systems, particularly those involving heterogeneous documents. Highlights include the use of PROLOG programing, blackboard-based designs, knowledge engineering, lexicological…
A Classification Scheme for Analyzing Mobile Apps Used to Prevent and Manage Disease in Late Life
Wang, Aiguo; Lu, Xin; Chen, Hongtu; Li, Changqun; Levkoff, Sue
2014-01-01
Background There are several mobile apps that offer tools for disease prevention and management among older adults, and promote health behaviors that could potentially reduce or delay the onset of disease. A classification scheme that categorizes apps could be useful to both older adult app users and app developers. Objective The objective of our study was to build and evaluate the effectiveness of a classification scheme that classifies mobile apps available for older adults in the “Health & Fitness” category of the iTunes App Store. Methods We constructed a classification scheme for mobile apps according to three dimensions: (1) the Precede-Proceed Model (PPM), which classifies mobile apps in terms of predisposing, enabling, and reinforcing factors for behavior change; (2) health care process, specifically prevention versus management of disease; and (3) health conditions, including physical health and mental health. Content analysis was conducted by the research team on health and fitness apps designed specifically for older adults, as well as those applicable to older adults, released during the months of June and August 2011 and August 2012. Face validity was assessed by a different group of individuals, who were not related to the study. A reliability analysis was conducted to confirm the accuracy of the coding scheme for the sample apps in this study. Results After applying sample inclusion and exclusion criteria, a total of 119 apps were included in the study sample, of which 26/119 (21.8%) were released in June 2011, 45/119 (37.8%) in August 2011, and 48/119 (40.3%) in August 2012. Face validity was determined by interviewing 11 people, who agreed that this scheme accurately reflected the nature of these applications. The entire study sample was successfully coded, demonstrating satisfactory inter-rater reliability by two independent coders (95.8% initial concordance and 100% concordance after consensus was reached).
The apps included in the study sample were more likely to be used for the management of disease than prevention of disease (109/119, 91.6% vs 15/119, 12.6%). More apps contributed to physical health rather than mental health (81/119, 68.1% vs 47/119, 39.5%). Enabling apps (114/119, 95.8%) were more common than reinforcing (20/119, 16.8%) or predisposing apps (10/119, 8.4%). Conclusions The findings, including face validity and inter-rater reliability, support the integrity of the proposed classification scheme for categorizing mobile apps for older adults in the “Health and Fitness” category available in the iTunes App Store. Using the proposed classification system, older adult app users would be better positioned to identify apps appropriate for their needs, and app developers would be able to obtain the distributions of available mobile apps for health-related concerns of older adults more easily. PMID:25098687
Reliability in content analysis: The case of semantic feature norms classification.
Bolognesi, Marianna; Pilgram, Roosmaryn; van den Heerik, Romy
2017-12-01
Semantic feature norms (e.g., STIMULUS: car → RESPONSE:
Therapist-delivered and self-help interventions for gambling problems: A review of contents.
Rodda, Simone; Merkouris, Stephanie S; Abraham, Charles; Hodgins, David C; Cowlishaw, Sean; Dowling, Nicki A
2018-06-13
Background and aims To date, no systematic approach to identifying the content and characteristics of psychological interventions used to reduce gambling or problem gambling has been developed. This study aimed to develop a reliable classification system capable of identifying intervention characteristics that could, potentially, account for greater or lesser effectiveness. Methods Intervention descriptions were content analyzed to identify common and differentiating characteristics. A coder manual was developed and applied by three independent coders to identify the presence or absence of defined characteristics in 46 psychological and self-help gambling interventions. Results The final classification taxonomy, entitled Gambling Intervention System of CharacTerization (GIST), included 35 categories of intervention characteristics. These were assigned to four groups: (a) types of change techniques (18 categories; e.g., cognitive restructuring and relapse prevention), (b) participant and study characteristics (6 categories; e.g., recruitment strategy and remuneration policy), (c) characteristics of the delivery and conduct of interventions (11 categories; e.g., modality of delivery and therapist involvement), and (d) evaluation characteristics (e.g., type of control group). Interrater reliability of identification of defined characteristics was high (κ = 0.80-1.00). Discussion This research provides a tool that allows systematic identification of intervention characteristics, thereby enabling consideration not only of whether interventions are effective, but also of which domain-relevant characteristics account for greater or lesser effectiveness. The taxonomy also facilitates standardized description of intervention content in a field in which many diverse interventions have been evaluated.
Conclusion Application of this coding tool has the potential to accelerate the development of more efficient and effective therapist-delivered and self-directed interventions to reduce gambling problems.
Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry
2012-08-01
To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions, and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme on specialised behavioural coding software. Reliability was calculated using Cohen's Kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme on The Observer XT8.0 system. Two visualizations of interaction patterns demonstrated the scheme's capability of capturing complex interaction processes. Cohen's Kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system for recording interactions between dental nurses and 3-5-year-old children. It records and displays complex nurse-child interactive behaviours, is easily administered, and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings, and its development procedure may inform the development of similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Analyzing the Structure and Content of Public Health Messages
Morrison, Frances P.; Kukafka, Rita; Johnson, Stephen B.
2005-01-01
Background Health messages are crucial to the field of public health in effecting behavior change, but little research is available to assist writers in composing the overall structure of a message. In order to develop software to assist individuals in constructing effective messages, the structure of existing health messages must be understood, and an appropriate method for analyzing health message structure developed. Methods Seventy-two messages from expert sources were used for development of the method, which was then tested for reproducibility using ten randomly selected health messages. Four raters analyzed the messages and inter-coder agreement was calculated. Results A method for analyzing the structure of the messages was developed using sublanguage analysis and discourse analysis. The overall kappa between the four coders was 0.69. Conclusion A novel framework for characterizing health message structure and a method for analyzing messages appears to be reproducible and potentially useful for creating an authoring tool. PMID:16779098
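An overall kappa across four coders is typically computed with Fleiss' kappa, which generalizes Cohen's kappa to more than two raters. The abstract above does not specify the exact statistic used, so the following is a hedged sketch of the standard Fleiss formulation, operating on an items × categories matrix of rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for an n_items x n_categories matrix of rating counts.

    Each row gives, for one item, how many raters assigned each category;
    every row must sum to the same number of raters r.
    """
    n = len(counts)          # number of items
    r = sum(counts[0])       # raters per item
    k = len(counts[0])       # number of categories
    # Mean per-item observed agreement
    p_bar = sum((sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts) / n
    # Chance agreement from overall category proportions
    p_j = [sum(row[j] for row in counts) / (n * r) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement split across two categories yields 1.0, while four raters splitting 2/2 on every item yields a negative kappa (agreement worse than chance), which illustrates why raw percent agreement and kappa can diverge.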
Smith, Justin D; Dishion, Thomas J; Brown, Kimbree; Ramos, Karina; Knoble, Naomi B; Shaw, Daniel S; Wilson, Melvin N
2016-01-01
The valid and reliable assessment of fidelity is critical at all stages of intervention research and is particularly germane to interpreting the results of efficacy and implementation trials. Ratings of protocol adherence typically are reliable, but ratings of therapist competence are plagued by low reliability. Because family context and case conceptualization guide the therapist's delivery of interventions, the reliability of fidelity ratings might be improved if the coder is privy to client context in the form of an ecological assessment. We conducted a randomized experiment to test this hypothesis. A subsample of 46 families with 5-year-old children from a multisite randomized trial who participated in the feedback session of the Family Check-Up (FCU) intervention were selected. We randomly assigned FCU feedback sessions to be rated for fidelity to the protocol using the COACH rating system either after the coder reviewed the results of a recent ecological assessment or had not. Inter-rater reliability estimates of fidelity ratings were meaningfully higher for the assessment information condition compared to the no-information condition. Importantly, the reliability of the COACH mean score was found to be statistically significantly higher in the information condition. These findings suggest that the reliability of observational ratings of fidelity, particularly when the competence or quality of delivery is considered, could be improved by providing assessment data to the coders. Our findings might be most applicable to assessment-driven interventions, where assessment data explicitly guides therapist's selection of intervention strategies tailored to the family's context and needs, but they could also apply to other intervention programs and observational coding of context-dependent therapy processes, such as the working alliance.
ERIC Educational Resources Information Center
Archer, Marc; Steele, Miriam; Lan, Jijun; Jin, Xiaochun; Herreros, Francisca; Steele, Howard
2015-01-01
The first distribution of Chinese infant-mother (n = 61) attachment classifications categorised by trained and reliability-tested coders is reported with statistical comparisons to US norms and previous Chinese distributions. Three-way distribution was 15% insecure-avoidant, 62% secure, 13% insecure-resistant, and 4-way distribution was 13%…
Urrutia, Julio; Zamora, Tomas; Campos, Mauricio; Yurac, Ratko; Palma, Joaquin; Mobarec, Sebastian; Prada, Carlos
2016-07-01
We performed an agreement study using two subaxial cervical spine classification systems: the AOSpine and the Allen and Ferguson (A&F) classifications. We sought to determine which scheme allows better agreement by different evaluators and by the same evaluator on different occasions. Complete imaging studies of 65 patients with subaxial cervical spine injuries were classified by six evaluators (three spine sub-specialists and three senior orthopaedic surgery residents) using the AOSpine subaxial cervical spine classification system and the A&F scheme. The cases were displayed in a random sequence after a 6-week interval for repeat evaluation. The Kappa coefficient (κ) was used to determine inter- and intra-observer agreement. Inter-observer: considering the main AO injury types, the agreement was substantial for the AOSpine classification [κ = 0.61 (0.57-0.64)]; using AO sub-types, the agreement was moderate [κ = 0.57 (0.54-0.60)]. For the A&F classification, the agreement [κ = 0.46 (0.42-0.49)] was significantly lower than using the AOSpine scheme. Intra-observer: the agreement was substantial considering injury types [κ = 0.68 (0.62-0.74)] and considering sub-types [κ = 0.62 (0.57-0.66)]. Using the A&F classification, the agreement was also substantial [κ = 0.66 (0.61-0.71)]. No significant differences were observed between spine surgeons and orthopaedic residents in the overall inter- and intra-observer agreement, or in the inter- and intra-observer agreement of specific type of injuries. The AOSpine classification (using the four main injury types or at the sub-types level) allows a significantly better agreement than the A&F classification. The A&F scheme does not allow reliable communication between medical professionals.
Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.
Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic
2014-06-01
In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally, and has not been validated before. The aim was to compare the Danish dichotomous mammographic density classification system from 1991 to 2001 with the BI-RADS density classifications, in an attempt to validate the Danish classification system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001 that tested false positive and were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linear weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The inter-rater agreement assessed by the weighted kappa statistic showed substantial agreement (0.75). The dichotomous mammographic density classification system utilized in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.
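The linear weighted kappa used above penalizes disagreements in proportion to their distance on an ordinal scale. A minimal sketch for the common case where both raters' classifications are expressed on the same k ordered categories coded 0..k-1 (the study above maps two different schemes onto a common scale; how it did so is not specified in the abstract):

```python
def weighted_kappa(r1, r2, k):
    """Linearly weighted Cohen's kappa for two raters over k ordered categories (0..k-1)."""
    n = len(r1)
    # Observed cross-classification counts
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)        # linear disagreement weight
            num += w * obs[i][j]
            den += w * row[i] * col[j] / n  # expected counts under independence
    return 1 - num / den
```

With k = 2 the linear weights reduce to 0/1 disagreement, so the function returns the ordinary unweighted Cohen's kappa.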
Reliability of a four-column classification for tibial plateau fractures.
Martínez-Rondanelli, Alfredo; Escobar-González, Sara Sofía; Henao-Alzate, Alejandro; Martínez-Cano, Juan Pablo
2017-09-01
A four-column classification system offers a different way of evaluating tibial plateau fractures. The aim of this study is to compare the intra-observer and inter-observer reliability of the four-column and classic classifications. This is a reliability study, which included patients presenting with tibial plateau fractures between January 2013 and September 2015 in a level-1 trauma centre. Four orthopaedic surgeons blindly classified each fracture according to four different classifications: AO, Schatzker, Duparc and four-column. Kappa, intra-observer and inter-observer concordance were calculated for the reliability analysis. Forty-nine patients were included. The mean age was 39 ± 14.2 years, with no gender predominance (men: 51%; women: 49%), and 67% of the fractures included at least one of the posterior columns. The intra-observer and inter-observer concordance were calculated for each classification: four-column (84%/79%), Schatzker (60%/71%), AO (50%/59%) and Duparc (48%/58%), with a statistically significant difference among them (p = 0.001/p = 0.003). Kappa coefficients for intra-observer and inter-observer evaluations were: Schatzker 0.48/0.39, four-column 0.61/0.34, Duparc 0.37/0.23, and AO 0.34/0.11. The proposed four-column classification showed the highest intra- and inter-observer agreement. When taking into account the agreement that occurs by chance, the Schatzker classification showed the highest inter-observer kappa, but the four-column again had the highest intra-observer kappa value. The proposed classification is a more inclusive classification for posteromedial and posterolateral fractures. We suggest, therefore, that it be used in addition to one of the classic classifications in order to better understand the fracture pattern, as it pays more attention to the posterior columns, improves surgical planning, and allows the surgical approach to be chosen more accurately.
Fainsinger, Robin L; Nekolaichuk, Cheryl L
2008-06-01
The purpose of this paper is to provide an overview of the development of a "TNM" cancer pain classification system for advanced cancer patients, the Edmonton Classification System for Cancer Pain (ECS-CP). Until we have a common international language to discuss cancer pain, understanding differences in clinical and research experience in opioid rotation and use remains problematic. The complexity of the cancer pain experience presents unique challenges for the classification of pain. To date, no universally accepted pain classification measure can accurately predict the complexity of pain management, particularly for patients with cancer pain that is difficult to treat. In response to this gap in clinical assessment, the Edmonton Staging System (ESS), a classification system for cancer pain, was developed. Difficulties in definitions and interpretation of some aspects of the ESS restricted acceptance and widespread use. Construct, inter-rater reliability, and predictive validity evidence have contributed to the development of the ECS-CP. The five features of the ECS-CP--Pain Mechanism, Incident Pain, Psychological Distress, Addictive Behavior and Cognitive Function--have demonstrated value in predicting pain management complexity. The development of a standardized classification system that is comprehensive, prognostic and simple to use could provide a common language for clinical management and research of cancer pain. An international study to assess the inter-rater reliability and predictive value of the ECS-CP is currently in progress.
Position measurement of the direct drive motor of Large Aperture Telescope
NASA Astrophysics Data System (ADS)
Li, Ying; Wang, Daxing
2010-07-01
Along with the development of space and astronomy science, the production of large aperture and super large aperture telescopes will become the trend. Direct drive technology, with a unified electrical and magnetic structure design, is one method of achieving precise drive for large aperture telescopes. A direct drive precision rotary table with a diameter of 2.5 meters, developed and produced by our group, is a typical mechatronic design. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement control system requires high resolution: it must precisely align and measure the position of the rotor shaft, while converting position information into commutation information corresponding to the required number of motor poles. The system uses a high precision metal band coder and an absolute coder; their outputs are processed in software by a 32-bit RISC CPU, yielding a high resolution composite coder. Relevant laboratory test results are given at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is subsidized by the Chinese National Natural Science Funds (10833004).
Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.
1982-04-01
This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial
Hallgren, Kevin A.
2012-01-01
Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
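As a companion to the tutorial described above, Cohen's kappa for two coders' parallel lists of nominal labels can be sketched in a few lines. The tutorial itself provides SPSS and R syntax; this Python version is an independent illustration, not code from the paper:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' parallel lists of nominal labels."""
    n = len(coder1)
    # Observed agreement: proportion of units where the coders match
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement from the two coders' marginal label distributions
    m1, m2 = Counter(coder1), Counter(coder2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in m1)
    return (p_o - p_e) / (1 - p_e)
```

For example, with coder1 = ["a","a","b","b"] and coder2 = ["a","b","b","b"], observed agreement is 0.75 and chance agreement is 0.50, giving kappa = 0.5; this is the correction for chance that raw percent agreement omits.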
How Dental Team Members describe Adverse Events
Maramaldi, Peter; Walji, Muhammad F.; White, Joel; Etoulu, Jini; Kahn, Maria; Vaderhobli, Ram; Kwatra, Japneet; Delattre, Veronique F.; Hebballi, Nutan B.; Stewart, Denice; Kent, Karla; Yansane, Alfa; Ramoni, Rachel B.; Kalenderian, Elsbeth
2016-01-01
Background There is increased recognition that patients suffer adverse events (AEs) or harm caused by treatments in dentistry, and little is known about how dental providers describe these events. Understanding how providers view AEs is essential to building a safer environment in dental practice. Methods Dental providers and domain experts were interviewed through focus groups and in-depth interviews and asked to identify the types of AEs that may occur in dental settings. Results The first order listing of the interview and focus group findings yielded 1,514 items that included both causes and AEs. 632 causes were coded into one of the eight categories of the Eindhoven classification. 882 AEs were coded into 12 categories of a newly developed dental AE classification. Inter-rater reliability was moderate among coders. The list was reanalyzed and duplicate items were removed leaving a total of 747 unique AEs and 540 causes. The most frequently identified AE types were “Aspiration/ingestion” at 14% (n=142), “Wrong-site, wrong-procedure, wrong-patient errors” at 13%, “Hard tissue damage” at 13%, and “Soft tissue damage” at 12%. Conclusions Dental providers identified a large and diverse list of AEs. These events ranged from “death due to cardiac arrest” to “jaw fatigue from lengthy procedures”. Practical Implications Identifying threats to patient safety is a key element of improving dental patient safety. An inventory of dental AEs underpins efforts to track, prevent, and mitigate these events. PMID:27269376
Urrutia, Julio; Besa, Pablo; Campos, Mauricio; Cikutovic, Pablo; Cabezon, Mario; Molina, Marcelo; Cruz, Juan Pablo
2016-09-01
Grading inter-vertebral disc degeneration (IDD) is important in the evaluation of many degenerative conditions, including patients with low back pain. Magnetic resonance imaging (MRI) is considered the best imaging instrument to evaluate IDD. The Pfirrmann classification is commonly used to grade IDD; the authors describing this classification showed an adequate agreement using it; however, there has been a paucity of independent agreement studies using this grading system. The aim of this study was to perform an independent inter- and intra-observer agreement study using the Pfirrmann classification. T2-weighted sagittal images of 79 patients consecutively studied with lumbar spine MRI were classified using the Pfirrmann grading system by six evaluators (three spine surgeons and three radiologists). After a 6-week interval, the 79 cases were presented to the same evaluators in a random sequence for repeat evaluation. The intra-class correlation coefficient (ICC) and the weighted kappa (wκ) were used to determine the inter- and intra-observer agreement. The inter-observer agreement was excellent, with an ICC = 0.94 (0.93-0.95) and wκ = 0.83 (0.74-0.91). There were no differences between spine surgeons and radiologists. Likewise, there were no differences in agreement evaluating the different lumbar discs. Most differences among observers were only of one grade. Intra-observer agreement was also excellent with ICC = 0.86 (0.83-0.89) and wκ = 0.89 (0.85-0.93). In this independent study, the Pfirrmann classification demonstrated an adequate agreement among different observers and by the same observer on separate occasions. Furthermore, it allows communication between radiologists and spine surgeons.
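The weighted kappa (wκ) reported above credits partial agreement on an ordinal scale such as Pfirrmann grades, penalizing one-grade disagreements less than larger ones. A minimal linear-weighted implementation in Python (illustrative only, with made-up grades):

```python
def weighted_kappa(r1, r2, k, linear=True):
    """Weighted kappa for two raters over ordinal grades 0..k-1.
    Linear weights penalize a disagreement by |i - j|."""
    n = len(r1)
    # disagreement weights, observed confusion matrix, and marginals
    w = [[abs(i - j) if linear else (i - j) ** 2 for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1 / n
    m1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * m1[i] * m2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Pfirrmann-style grades 1-5 mapped to 0-4; disagreements are one grade apart
g1 = [0, 1, 2, 2, 3, 4, 1, 2, 3, 4]
g2 = [0, 1, 2, 3, 3, 4, 1, 1, 3, 4]
print(round(weighted_kappa(g1, g2, k=5), 3))  # → 0.863
```

Because the abstract notes that most observer differences were only one grade, a weighted statistic such as this yields substantially higher agreement than unweighted kappa would on the same ratings.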
Reliability of intracerebral hemorrhage classification systems: A systematic review.
Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm
2016-08-01
Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
A 4.8 kbps code-excited linear predictive coder
NASA Technical Reports Server (NTRS)
Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.
1988-01-01
A secure voice system, STU-3, capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. Although the performance of the present STU-3 processor is considered good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented by a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. This paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.
Filing Reprints: Can Office Staff Help?
Putnam, R. W.; Gass, D. A.; Curry, Lynn
1985-01-01
Filing systems for reprints must be tailored to the individual's practice profile to maximize usefulness as a resource for clinical problem solving. However, the clerical time involved often reduces the physician's ability to maintain such a filing system. The authors tested the hypothesis that, using the International Classification of Health Problems in Primary Care (ICHPPC), nurses or receptionists could code, cross-reference, and file reprints after the physician has selected the articles. Contents pages of five primary care journals were given to two academic family physicians, two practicing physicians, a research assistant, and two receptionists, one of whom had used ICHPPC to record patient encounters. All coders except the second receptionist, who was unfamiliar with ICHPPC, reached good agreement in coding. Filing reprints may therefore be done by trained staff for groups of physicians. PMID:21274020
Schwab, Fabienne; Redling, Katharina; Siebert, Matthias; Schötzau, Andy; Schoenenberger, Cora-Ann; Zanetti-Dällenbach, Rosanna
2016-11-01
Our aim was to prospectively evaluate inter- and intra-observer agreement between Breast Imaging Reporting and Data System (BI-RADS) classifications and Tsukuba elasticity scores (TSs) of breast lesions. The study included 164 breast lesions (63 malignant, 101 benign). The BI-RADS classification and TS of each breast lesion was assessed by the examiner and twice by three reviewers at an interval of 2 months. Weighted κ values for inter-observer agreement ranged from moderate to substantial for BI-RADS classification (κ = 0.585-0.738) and was substantial for TS (κ = 0.608-0.779). Intra-observer agreement was almost perfect for ultrasound (US) BI-RADS (κ = 0.847-0.872) and TS (κ = 0.879-0.914). Overall, individual reviewers are highly self-consistent (almost perfect intra-observer agreement) with respect to BI-RADS classification and TS, whereas inter-observer agreement was moderate to substantial. Comprehensive training is essential for achieving high agreement and minimizing the impact of subjectivity. Our results indicate that breast US and real-time elastography can achieve high diagnostic performance. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Validation of the one pass measure for motivational interviewing competence.
McMaster, Fiona; Resnicow, Ken
2015-04-01
This paper examines the psychometric properties of the OnePass coding system: a new, user-friendly tool for evaluating practitioner competence in motivational interviewing (MI). We provide data on reliability and validity against the current gold standard, the Motivational Interviewing Treatment Integrity tool (MITI). We compared scores from 27 videotaped MI sessions performed by student counselors trained in MI and simulated patients, using both OnePass and the MITI with three different raters for each tool. Reliability was estimated using intra-class correlation coefficients (ICCs), and validity was assessed using Pearson's r. OnePass had high levels of inter-rater reliability, with 19 of 23 items showing substantial to almost perfect agreement. Taking the pair of scores with the highest inter-rater reliability on the MITI, the concurrent validity between the two measures ranged from moderate to high. Validity was highest for evocation, autonomy, direction, and empathy. OnePass appears to have good inter-rater reliability while capturing similar dimensions of MI as the MITI. Despite the moderate concurrent validity with the MITI, OnePass shows promise in evaluating both traditional and novel interpretations of MI. OnePass may be a useful tool for developing and improving practitioner competence in MI where access to MITI coders is limited. Copyright © 2015. Published by Elsevier Ireland Ltd.
The risk of upcoding in casemix systems: a comparative study.
Steinbusch, Paul J M; Oostenbrink, Jan B; Zuurbier, Joost J; Schaepkens, Frans J M
2007-05-01
With the introduction of a diagnosis related group (DRG) classification system in the Netherlands in 2005, it has become relevant to investigate the risk of upcoding. The problem of upcoding in the US casemix system is substantial. In 2004, the US Centers for Medicare and Medicaid estimated that the total amount of improper Medicare payments under the Prospective Payment System for acute inpatient care (both short term and long term) came to US$ 4.8 billion (5.2%). By comparing the casemix systems in the US, Australian and Dutch healthcare systems, this article illustrates why certain casemix systems are more open to the risk of upcoding than others. This study identifies various market, control and casemix characteristics that determine the vulnerability of a casemix reimbursement system to upcoding. It can be concluded that fewer opportunities for upcoding occur in casemix systems that do not allow for-profit ownership and in which the coder's salary does not depend on the outcome of the classification process. In addition, casemix systems in which the first point in time of registration is at the beginning of the care process and in which there are a limited number of occasions to alter the registration are less vulnerable to the risk of upcoding. Finally, the risk of upcoding is smaller in casemix systems that use classification criteria that are medically meaningful and aligned with clinical practice. Comparing the US, Australian and Dutch systems, the following conclusions can be drawn. Given the combined occurrence of for-profit hospitals and the use of the secondary diagnosis criterion to classify DRGs, the US casemix system tends to be more open to upcoding than the Australian system. The strength of the Dutch system is related to the detailed classification scheme, using medically meaningful classification criteria.
Nevertheless, the detailed classification scheme also causes a weakness, because of its increased complexity compared with the US and Australian system. It is recommended that researchers and policy makers carefully consider all relevant market, control and casemix characteristics when developing and restructuring casemix reimbursement systems.
An Evolving Ecosystem for Natural Language Processing in Department of Veterans Affairs.
Garvin, Jennifer H; Kalsy, Megha; Brandt, Cynthia; Luther, Stephen L; Divita, Guy; Coronado, Gregory; Redd, Doug; Christensen, Carrie; Hill, Brent; Kelly, Natalie; Treitler, Qing Zeng
2017-02-01
In an ideal clinical Natural Language Processing (NLP) ecosystem, researchers and developers would be able to collaborate with others, undertake validation of NLP systems, components, and related resources, and disseminate them. We captured requirements and formative evaluation data from the Veterans Affairs (VA) Clinical NLP Ecosystem stakeholders using semi-structured interviews and meeting discussions. We developed a coding rubric to code interviews. We assessed inter-coder reliability using percent agreement and the kappa statistic. We undertook 15 interviews and held two workshop discussions. The main areas of requirements related to design and functionality, resources, and information. Stakeholders also confirmed the vision of the second generation of the Ecosystem, and recommendations included adding mechanisms to better understand terms, measuring collaboration to demonstrate value, and providing datasets/tools to navigate spelling errors with consumer language, among others. Stakeholders also recommended the capability to communicate with developers working on the next version of the VA electronic health record (VistA Evolution), a mechanism to automatically monitor downloads of tools, and an automatic summary of the downloads for Ecosystem contributors and funders. After three rounds of coding and discussion, we determined the percent agreement of two coders to be 97.2% and the kappa to be 0.7851. The vision of the VA Clinical NLP Ecosystem met stakeholder needs. Interviews and discussion provided key requirements that inform the design of the VA Clinical NLP Ecosystem.
Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu
2014-01-01
Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
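The multi-classifier ensemble described above combines three pairwise classifiers (NORM-vs-LBBB, NORM-vs-RBBB, LBBB-vs-RBBB) through majority voting over their class labels. The voting step can be sketched in Python as follows; the threshold "classifiers" here are toy stand-ins on a one-feature beat, not the paper's minimum-distance, discriminant, and SVM models:

```python
from collections import Counter

def ensemble_label(beat, clf_nl, clf_nr, clf_lr):
    """Assign a heartbeat class by majority vote over three pairwise
    classifiers (each a callable returning a class label)."""
    votes = Counter([clf_nl(beat), clf_nr(beat), clf_lr(beat)])
    return votes.most_common(1)[0][0]

# Hypothetical stand-in classifiers operating on a single-feature "beat"
clf_nl = lambda b: "LBBB" if b > 0.5 else "NORM"   # NORM vs LBBB
clf_nr = lambda b: "RBBB" if b < -0.5 else "NORM"  # NORM vs RBBB
clf_lr = lambda b: "LBBB" if b > 0 else "RBBB"     # LBBB vs RBBB
print(ensemble_label(0.8, clf_nl, clf_nr, clf_lr))   # two of three vote LBBB
print(ensemble_label(-0.9, clf_nl, clf_nr, clf_lr))  # two of three vote RBBB
```

With three pairwise classifiers over three classes, at most one classifier is "off-topic" for any true class, so a clean majority usually exists; a three-way tie would fall back to Counter's insertion order in this sketch.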
Rickard, Mandy; Easterbrook, Bethany; Kim, Soojin; Farrokhyar, Forough; Stein, Nina; Arora, Steven; Belostotsky, Vladamir; DeMaria, Jorge; Lorenzo, Armando J; Braga, Luis H
2017-02-01
The urinary tract dilation (UTD) classification system was introduced to standardize terminology in the reporting of hydronephrosis (HN) and to bridge a gap between pre- and postnatal classifications such as the Society for Fetal Urology (SFU) grading system. Herein we compare the intra/inter-rater reliability of both grading systems. SFU (I-IV) and UTD (I-III) grades were independently assigned by 13 raters (9 pediatric urology staff, 2 nephrologists, 2 radiologists), twice, 3 weeks apart, to 50 sagittal postnatal ultrasonographic views of hydronephrotic kidneys. Data regarding ureteral measurements and bladder abnormalities were included to allow proper UTD categorization. Ten images were repeated to assess intra-rater reliability. Krippendorff's alpha coefficient was used to measure overall and by-grade intra/inter-rater reliability. Reliability between specialties and training levels was also analyzed. Overall inter-rater reliability was slightly higher for SFU (α = 0.842, 95% CI 0.812-0.879, in session 1; and α = 0.808, 95% CI 0.775-0.839, in session 2) than for UTD (α = 0.774, 95% CI 0.715-0.827, in session 1; and α = 0.679, 95% CI 0.605-0.750, in session 2). Reliability for intermediate grades (SFU II/III and UTD 2) of HN was poor regardless of the system. Reliabilities for SFU and UTD classifications among Urology, Nephrology, and Radiology, as well as between training levels, were not significantly different. Despite the introduction of HN grading systems to standardize the interpretation and reporting of renal ultrasound in infants with HN, none has been proven superior in allowing clinicians to distinguish between "moderate" grades. While this study demonstrated high reliability in distinguishing between "mild" (SFU I/II and UTD 1) and "severe" (SFU IV and UTD 3) grades of HN, the overall reliability between specialties was poor. This is in keeping with a previous report of modest inter-rater reliability of the SFU system.
This drawback is likely explained by the subjective interpretation required to assign grades, which can be affected by experience, image quality, and scanning technique. As shown in the figure, which demonstrates SFU II (a) and SFU III (b) as assigned by a radiologist, it can be argued that either image could be placed in either of the two categories observed during the grading sessions of this study. Although both systems have acceptable reliability, the SFU grading system showed higher overall intra/inter-rater reliability, regardless of rater specialty, than the UTD classification. Inter-rater reliability for SFU grades II/III and UTD 2 was low, highlighting the limitations of both classifications with regard to properly segregating moderate HN grades. Copyright © 2016 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Plante, Jarrad D.; Cox, Thomas D.
2016-01-01
Service-learning has a longstanding history in higher education and includes three main tenets: academic learning, meaningful community service, and civic learning. The Carnegie Foundation for the Advancement of Teaching created an elective classification system called the Carnegie Community Engagement Classification for higher education…
Urrutia, Julio; Zamora, Tomas; Klaber, Ianiv; Carmona, Maximiliano; Palma, Joaquin; Campos, Mauricio; Yurac, Ratko
2016-04-01
It has been postulated that the complex patterns of spinal injuries have prevented adequate agreement using thoraco-lumbar spinal injuries (TLSI) classifications; however, limb fracture classifications have also shown variable agreements. This study compared agreement using two TLSI classifications with agreement using two classifications of fractures of the trochanteric area of the proximal femur (FTAPF). Six evaluators classified the radiographs and computed tomography scans of 70 patients with acute TLSI using the Denis and the new AO Spine thoraco-lumbar injury classifications. Additionally, six evaluators classified the radiographs of 70 patients with FTAPF using the Tronzo and the AO schemes. Six weeks later, all cases were presented in a random sequence for repeat assessment. The Kappa coefficient (κ) was used to determine agreement. Inter-observer agreement: For TLSI, using the AOSpine classification, the mean κ was 0.62 (0.57-0.66) considering fracture types, and 0.55 (0.52-0.57) considering sub-types; using the Denis classification, κ was 0.62 (0.59-0.65). For FTAPF, with the AO scheme, the mean κ was 0.58 (0.54-0.63) considering fracture types and 0.31 (0.28-0.33) considering sub-types; for the Tronzo classification, κ was 0.54 (0.50-0.57). Intra-observer agreement: For TLSI, using the AOSpine scheme, the mean κ was 0.77 (0.72-0.83) considering fracture types, and 0.71 (0.67-0.76) considering sub-types; for the Denis classification, κ was 0.76 (0.71-0.81). For FTAPF, with the AO scheme, the mean κ was 0.75 (0.69-0.81) considering fracture types and 0.45 (0.39-0.51) considering sub-types; for the Tronzo classification, κ was 0.64 (0.58-0.70). Using the main types of AO classifications, inter- and intra-observer agreement of TLSI were comparable to agreement evaluating FTAPF; including sub-types, inter- and intra-observer agreement evaluating TLSI were significantly better than assessing FTAPF. 
Inter- and intra-observer agreements using the Denis classification were also significantly better than agreement using the Tronzo scheme. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mikkelsen, Kim Lyngby; Thommesen, Jacob; Andersen, Henning Boje
2013-01-01
Objectives Validation of a Danish patient safety incident classification adapted from the World Health Organization's International Classification for Patient Safety (ICPS-WHO). Design Thirty-three hospital safety management experts classified 58 safety incident cases selected to represent all types and subtypes of the Danish adaptation of the ICPS (ICPS-DK). Outcome Measures Two measures of inter-rater agreement: kappa and intra-class correlation (ICC). Results The average number of incident types used per case per rater was 2.5. The mean ICC was 0.521 (range: 0.199–0.809) and the mean kappa was 0.513 (range: 0.193–0.804). Kappa and ICC were highly correlated (r = 0.99). An inverse correlation was found between the prevalence of a type and its inter-rater reliability. Results are discussed according to four factors known to determine inter-rater agreement: the skill and motivation of raters; the clarity of case descriptions; the clarity of the operational definitions of the types and the instructions guiding the coding process; and the adequacy of the underlying classification scheme. Conclusions The incident types of the ICPS-DK are adequate, exhaustive, and well suited for classifying and structuring incident reports. With a mean kappa a little above 0.5, the inter-rater agreement of the classification system is considered ‘fair’ to ‘good’. The wide variation in inter-rater reliability, and the low reliability and poor discrimination among the highly prevalent incident types, suggest that for these types precisely defined incident sub-types may be preferred. This evaluation of the reliability and usability of WHO's ICPS should be useful for healthcare administrations that are considering or are in the process of adapting the ICPS. PMID:23287641
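The study above reports both kappa and the intra-class correlation on the same ratings. For reference, a one-way random-effects ICC(1,1) can be sketched in Python as follows (an illustrative implementation with made-up data, not the study's code):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-targets x k-raters table."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-target and within-target mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Four incidents each scored by two raters (hypothetical data)
print(round(icc_oneway([[1, 2], [2, 1], [3, 4], [4, 3]]), 3))  # → 0.684
```

The ICC expresses the share of total variance attributable to differences between targets rather than between raters, which is why it tracks kappa so closely when both are computed on the same classification data.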
The Development of a Checklist to Enhance Methodological Quality in Intervention Programs.
Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2016-01-01
The methodological quality of primary studies is an important issue when performing meta-analyses or systematic reviews. Nevertheless, there are no clear criteria for how methodological quality should be analyzed. Controversies emerge when considering the various theoretical and empirical definitions, especially in relation to three interrelated problems: the lack of representativeness, utility, and feasibility. In this article, we (a) systematize and summarize the available literature about methodological quality in primary studies; (b) propose a specific, parsimonious, 12-item checklist to empirically define the methodological quality of primary studies based on a content validity study; and (c) present an inter-coder reliability study for the resulting 12 items. This paper provides a precise and rigorous description of the development of this checklist, highlighting the clearly specified criteria for the inclusion of items and a substantial inter-coder agreement on the different items. Rather than simply proposing another checklist, however, it then argues that the list constitutes an assessment tool with respect to the representativeness, utility, and feasibility of the most frequent methodological quality items in the literature, one that provides practitioners and researchers with clear criteria for choosing items that may be adequate to their needs. We propose individual methodological features as indicators of quality, arguing that these need to be taken into account when designing, implementing, or evaluating an intervention program. This enhances the methodological quality of intervention programs and fosters cumulative knowledge based on meta-analyses of these interventions. Future development of the checklist is discussed.
Design and performance of an analysis-by-synthesis class of predictive speech coders
NASA Technical Reports Server (NTRS)
Rose, Richard C.; Barnwell, Thomas P., III
1990-01-01
The performance of a broad class of analysis-by-synthesis linear predictive speech coders is quantified experimentally. The class of coders includes a number of well-known techniques as well as a very large number of speech coders which have not been named or studied. A general formulation for deriving the parametric representation used in all of the coders in the class is presented. A new coder, named the self-excited vocoder, is discussed because of its good performance with low complexity, and because of the insight this coder gives to analysis-by-synthesis coders in general. The results of a study comparing the performances of different members of this class are presented. The study takes the form of a series of formal subjective and objective speech quality tests performed on selected coders. The results of this study lead to some interesting and important observations concerning the controlling parameters for analysis-by-synthesis speech coders.
Soleymani, Zahra; Joveini, Ghodsiye; Baghestani, Ahmad Reza
2015-03-01
This study developed a Farsi language Communication Function Classification System and then tested its reliability and validity. Communication Function Classification System is designed to classify the communication functions of individuals with cerebral palsy. Up until now, there has been no instrument for assessment of this communication function in Iran. The English Communication Function Classification System was translated into Farsi and cross-culturally modified by a panel of experts. Professionals and parents then assessed the content validity of the modified version. A backtranslation of the Farsi version was confirmed by the developer of the English Communication Function Classification System. Face validity was assessed by therapists and parents of 10 patients. The Farsi Communication Function Classification System was administered to 152 individuals with cerebral palsy (age, 2 to 18 years; median age, 10 years; mean age, 9.9 years; standard deviation, 4.3 years). Inter-rater reliability was analyzed between parents, occupational therapists, and speech and language pathologists. The test-retest reliability was assessed for 75 patients with a 14 day interval between tests. The inter-rater reliability of the Communication Function Classification System was 0.81 between speech and language pathologists and occupational therapists, 0.74 between parents and occupational therapists, and 0.88 between parents and speech and language pathologists. The test-retest reliability was 0.96 for occupational therapists, 0.98 for speech and language pathologists, and 0.94 for parents. The findings suggest that the Farsi version of Communication Function Classification System is a reliable and valid measure that can be used in clinical settings to assess communication function in patients with cerebral palsy. Copyright © 2015 Elsevier Inc. All rights reserved.
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised coding generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to a £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Yoshida, Masahito; Collin, Phillipe; Josseaume, Thierry; Lädermann, Alexandre; Goto, Hideyuki; Sugimoto, Katumasa; Otsuka, Takanobu
2018-01-01
Magnetic resonance (MR) imaging is common in structural and qualitative assessment of the rotator cuff post-operatively. Rotator cuff integrity has been thought to be associated with clinical outcome. The purpose of this study was to evaluate the inter-observer reliability of cuff integrity (Sugaya's classification) and assess the correlation between Sugaya's classification and the clinical outcome. It was hypothesized that Sugaya's classification would show good reliability and good correlation with the clinical outcome. MR images were taken two years post-operatively, following arthroscopic rotator cuff repair. For assessment of inter-rater reliability, all radiographic evaluations of the supraspinatus muscle were done by two orthopaedic surgeons and one radiologist. Rotator cuff integrity was classified into five categories, according to Sugaya's classification. Fatty infiltration was graded into four categories, based on Fuchs' classification grading system. Muscle hypotrophy was graded into four grades, according to the scale proposed by Warner. The clinical outcome was assessed according to the Constant scoring system pre-operatively and 2 years post-operatively. Of the sixty-two consecutive patients with full-thickness rotator cuff tears, fifty-two patients were reviewed in this study. These subjects included twenty-three men and twenty-nine women, with an average age of fifty-seven years. In terms of the inter-rater reliability between orthopaedic surgeons, Sugaya's classification showed the highest agreement [ICC (2.1) = 0.82] for rotator cuff integrity. The grades of fatty infiltration and muscle atrophy demonstrated good agreement (0.722 and 0.758, respectively). With regard to the inter-rater reliability between orthopaedic surgeon and radiologist, Sugaya's classification showed good reliability [ICC (2.1) = 0.70]. 
On the other hand, the fatty infiltration and muscle hypotrophy classifications demonstrated only fair and moderate agreement, respectively [ICC (2.1) = 0.39 and 0.49]. Although no significant correlation was found between the overall post-operative Constant score and Sugaya's classification, Sugaya's classification correlated significantly with the muscle strength score. Sugaya's classification showed good repeatability and agreement between the orthopaedic surgeons and the radiologist involved in the care of patients with rotator cuff tears. A common classification of rotator cuff integrity with good reliability will give clinicians appropriate information to improve the care of patients with rotator cuff tears. This classification would also be helpful in predicting the strength of arm abduction in the scapular plane. Level of evidence: IV.
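The inter-rater figures above are intraclass correlation coefficients. As a minimal sketch (not the study's analysis), ICC(2,1) in the Shrout and Fleiss scheme (two-way random effects, absolute agreement, single rater) can be computed from an n-targets-by-k-raters matrix of ratings; the grades below are hypothetical:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss). `ratings` is an (n targets x k raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    mean_t = ratings.mean(axis=1)   # per-target means
    mean_r = ratings.mean(axis=0)   # per-rater means
    grand = ratings.mean()
    msr = k * np.sum((mean_t - grand) ** 2) / (n - 1)   # between targets
    msc = n * np.sum((mean_r - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - mean_t[:, None] - mean_r[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                      # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical cuff-integrity grades from three raters on five shoulders.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
print(icc_2_1(scores))  # identical raters -> 1.0
```

With raters that disagree, the residual and rater mean squares grow and the coefficient drops below 1, mirroring the 0.39-0.82 spread reported above.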
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
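The abstract does not specify the model; as a rough sketch, one common weakly informative choice is a zero-mean Gaussian prior on the logistic regression weights, whose MAP estimate is equivalent to L2-regularized logistic regression and can be found by gradient ascent on the log-posterior. All data, feature names, and hyperparameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic episode features (in practice, coder id, day of coding, and
# original DRG would be encoded here); label = "DRG needed revision".
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

def map_logistic(X, y, prior_sd=2.5, lr=0.1, steps=2000):
    """MAP estimate of logistic regression weights under a zero-mean
    Gaussian prior (a weakly informative prior; equivalent to L2
    regularization with strength 1/prior_sd**2)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (y - p) - w / prior_sd ** 2  # log-posterior gradient
        w += lr * grad / len(y)
    return w

w_hat = map_logistic(X, y)
acc = np.mean((1 / (1 + np.exp(-X @ w_hat)) > 0.5) == (y == 1))
```

The prior term shrinks the weights toward zero, which is one mechanism behind the parameter stability the study reports relative to unpenalized maximum likelihood.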
Lesko, Mehdi M; Woodford, Maralyn; White, Laura; O'Brien, Sarah J; Childs, Charmaine; Lecky, Fiona E
2010-08-06
Background: The purpose of the Abbreviated Injury Scale (AIS) is to code various types of Traumatic Brain Injury (TBI) based on their anatomical location and severity. The Marshall CT Classification is used to identify those subgroups of brain-injured patients at higher risk of deterioration or mortality. The purpose of this study is to determine whether and how AIS coding can be translated to the Marshall Classification. Methods: Initially, a Marshall Class was allocated to each AIS code through cross-tabulation. This was agreed upon through several discussion meetings with experts from both fields (clinicians and AIS coders). Furthermore, in order to make this translation possible, some necessary assumptions with regard to the coding and classification of mass lesions and brain swelling were essential; these were all approved and made explicit. Results: The proposed method involves two stages: first, to determine all possible Marshall Classes which a given patient can attract based on the allocated AIS codes, via cross-tabulation; and second, to assign one Marshall Class to each patient through an algorithm. Conclusion: This method can be easily programmed in computer software and would enable future important TBI research programs using trauma registry data. PMID:20691038
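The two-stage procedure (cross-tabulation of candidate classes, then algorithmic resolution to a single class) can be sketched as follows. The AIS codes, the mapping table, and the worst-candidate resolution rule are all hypothetical stand-ins, not the published cross-tabulation or algorithm:

```python
# Hypothetical cross-tabulation: each AIS code maps to the set of Marshall
# classes it could attract (codes and class sets are illustrative only).
AIS_TO_MARSHALL = {
    "140650": {2},       # e.g. a small contusion
    "140684": {2, 3},    # e.g. swelling, cisterns possibly compressed
    "140678": {5, 6},    # e.g. a mass lesion, evacuated or not
}

def marshall_class(ais_codes):
    """Stage 1: collect all candidate Marshall Classes from the patient's
    AIS codes. Stage 2: resolve to one class -- here, the most severe
    candidate (a simplifying assumption, not necessarily the paper's rule)."""
    candidates = set()
    for code in ais_codes:
        candidates |= AIS_TO_MARSHALL.get(code, set())
    return max(candidates) if candidates else None

print(marshall_class(["140650", "140684"]))  # -> 3
```

A lookup-plus-rule structure like this is straightforward to run over an entire trauma registry, which is the practical point the conclusion makes.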
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rates and under poor channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Stroke subtyping for genetic association studies? A comparison of the CCS and TOAST classifications.
Lanfranconi, Silvia; Markus, Hugh S
2013-12-01
A reliable and reproducible classification system of stroke subtype is essential for epidemiological and genetic studies. The Causative Classification of Stroke system is an evidence-based computerized algorithm with excellent inter-rater reliability. It has been suggested that, compared to the Trial of ORG 10172 in Acute Stroke Treatment classification, it increases the proportion of cases with a defined subtype, which may increase power in genetic association studies. We compared the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications in a large cohort of well-phenotyped stroke patients. Six hundred ninety consecutively recruited patients with first-ever ischemic stroke were classified, using review of clinical data and original imaging, according to the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications. There was excellent agreement between the subtypes assigned by the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems (kappa = 0·85). The agreement was excellent for the major individual subtypes: large artery atherosclerosis kappa = 0·888, small-artery occlusion kappa = 0·869, cardiac embolism kappa = 0·89, and undetermined category kappa = 0·884. There was only moderate agreement (kappa = 0·41) for subjects with at least two competing underlying mechanisms. Thirty-five (5·8%) patients classified as undetermined by Trial of ORG 10172 in Acute Stroke Treatment were assigned to a definite subtype by the Causative Classification of Stroke system. Thirty-two subjects assigned to a definite subtype by Trial of ORG 10172 in Acute Stroke Treatment were classified as undetermined by the Causative Classification of Stroke system. 
There is excellent agreement between classification using Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems but no evidence that Causative Classification of Stroke system reduced the proportion of patients classified to undetermined subtypes. The excellent inter-rater reproducibility and web-based semiautomated nature make Causative Classification of Stroke system suitable for multicenter studies, but the benefit of reclassifying cases already classified using the Trial of ORG 10172 in Acute Stroke Treatment system on existing databases is likely to be small. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
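Several records in this set report Cohen's kappa as the agreement statistic. As a minimal illustration (with made-up subtype labels, not data from the study above), kappa is the observed agreement between two raters corrected for the agreement expected by chance from their marginal label frequencies:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters assigning one category per item."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from the two raters' marginal distributions.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b.get(c, 0) for c in count_a) / n ** 2
    if p_e == 1.0:  # degenerate case: a single category everywhere
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical subtype labels from two classification systems.
toast = ["LAA", "SAO", "CE", "UND", "LAA", "CE"]
ccs   = ["LAA", "SAO", "CE", "LAA", "LAA", "CE"]
print(round(cohens_kappa(toast, ccs), 3))  # -> 0.76
```

Note this is the standard one-to-one protocol; the one-to-many setting requires an extension such as the fuzzy kappa discussed elsewhere in this collection.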
Development and Reliability Testing of the FEDS System for Classifying Glenohumeral Instability
Kuhn, John E.; Helmer, Tara T.; Dunn, Warren R.; Throckmorton V, Thomas W.
2010-01-01
Background Classification systems for glenohumeral instability (GHI) are opinion based, not validated, and poorly defined. This study is designed to methodologically develop and test a GHI classification system. Methods: Classification System Development A systematic literature review identified 18 systems for classifying GHI. The frequency of the characteristics used was recorded. Additionally, 31 members of the American Shoulder and Elbow Surgeons responded to a survey to identify features important to characterize GHI. Frequency, Etiology, Direction, and Severity (FEDS) were found to be most important. Frequency was defined as solitary (one episode), occasional (2–5x/year), or frequent (>5x/year). Etiology was defined as traumatic or atraumatic. Direction referred to the primary direction of instability (anterior, posterior, or inferior). Severity was defined as either subluxation or dislocation. Methods: Reliability Testing Fifty GHI patients completed a questionnaire at their initial visit. One of six sports medicine fellowship-trained physicians completed a similar questionnaire after examining the patient. Patients returned after two weeks and were examined by the original physician and two other physicians. Inter- and intra-rater agreement for the FEDS classification system was calculated. Results Agreement between patients and physicians was lowest for frequency (39%; k=0.130) and highest for direction (82%; k=0.636). Physician intra-rater agreement was 84–97% for the individual FEDS characteristics (k=0.69 to 0.87). Physician inter-rater agreement ranged from 82–90% (k=0.44 to 0.76). Conclusions The FEDS system has content validity and is highly reliable for classifying GHI. Physical examination using provocative testing to determine the primary direction of instability produces very high levels of inter- and intra-rater agreement. Level of evidence Level II, Development of Diagnostic Criteria with Consecutive Series of Patients, Diagnosis Study. PMID:21277809
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
NASA Technical Reports Server (NTRS)
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
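As an illustrative sketch of the statistical-coding stage only: quantized subband coefficients are highly redundant byte streams, which a Lempel-Ziv-family coder exploits. Python's stdlib `zlib` (DEFLATE, built on LZ77) stands in for the patent's Lempel-Ziv-based coder, and the coefficient data below is fabricated:

```python
import zlib

# Hypothetical quantized subband coefficients: the low-frequency subband
# of a smooth signal quantizes to long runs of similar small values,
# exactly the redundancy an LZ77-style coder compresses well.
coeffs = bytes([0, 0, 1, 1, 1, 0, 0, 0, 2, 2] * 100)

packed = zlib.compress(coeffs, level=9)   # statistical (entropy) stage
assert zlib.decompress(packed) == coeffs  # the stage itself is lossless
print(len(coeffs), "->", len(packed), "bytes")
```

Any loss in such a pipeline comes from the quantization step before this stage; the Lempel-Ziv coding and decoding round-trip is exact.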
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit-stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).
Objective speech quality evaluation of real-time speech coders
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Huggins, A. W. F.
1984-02-01
This report describes the work performed in two areas: subjective testing of a real-time 16 kbit/s adaptive predictive coder (APC) and objective speech quality evaluation of real-time coders. The speech intelligibility of the APC coder was tested using the Diagnostic Rhyme Test (DRT), and the speech quality was tested using the Diagnostic Acceptability Measure (DAM) test, under eight operating conditions involving channel error, acoustic background noise, and tandem link with two other coders. The test results showed that the DRT and DAM scores of the APC coder equalled or exceeded the corresponding test scores of the 32 kbit/s CVSD coder. In the area of objective speech quality evaluation, the report describes the development, testing, and validation of a procedure for automatically computing several objective speech quality measures, given only the tape-recordings of the input speech and the corresponding output speech of a real-time speech coder.
2013-09-01
Master of Science in Human Systems Integration thesis, Naval Postgraduate School, September 2013. Author: Jason Bilbro.
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
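The offline evaluation above rests on k-fold cross-validation of single-trial classifiers. A self-contained sketch with synthetic two-class features (standing in for, e.g., CSP log-variance features) and a deliberately simple nearest-centroid classifier rather than the classifiers compared in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic single-trial features: two classes ("left" vs "right" MI)
# with shifted means, 50 trials each, 4 features per trial.
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.repeat([0, 1], 50)

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Class centroids estimated on the training folds only.
        cents = np.array([X[train][y[train] == c].mean(axis=0)
                          for c in (0, 1)])
        dists = np.linalg.norm(X[test][:, None, :] - cents[None, :, :],
                               axis=2)
        pred = np.argmin(dists, axis=1)
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

print(kfold_accuracy(X, y))
```

Fitting the classifier inside each fold, never on the held-out trials, is what makes the reported accuracies honest estimates of out-of-sample performance.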
NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.
Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S
2016-01-14
Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
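The greedy matching strategy mentioned above can be illustrated, in a much-simplified form, as longest-match-first dictionary lookup over tokenized text. The vocabulary and concept identifiers below are invented, and NOBLE Coder's actual algorithm and data structures are considerably more sophisticated:

```python
# Toy vocabulary: token tuples mapped to invented concept identifiers.
VOCAB = {
    ("rotator", "cuff", "tear"): "CONCEPT_RC_TEAR",
    ("rotator", "cuff"): "CONCEPT_RC",
    ("tear",): "CONCEPT_TEAR",
}
MAX_LEN = max(len(term) for term in VOCAB)

def recognize(tokens):
    """Scan left to right, always taking the longest vocabulary match
    starting at the current position (greedy longest-match-first)."""
    i, found = 0, []
    while i < len(tokens):
        for span in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            term = tuple(tokens[i:i + span])
            if term in VOCAB:
                found.append((" ".join(term), VOCAB[term]))
                i += span  # consume the matched tokens
                break
        else:
            i += 1  # no match starts here; advance one token
    return found

print(recognize("full thickness rotator cuff tear repaired".split()))
```

Greediness means "rotator cuff tear" is emitted as one concept rather than "rotator cuff" plus "tear"; real systems layer normalization, word-order options, and abbreviation handling on top of this core idea.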
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
A new PUB-working group on SLope InterComparison Experiments (SLICE)
NASA Astrophysics Data System (ADS)
McGuire, K.; Retter, M.; Freer, J.; Troch, P.; McDonnell, J.
2006-05-01
The International Association of Hydrological Sciences (IAHS) decade on Prediction in Ungauged Basins (PUB) has the scientific goal to shift hydrology from calibration-reliant models to new and rich understanding-based models. To support this, six PUB science themes have been developed under the PUB Science Steering group. Theme 1 covers basin inter-comparison and classification. The SLope InterComparison Experiment (SLICE) is a newly-formed working group aligned with theme 1. Its 2-year target is to promote the improved understanding of regional hydrological characteristics via hillslope inter-comparison studies and top-down analysis of data from hillslope experiments from around the world. It will further deliver the major building blocks of a catchment classification system. A first workshop of SLICE took place 26-28 September 2005 at the HJ Andrews Experimental Forest, Oregon, USA. 40 participants from seven countries were in attendance. The program consisted of keynote presentations on the state-of-the-art of hillslope hydrology, outlining a hillslope classification system, and through small group discussion, a focus on the following questions: a.) How can we capture flow path heterogeneity at the hillslope scale with residence time distributions? b.) Can networks help characterize hillslope subsurface systems? c.) What patterns are useful to characterize in a hillslope comparison context? d.) How does bedrock permeability condition hillslope response? e.) Can we actually observe pressure waves in the field and/or how likely are they to exist at the hillslope continuum scale? The poster presents an overview of the workshop outcomes and directions of future work.
Wiig, Ola; Terjesen, Terje; Svenningsen, Svein
2002-10-01
We assessed the inter-observer agreement of radiographic methods used in evaluating patients with Perthes' disease. The radiographs were assessed at the time of diagnosis and at the 1-year follow-up by local orthopaedic surgeons (O) and 2 experienced pediatric orthopedic surgeons (TT and SS). The Catterall, Salter-Thompson, and Herring lateral pillar classifications were compared, and the femoral head coverage (FHC), center-edge angle (CE-angle), and articulo-trochanteric distance (ATD) were measured in the affected and normal hips. On the primary evaluation, the lateral pillar and Salter-Thompson classifications had a higher level of agreement among the observers than the Catterall classification, but none of the classifications showed good agreement (weighted kappa values between O and SS 0.56, 0.54, and 0.49, respectively). Combining Catterall groups 1 and 2 into one group, and groups 3 and 4 into another, resulted in better agreement (kappa 0.55) than with the original 4-group system. The agreement was also better (kappa 0.62-0.70) between experienced than between less experienced examiners for all classifications. The femoral head coverage was a more reliable and accurate measure than the CE-angle for quantifying the acetabular covering of the femoral head, as indicated by higher intraclass correlation coefficients (ICC) and smaller inter-observer differences. The ATD showed good agreement in all comparisons and had low inter-observer differences. We conclude that all classifications of femoral head involvement are adequate in clinical work if the radiographic assessment is done by experienced examiners. For less experienced examiners, a 2-group classification or the lateral pillar classification is more reliable. For evaluation of containment of the femoral head, FHC is more appropriate than the CE-angle.
Boguslav, Mayla; Cohen, Kevin Bretonnel
2017-01-01
Human-annotated data is a fundamental part of natural language processing system development and evaluation. The quality of that data is typically assessed by calculating the agreement between the annotators. It is widely assumed that this agreement between annotators is the upper limit on system performance in natural language processing: if humans can't agree with each other about the classification more than some percentage of the time, we don't expect a computer to do any better. We trace the logical positivist roots of the motivation for measuring inter-annotator agreement, demonstrate the prevalence of the widely-held assumption about the relationship between inter-annotator agreement and system performance, and present data that suggest that inter-annotator agreement is not, in fact, an upper bound on language processing system performance.
Implications of DSM-5 for the diagnosis of pediatric eating disorders.
Limburg, Karina; Shu, Chloe Y; Watson, Hunna J; Hoiles, Kimberley J; Egan, Sarah J
2018-05-01
The aim of the study was to compare the DSM-IV, DSM-5, and ICD-10 eating disorders (ED) nomenclatures to assess their value in the classification of pediatric eating disorders. We investigated the prevalence of the disorders in accordance with each system's diagnostic criteria, diagnostic concordance between the systems, and interrater reliability. Participants were 1062 children and adolescents assessed at intake to a specialist Eating Disorders Program (91.6% female, mean age 14.5 years, SD = 1.75). Measures were collected from routine intake assessments. DSM-5 categorization led to a lower prevalence of unspecified EDs when compared with DSM-IV. There was almost complete overlap for specified EDs. Kappa values indicated almost perfect agreement between the two coders on all three diagnostic systems, although there was higher interrater reliability for DSM-5 and ICD-10 when compared with DSM-IV. DSM-5 nomenclature is useful in classifying eating disorders in pediatric clinical samples. © 2018 Wiley Periodicals, Inc.
Inferior turbinate classification system, grades 1 to 4: development and validation study.
Camacho, Macario; Zaghi, Soroush; Certal, Victor; Abdullatif, Jose; Means, Casey; Acevedo, Jason; Liu, Stanley; Brietzke, Scott E; Kushida, Clete A; Capasso, Robson
2015-02-01
To develop a validated inferior turbinate grading scale. Development and validation study. Phase 1 development (alpha test) consisted of a proposal of 10 different inferior turbinate grading scales (>1,000 clinic patients). Phase 2 validation (beta test) utilized 10 providers grading 27 standardized endoscopic photos of inferior turbinates using two different classification systems. Phase 3 validation (pilot study) consisted of 100 live consecutive clinic patients (n = 200 inferior turbinates) who were each prospectively graded by 18 different combinations of two independent raters, and grading was repeated by each of the same two raters, two separate times for each patient. In the development phase, 25% (grades 1-4) and 33% (grades 1-4) were the most useful systems. In the validation phase, the 25% classification system was found to be the best balance between potential clinical utility and ability to grade; the photo grading demonstrated a Cohen's kappa (κ) = 0.4671 ± 0.0082 (moderate inter-rater agreement). Live-patient grading with the 25% classification system demonstrated an overall inter-rater reliability of 71.5% (95% confidence interval [CI]: 64.8-77.3), with overall substantial agreement (κ = 0.704 ± 0.028). Intrarater reliability was 91.5% (95% CI: 88.7-94.3). Distribution for the 200 inferior turbinates was as follows: 25% quartile = grade 1, 50% quartile (median) = grade 2, 75% quartile = grade 3, and 90% quartile = grade 4. Mean turbinate size was 2.22 (95% CI: 2.07-2.34; standard deviation 1.02). Categorical κ was as follows: grade 1, 0.8541 ± 0.0289; grade 2, 0.7310 ± 0.0289; grade 3, 0.6997 ± 0.0289, and grade 4, 0.7760 ± 0.0289. The 25% (grades 1-4) inferior turbinate classification system is a validated grading scale with high intrarater and inter-rater reliability. This system can facilitate future research by tracking the effect of interventions on inferior turbinates. 2c. 
© 2014 The American Laryngological, Rhinological and Otological Society, Inc.
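The per-grade ("categorical") kappa values reported above can be obtained by collapsing the scale to one-vs-rest for each grade and applying Cohen's kappa to the binarized ratings. A sketch with hypothetical grades 1 to 4 (illustrative data, not the study's ratings):

```python
from collections import Counter

def cohen_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    fa, fb = Counter(a), Counter(b)
    p_e = sum(fa[c] * fb[c] for c in fa) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def categorical_kappa(a, b, grade):
    """One-vs-rest kappa for a single grade: 'this grade' vs. 'any other'."""
    return cohen_kappa([x == grade for x in a], [x == grade for x in b])

rater1 = [1, 2, 2, 3, 4, 1, 3, 2]
rater2 = [1, 2, 3, 3, 4, 2, 3, 2]
for g in (1, 2, 3, 4):
    print(g, round(categorical_kappa(rater1, rater2, g), 3))
```

This decomposition shows which grades the raters can distinguish reliably and which boundaries cause disagreement.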
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
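The core of a vector quantization encoder is a nearest-codeword search: each input block is replaced by the index of the codebook vector with minimal distortion. A toy sketch with a hypothetical 3-codeword codebook and 4-pixel blocks (real systems use trained codebooks, e.g., from the LBG algorithm, and larger blocks):

```python
def encode_block(block, codebook):
    """Return the index of the codeword with minimal squared error."""
    def sq_err(v, w):
        return sum((x - y) ** 2 for x, y in zip(v, w))
    return min(range(len(codebook)), key=lambda i: sq_err(block, codebook[i]))

# Illustrative codebook: dark, mid-grey, and bright 2x2 blocks.
codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
block = (120, 130, 125, 131)
idx = encode_block(block, codebook)       # index transmitted to the decoder
decoded = codebook[idx]                   # decoder's reconstruction
```

This exhaustive search is the computational bottleneck the thesis targets: its cost grows with codebook size, which is why low-complexity variants and neural-network search structures are attractive.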
Ringdal, Kjetil G; Skaga, Nils Oddvar; Hestnes, Morten; Steen, Petter Andreas; Røislien, Jo; Rehn, Marius; Røise, Olav; Krüger, Andreas J; Lossius, Hans Morten
2013-05-01
Injury severity is most frequently classified using the Abbreviated Injury Scale (AIS) as a basis for the Injury Severity Score (ISS) and the New Injury Severity Score (NISS), which are used for assessment of overall injury severity in the multiply injured patient and in outcome prediction. European trauma registries recommended the AIS 2008 edition, but the levels of inter-rater agreement and reliability of ISS and NISS, associated with its use, have not been reported. Nineteen Norwegian AIS-certified trauma registry coders were invited to score 50 real, anonymised patient medical records using AIS 2008. Rater agreements for ISS and NISS were analysed using Bland-Altman plots with 95% limits of agreement (LoA). A clinically acceptable LoA range was set at ± 9 units. Reliability was analysed using a two-way mixed model intraclass correlation coefficient (ICC) statistics with corresponding 95% confidence intervals (CI) and hierarchical agglomerative clustering. Ten coders submitted their coding results. Of their AIS codes, 2189 (61.5%) agreed with a reference standard, 1187 (31.1%) real injuries were missed, and 392 non-existing injuries were recorded. All LoAs were wider than the predefined, clinically acceptable limit of ± 9, for both ISS and NISS. The joint ICC (range) between each rater and the reference standard was 0.51 (0.29,0.86) for ISS and 0.51 (0.27,0.78) for NISS. The joint ICC (range) for inter-rater reliability was 0.49 (0.19,0.85) for ISS and 0.49 (0.16,0.82) for NISS. Univariate linear regression analyses indicated a significant relationship between the number of correctly AIS-coded injuries and total number of cases coded during the rater's career, but no significant relationship between the rater-against-reference ISS and NISS ICC values and total number of cases coded during the rater's career. Based on AIS 2008, ISS and NISS were not reliable for summarising anatomic injury severity in this study. 
This result indicates a limitation in their use as benchmarking tools for trauma system performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
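The Bland-Altman 95% limits of agreement used above are the mean inter-rater difference plus or minus 1.96 standard deviations of the differences. A minimal sketch with hypothetical ISS scores (illustrative numbers, not data from the study):

```python
import statistics

def limits_of_agreement(rater, reference):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD."""
    diffs = [a - b for a, b in zip(rater, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical rater vs. reference-standard ISS scores.
rater_iss = [25, 17, 9, 34, 41, 16]
reference_iss = [22, 17, 13, 29, 43, 14]
lo, hi = limits_of_agreement(rater_iss, reference_iss)
```

Agreement is judged by comparing the interval (lo, hi) against a clinically acceptable band, here +/- 9 units: if either limit falls outside the band, the raters cannot be used interchangeably.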
Comparing Features for Classification of MEG Responses to Motor Imagery
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. 
Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
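The 5-fold cross-validation protocol used above can be sketched in a few lines. Real MEG pipelines would use dedicated libraries (e.g., MNE-Python for CSP, scikit-learn for classifiers); this stdlib-only toy uses synthetic two-feature "left"/"right" data and a nearest-class-mean classifier purely to show the evaluation protocol:

```python
import random

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validated accuracy of a nearest-class-mean classifier."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint test folds
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        # Fit: per-class feature means on the training split only.
        means = {}
        for c in set(y):
            rows = [X[i] for i in train if y[i] == c]
            means[c] = [sum(col) / len(rows) for col in zip(*rows)]
        # Test: assign each held-out trial to the nearest class mean.
        correct = 0
        for i in fold:
            pred = min(means, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(X[i], means[c])))
            correct += pred == y[i]
        accs.append(correct / len(fold))
    return sum(accs) / k

# Synthetic, well-separated feature vectors (e.g., band power in two channels).
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] + \
    [[rng.gauss(2, 1), rng.gauss(2, 1)] for _ in range(50)]
y = ["left"] * 50 + ["right"] * 50
acc = kfold_accuracy(X, y)
```

Because each fold's class means are fit on the training split only, the accuracy estimate is not inflated by testing on training data.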
Reliability of injury grading systems for patients with blunt splenic trauma.
Olthof, D C; van der Vlies, C H; Scheerder, M J; de Haan, R J; Beenen, L F M; Goslings, J C; van Delden, O M
2014-01-01
The most widely used grading system for blunt splenic injury is the American Association for the Surgery of Trauma (AAST) organ injury scale. In 2007 a new grading system was developed. This 'Baltimore CT grading system' is superior to the AAST classification system in predicting the need for angiography and embolization or surgery. The objective of this study was to assess inter- and intraobserver reliability between radiologists in classifying splenic injury according to both grading systems. CT scans of 83 patients with blunt splenic injury admitted between 1998 and 2008 to an academic Level 1 trauma centre were retrospectively reviewed. Inter- and intrarater reliability were expressed in Cohen's or weighted Kappa values. Overall weighted interobserver Kappa coefficients for the AAST and 'Baltimore CT grading system' were respectively substantial (kappa=0.80) and almost perfect (kappa=0.85). Average weighted intraobserver Kappa values were in the 'almost perfect' range (AAST: kappa=0.91, 'Baltimore CT grading system': kappa=0.81). The present study shows that overall the inter- and intraobserver reliability for grading splenic injury according to the AAST grading system and 'Baltimore CT grading system' are equally high. Because of the integration of vascular injury, the 'Baltimore CT grading system' supports clinical decision making. We therefore recommend use of this system in the classification of splenic injury. Copyright © 2012 Elsevier Ltd. All rights reserved.
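Weighted kappa, as used above for ordinal injury grades, gives partial credit for near-miss disagreements. A sketch of the linearly weighted variant on hypothetical grade-1-to-5 ratings (illustrative data, not the study's):

```python
from collections import Counter

def weighted_kappa(a, b, categories):
    """Linearly weighted Cohen's kappa for ordinal categories."""
    k = len(categories)
    pos = {c: i for i, c in enumerate(categories)}
    n = len(a)
    # Weight matrix: full credit on the diagonal, credit decreasing
    # linearly with the distance between the two assigned grades.
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    fa, fb = Counter(a), Counter(b)
    p_o = sum(w[pos[x]][pos[y]] for x, y in zip(a, b)) / n
    p_e = sum(w[i][j] * fa[categories[i]] * fb[categories[j]]
              for i in range(k) for j in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

rater1 = [1, 2, 3, 4, 5, 3, 2]
rater2 = [1, 2, 4, 4, 5, 2, 2]
print(round(weighted_kappa(rater1, rater2, [1, 2, 3, 4, 5]), 3))  # 0.806
```

With unit weights on the diagonal only, this reduces to ordinary Cohen's kappa; quadratic weights (squared distance) are the other common choice for graded scales.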
Burke, Shane M; Hwang, Steven W; Mehan, William A; Bedi, Harprit S; Ogbuji, Richard; Riesenburger, Ron I
2016-07-01
Cross-specialty inter-rater reliability has not been explicitly reported for imaging characteristics that are thought to be important in lumbar intervertebral disc degeneration. Sufficient cross-specialty reliability is an essential consideration if radiographic stratification of symptomatic patients to specific treatment modalities is to ever be realized. Therefore the purpose of this study was to directly compare the assessment of such characteristics between neurosurgeons and neuroradiologists. Sixty consecutive patients with a diagnosis of lumbago and appropriate imaging were selected for inclusion. Lumbar MRI were evaluated using the Tufts Degenerative Disc Classification by two neurosurgeons and two neuroradiologists. Inter-rater reliability was assessed using Cohen's κ values both within and between specialties. A sensitivity analysis was performed for a modified grading system, which excluded high intensity zones (HIZ), due to poor cross-specialty inter-rater reliability of HIZ between specialties. The reliability of HIZ between neurosurgeons and neuroradiologists was fair in two of the four cross-specialty comparisons in this study (neurosurgeon 1 versus both radiologists κ=0.364 and κ=0.290). Removing HIZ from the classification improved inter-rater reliability for all comparisons within and between specialties (0.465⩽κ⩽0.576). In addition, intra-rater reliability remained in the moderate to substantial range (0.523⩽κ⩽0.649). Given our findings and corroboration with previous studies, identification of HIZ seems to have a markedly variable reliability. Thus we recommend modification of the original Tufts Degenerative Disc Classification by removing HIZ in order to make the overall grade provided by this classification more reproducible when scored by practitioners of different training backgrounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
This study examined inter-analyst classification variability based on training site signature selection only for six classifications from a 10 km² Landsat ETM+ image centered over a highly heterogeneous area in south-central Virginia. Six analysts classified the image...
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2008-04-01
In this third of a multi-paper series the discovery of a space dual for the laws of motion is reported and named the laws of retention. This space-time duality in physics is found to inherently surface from a latency-information theory (LIT) that is treated in the first two papers of this multi-paper series. A motion-coder and a retention-coder are fundamental elements of a LIT's recognition-communication system. While a LIT's motion-coder addresses motion-time issues of knowledge motion, a LIT's retention-coder addresses retention-space issues of knowledge retention. For the design of a motion-coder, such as a modulation-antenna system, the laws of motion in physics are used while for the design of a retention-coder, such as a write/read memory, the newly advanced laws of retention can be used. Furthermore, while the laws of motion reflect a configuration of space certainty, the laws of retention reflect a passing of time uncertainty. Since the retention duals of motion concepts are too many to cover in a single publication, the discussion will be centered on the retention duals for Newton's Principia and the gravitational law, Coulomb's electrical law, Maxwell's equations, Einstein's relativity theory, quantum mechanics, and the uncertainty principle. Furthermore the retention duals will be illustrated with an uncharged and non-rotating black hole (UNBH). A UNBH is the retention dual of a vacuum since the UNBH and vacuum offer, from a theoretical perspective, the least resistance to knowledge retention and motion, respectively. Using this space-time duality insight it will be shown that the speed of light in a vacuum of c_M = 2.9979 × 10^8 meters/sec has a retention dual, herein called the pace of dark in a UNBH of c_R = 6.1123 × 10^63 secs/m^3 where 'pace' refers to the expected retention-time per retention-space for the 'dark' knowledge residing in a black hole.
Makkar, Steve R; Williamson, Anna; D'Este, Catherine; Redman, Sally
2017-12-19
Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers' research use within discrete policy documents and a scoring tool that quantifies the extent of policymakers' research use based on the interview transcript and analysis of the policy document itself. We aimed to conduct a preliminary investigation of the usability, sensitivity, and reliability of the scoring tool in measuring research use by policymakers. Nine experts in health policy research and two independent coders were recruited. Each expert used the scoring tool to rate a random selection of 20 interview transcripts, and each independent coder rated 60 transcripts. The distribution of scores among experts was examined, and then, interrater reliability was tested within and between the experts and independent coders. Average- and single-measure reliability coefficients were computed for each SAGE subscale. Experts' scores ranged from the limited to extensive scoring bracket for all subscales. Experts as a group also exhibited at least a fair level of interrater agreement across all subscales. Single-measure reliability was at least fair except for three subscales: Relevance Appraisal, Conceptual Use, and Instrumental Use. Average- and single-measure reliability among independent coders was good to excellent for all subscales. Finally, reliability between experts and independent coders was fair to excellent for all subscales. Among experts, the scoring tool was comprehensible, usable, and sensitive to discriminate between documents with varying degrees of research use. Secondly, the scoring tool yielded scores with good reliability among the independent coders. There was greater variability among experts, although as a group, the tool was fairly reliable. 
The alignment between experts' and independent coders' ratings indicates that the independent coders were scoring in a manner comparable to health policy research experts. If the present findings are replicated in a larger sample, end users (e.g. policy agency staff) could potentially be trained to use SAGE to reliably score research use within their agencies, which would provide a cost-effective and time-efficient approach to utilising this measure in practice.
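The gap between average-measure and single-measure reliability reported above follows the Spearman-Brown relationship: averaging k raters steps up the single-rater ICC. A minimal sketch with hypothetical values (not the study's coefficients):

```python
def average_measures_icc(single_icc, k):
    """Spearman-Brown step-up: reliability of the mean of k raters."""
    return k * single_icc / (1 + (k - 1) * single_icc)

# A single-rater ICC of 0.45 ("fair") becomes much stronger when
# three raters' scores are averaged:
print(round(average_measures_icc(0.45, 3), 3))  # 0.711
```

This is why a group of raters can be "fairly reliable" as a group even when individual raters, used alone, would not be.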
An evaluation of classification systems for stillbirth
Flenady, Vicki; Frøen, J Frederik; Pinar, Halit; Torabi, Rozbeh; Saastad, Eli; Guyon, Grace; Russell, Laurie; Charles, Adrian; Harrison, Catherine; Chauke, Lawrence; Pattinson, Robert; Koshy, Rachel; Bahrin, Safiah; Gardener, Glenn; Day, Katie; Petersson, Karin; Gordon, Adrienne; Gilshenan, Kristen
2009-01-01
Background Audit and classification of stillbirths is an essential part of clinical practice and a crucial step towards stillbirth prevention. Due to the limitations of the ICD system and lack of an international approach to an acceptable solution, numerous disparate classification systems have emerged. We assessed the performance of six contemporary systems to inform the development of an internationally accepted approach. Methods We evaluated the following systems: Amended Aberdeen, Extended Wigglesworth; PSANZ-PDC, ReCoDe, Tulip and CODAC. Nine teams from 7 countries applied the classification systems to cohorts of stillbirths from their regions using 857 stillbirth cases. The main outcome measures were: the ability to retain the important information about the death using the InfoKeep rating; the ease of use according to the Ease rating (both measures used a five-point scale with a score <2 considered unsatisfactory); inter-observer agreement and the proportion of unexplained stillbirths. A randomly selected subset of 100 stillbirths was used to assess inter-observer agreement. Results InfoKeep scores were significantly different across the classifications (p ≤ 0.01) due to low scores for Wigglesworth and Aberdeen. CODAC received the highest mean (SD) score of 3.40 (0.73) followed by PSANZ-PDC, ReCoDe and Tulip [2.77 (1.00), 2.36 (1.21), 1.92 (1.24) respectively]. Wigglesworth and Aberdeen resulted in a high proportion of unexplained stillbirths and CODAC and Tulip the lowest. While Ease scores were different (p ≤ 0.01), all systems received satisfactory scores; CODAC received the highest score. Aberdeen and Wigglesworth showed poor agreement with kappas of 0.35 and 0.25 respectively. Tulip performed best with a kappa of 0.74. The remainder had good to fair agreement. Conclusion The Extended Wigglesworth and Amended Aberdeen systems cannot be recommended for classification of stillbirths. Overall, CODAC performed best with PSANZ-PDC and ReCoDe performing well. 
Tulip was shown to have the best agreement and a low proportion of unexplained stillbirths. The virtues of these systems need to be considered in the development of an international solution to classification of stillbirths. Further studies are required on the performance of classification systems in the context of developing countries. Suboptimal agreement highlights the importance of instituting measures to ensure consistency for any classification system. PMID:19538759
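Verbal labels for kappa values like those above are conventionally taken from the Landis and Koch (1977) bands; one common reading of that convention is sketched below (the abstract's own labels may follow a different cut-off scheme):

```python
def landis_koch(kappa):
    """Landis & Koch (1977) verbal bands for a kappa coefficient."""
    if kappa < 0.0:
        return "poor (below chance)"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

for system, k in [("Aberdeen", 0.35), ("Wigglesworth", 0.25), ("Tulip", 0.74)]:
    print(system, landis_koch(k))
```

Under these bands, Tulip's kappa of 0.74 sits in the "substantial" range while Aberdeen and Wigglesworth fall in the "fair" range, consistent with the ranking reported above.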
The reliability and validity of the Saliba Postural Classification System
Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos
2016-01-01
Objectives To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter- and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
NASA Technical Reports Server (NTRS)
Friend, J.
1971-01-01
A manual designed both as an instructional manual for beginning coders and as a reference manual for the coding language INSTRUCT, is presented. The manual includes the major programs necessary to implement the teaching system and lists the limitation of current implementation. A detailed description is given of how to code a lesson, what buttons to push, and what utility programs to use. Suggestions for debugging coded lessons and the error messages that may be received during assembly or while running the lesson are given.
Huffhines, Lindsay; Tunno, Angela M; Cho, Bridget; Hambrick, Erin P; Campos, Ilse; Lichty, Brittany; Jackson, Yo
2016-08-01
State social service agency case files are a common mechanism for obtaining information about a child's maltreatment history, yet these documents are often challenging for researchers to access, and then to process in a manner consistent with the requirements of social science research designs. Specifically, accessing and navigating case files is an extensive undertaking, and a task that many researchers have had to maneuver with little guidance. Even after the files are in hand and the research questions and relevant variables have been clarified, case file information about a child's maltreatment exposure can be idiosyncratic, vague, inconsistent, and incomplete, making coding such information into useful variables for statistical analyses difficult. The Modified Maltreatment Classification System (MMCS) is a popular tool used to guide the process, and though comprehensive, this coding system cannot cover all idiosyncrasies found in case files. It is not clear from the literature how researchers implement this system while accounting for issues outside of the purview of the MMCS or that arise during MMCS use. Finally, a large yet reliable file coding team is essential to the process, however, the literature lacks training guidelines and methods for establishing reliability between coders. In an effort to move the field toward a common approach, the purpose of the present discussion is to detail the process used by one large-scale study of child maltreatment, the Studying Pathways to Adjustment and Resilience in Kids (SPARK) project, a longitudinal study of resilience in youth in foster care. The article addresses each phase of case file coding, from accessing case files, to identifying how to measure constructs of interest, to dealing with exceptions to the coding system, to coding variables reliably, to training large teams of coders and monitoring for fidelity. Implications for a comprehensive and efficient approach to case file coding are discussed.
PMID:28138207
Reliability of routinely collected hospital data for child maltreatment surveillance.
McKenzie, Kirsten; Scott, Debbie A; Waller, Garry S; Campbell, Margaret
2011-01-05
Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research to examine the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. The review of these cases where discrepancies existed revealed that all cases had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code, were recoded to a more or less definite strata. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder assigned (22% of the auditor coded cases had more than one maltreatment type assigned compared to only 6% of the original coded data). The maltreatment types which were the most 'under-coded' by the original coder were psychological abuse and neglect. 
Cases coded with a sexual abuse code showed the highest level of reliability. Given the increasing international attention to improving the uniformity of reporting of child maltreatment-related injuries and the emphasis on better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.
Reliability of Routinely Collected Hospital Data for Child Maltreatment Surveillance
2011-01-01
Background Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research examining the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. Results In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. Review of the cases where discrepancies existed revealed that all had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code were recoded to a more or less definite stratum. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder did (22% of the auditor-coded cases had more than one maltreatment type assigned, compared with only 6% of the originally coded data).
The maltreatment types most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion Given the increasing international attention to improving the uniformity of reporting of child maltreatment-related injuries and the emphasis on better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting. PMID:21208411
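The sensitivity and specificity figures above come from a standard 2x2 comparison of one coding against a reference coding. A minimal Python sketch; the cell counts below are invented purely to reproduce the reported rates, and are not the study's actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cell counts, treating the auditor's recoding as the reference:
# tp/fn split the cases the auditor coded as maltreatment; tn/fp split the rest.
sens, spec = sensitivity_specificity(tp=270, fn=5, tn=200, fp=11)
print(round(sens, 3), round(spec, 3))  # 0.982 0.948, matching the abstract
```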
NASA Astrophysics Data System (ADS)
Khan, Asif; Ryoo, Chang-Kyung; Kim, Heung Soo
2017-04-01
This paper presents a comparative study of different classification algorithms for classifying various types of inter-ply delaminations in smart composite laminates. Improved layerwise theory is used to model delamination at different interfaces along the thickness and longitudinal directions of the smart composite laminate. The input-output data obtained through a surface-bonded piezoelectric sensor and actuator are analyzed by a system identification algorithm to obtain the system parameters. The identified parameters for the healthy and delaminated structures are supplied as input data to the classification algorithms. The classification algorithms considered in this study are ZeroR, Classification via regression, Naïve Bayes, Multilayer Perceptron, Sequential Minimal Optimization, Multiclass-Classifier, and Decision tree (J48). The open-source Waikato Environment for Knowledge Analysis (WEKA) software is used to evaluate the classification performance of the above classifiers via 75-25 holdout and leave-one-sample-out cross-validation, in terms of classification accuracy, precision, recall, kappa statistic and ROC area.
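Leave-one-sample-out cross-validation, as used above, holds out each sample in turn, trains on the remainder, and scores the held-out prediction. A minimal Python sketch with a 1-nearest-neighbour stand-in classifier and invented toy feature vectors (the study itself ran WEKA's classifiers on identified system parameters):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def loo_accuracy(samples):
    """Leave-one-sample-out accuracy of a 1-nearest-neighbour classifier.

    samples: list of (feature_vector, label) pairs."""
    correct = 0
    for i, (x, label) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]          # train on all but sample i
        _, predicted = min(rest, key=lambda s: euclidean(s[0], x))
        correct += (predicted == label)
    return correct / len(samples)

# Toy "identified system parameters" for healthy vs. delaminated states.
data = [((0.10, 0.20), "healthy"), ((0.12, 0.19), "healthy"),
        ((0.11, 0.21), "healthy"), ((0.80, 0.90), "delaminated"),
        ((0.82, 0.88), "delaminated"), ((0.79, 0.91), "delaminated")]
print(loo_accuracy(data))  # 1.0 on this cleanly separated toy set
```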
Pitch-Learning Algorithm For Speech Encoders
NASA Technical Reports Server (NTRS)
Bhaskar, B. R. Udaya
1988-01-01
Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.
Inter- and intra-observer concordance for the diagnosis of portal hypertension gastropathy.
Casas, Meritxell; Vergara, Mercedes; Brullet, Enric; Junquera, Félix; Martínez-Bauer, Eva; Miquel, Mireia; Sánchez-Delgado, Jordi; Dalmau, Blai; Campo, Rafael; Calvet, Xavier
2018-03-01
At present there is no fully accepted endoscopic classification for assessing the severity of portal hypertensive gastropathy (PHG). Few studies have evaluated inter- and intra-observer concordance or the degree of concordance between different endoscopic classifications. To evaluate inter- and intra-observer agreement for the presence of portal hypertensive gastropathy and enteropathy using different endoscopic classifications. Patients with liver cirrhosis were included in the study. Enteroscopy was performed under sedation. The location of lesions and their severity were recorded. Images were videotaped and subsequently evaluated independently by three different endoscopists, one of whom was the initial endoscopist. Agreement between observations was assessed using the kappa index. Seventy-four patients (mean age 63.2 years, 53 males and 21 females) were included. Agreement between the three endoscopists regarding the presence or absence of PHG using the Tanoue and McCormack classifications was very low (kappa scores = 0.16 and 0.27, respectively). The current classifications of portal hypertensive gastropathy have a very low degree of intra- and inter-observer agreement for the diagnosis and assessment of gastropathy severity.
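The kappa index used above corrects raw percentage agreement for the agreement expected by chance alone. A minimal Python sketch of Cohen's kappa for two raters; the presence/absence ratings below are invented for illustration:

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    categories = set(codes_a) | set(codes_b)
    p_e = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical presence/absence ratings of PHG by two endoscopists.
rater1 = ["PHG", "PHG", "none", "PHG", "none", "none", "PHG", "none"]
rater2 = ["PHG", "none", "none", "PHG", "PHG", "none", "none", "none"]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.25: low, as in the study
```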
Automated aural classification used for inter-species discrimination of cetaceans.
Binder, Carolyn M; Hines, Paul C
2014-04-01
Passive acoustic methods are in widespread use to detect and classify cetacean species; however, passive acoustic systems often suffer from large false detection rates resulting from numerous transient sources. To reduce the acoustic analyst workload, automatic recognition methods may be implemented in a two-stage process. First, a general automatic detector is implemented that produces many detections to ensure cetacean presence is noted. Then an automatic classifier is used to significantly reduce the number of false detections and classify the cetacean species. This process requires development of a robust classifier capable of performing inter-species classification. Because human analysts can aurally discriminate species, an automated aural classifier that uses perceptual signal features was tested on a cetacean data set. The classifier successfully discriminated among four species of cetaceans (bowhead, humpback, North Atlantic right, and sperm whales) with 85% accuracy. It also performed well (100% accuracy) in discriminating sperm whale clicks from right whale gunshots. An accuracy of 92% and an area under the receiver operating characteristic curve of 0.97 were obtained for the relatively challenging bowhead and humpback recognition case. These results demonstrated that the perceptual features employed by the aural classifier provided powerful discrimination cues for inter-species classification of cetaceans.
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared with previous hybrid MVME methods, while improving the peak signal-to-noise ratio (PSNR) of the decoded video by 0.2 to 3.5 dB compared with H.264/AVC intracoding.
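PSNR, the quality measure quoted above, compares decoded samples against the originals via the mean squared error. A small Python sketch with made-up 8-bit sample values:

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    return 10 * math.log10(peak ** 2 / mse)

reference = [52, 55, 61, 66, 70, 61, 64, 73]   # original pixel values
decoded   = [54, 55, 60, 67, 69, 62, 64, 72]   # after lossy coding
print(round(psnr(reference, decoded), 1))  # about 47.6 dB
```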
An empirical look at the Defense Mechanism Test (DMT): reliability and construct validity.
Ekehammar, Bo; Zuber, Irena; Konstenius, Marja-Liisa
2005-07-01
Although the Defense Mechanism Test (DMT) has been in use for almost half a century, there are still quite contradictory views about whether it is a reliable instrument, and if so, what it really measures. Thus, based on data from 39 female students, we first examined DMT inter-coder reliability by analyzing the agreement among trained judges in their coding of the same DMT protocols. Second, we constructed a "parallel" photographic picture that retained all structural characteristics of the original and analyzed DMT parallel-test reliability. Third, we examined the construct validity of the DMT by (a) employing three self-report defense-mechanism inventories and analyzing the intercorrelations between DMT defense scores and corresponding defenses in these instruments, (b) studying the relationships between DMT responses and scores on trait and state anxiety, and (c) relating DMT defense scores to measures of self-esteem. The main results showed that the DMT can be coded with high reliability by trained coders, that the parallel-test reliability is unsatisfactory compared with traditional psychometric standards, that there is a certain generalizability in the number of perceptual distortions that people display from one picture to another, and that the construct validation provided meager empirical evidence for the conclusion that the DMT measures what it purports to measure, that is, psychological defense mechanisms.
Lee, Kyung Hee; Lee, Kyung Won; Park, Ji Hoon; Han, Kyunghwa; Kim, Jihang; Lee, Sang Min; Park, Chang Min
2018-01-01
To measure inter-protocol agreement and analyze interchangeability of nodule classification between low-dose unenhanced CT and standard-dose enhanced CT. From nodule libraries containing both low-dose unenhanced and standard-dose enhanced CT, 80 solid and 80 subsolid (40 part-solid, 40 non-solid) nodules of 135 patients were selected. Five thoracic radiologists categorized each nodule as solid, part-solid or non-solid. Inter-protocol agreement between low-dose unenhanced and standard-dose enhanced images was measured by pooling κ values for classification into two (solid, subsolid) and three (solid, part-solid, non-solid) categories. Interchangeability between low-dose unenhanced and standard-dose enhanced CT for the two-category classification was assessed using a pre-defined equivalence limit of 8%. Inter-protocol agreement was high for the classification into two categories (κ = 0.96; 95% confidence interval [CI], 0.94-0.98) and into three categories (κ = 0.88; 95% CI, 0.85-0.92). The probability of agreement between readers with standard-dose enhanced CT was 95.6% (95% CI, 94.5-96.6%), and that between low-dose unenhanced and standard-dose enhanced CT was 95.4% (95% CI, 94.7-96.0%). The difference between the two proportions was 0.25% (95% CI, -0.85 to 1.5%), with the upper bound of the CI markedly below the 8% limit. Inter-protocol agreement for nodule classification was high. Low-dose unenhanced CT can be used interchangeably with standard-dose enhanced CT for nodule classification.
Fox, M R; Pandolfino, J E; Sweis, R; Sauter, M; Abreu Y Abreu, A T; Anggiansah, A; Bogte, A; Bredenoord, A J; Dengler, W; Elvevi, A; Fruehauf, H; Gellersen, S; Ghosh, S; Gyawali, C P; Heinrich, H; Hemmink, M; Jafari, J; Kaufman, E; Kessing, K; Kwiatek, M; Lubomyr, B; Banasiuk, M; Mion, F; Pérez-de-la-Serna, J; Remes-Troche, J M; Rohof, W; Roman, S; Ruiz-de-León, A; Tutuian, R; Uscinowicz, M; Valdovinos, M A; Vardar, R; Velosa, M; Waśko-Czopnik, D; Weijenborg, P; Wilshire, C; Wright, J; Zerbib, F; Menne, D
2015-01-01
High-resolution esophageal manometry (HRM) is a recent development used in the evaluation of esophageal function. Our aim was to assess the inter-observer agreement for diagnosis of esophageal motility disorders using this technology. Practitioners registered on the HRM Working Group website were invited to review and classify (i) 147 individual water swallows and (ii) 40 diagnostic studies comprising 10 swallows using a drop-down menu that followed the Chicago Classification system. Data were presented using a standardized format with pressure contours without a summary of HRM metrics. The sequence of swallows was fixed for each user but randomized between users to avoid sequence bias. Participants were blinded to other entries. (i) Individual swallows were assessed by 18 practitioners (13 institutions). Consensus agreement (≤ 2/18 dissenters) was present for most cases of normal peristalsis and achalasia but not for cases of peristaltic dysmotility. (ii) Diagnostic studies were assessed by 36 practitioners (28 institutions). Overall inter-observer agreement was 'moderate' (kappa 0.51) being 'substantial' (kappa > 0.7) for achalasia type I/II and no lower than 'fair-moderate' (kappa >0.34) for any diagnosis. Overall agreement was somewhat higher among those that had performed >400 studies (n = 9; kappa 0.55) and 'substantial' among experts involved in development of the Chicago Classification system (n = 4; kappa 0.66). This prospective, randomized, and blinded study reports an acceptable level of inter-observer agreement for HRM diagnoses across the full spectrum of esophageal motility disorders for a large group of clinicians working in a range of medical institutions. Suboptimal agreement for diagnosis of peristaltic motility disorders highlights contribution of objective HRM metrics. © 2014 International Society for Diseases of the Esophagus.
Niglis, L; Collin, P; Dosch, J-C; Meyer, N; Kempf, J-F
2017-10-01
The long-term outcomes of rotator cuff repair are unclear. Recurrent tears are common, although their reported frequency varies depending on the type and interpretation challenges of the imaging method used. The primary objective of this study was to assess the intra- and inter-observer reproducibility of the MRI assessment of rotator cuff repair using the Sugaya classification 10 years after surgery. The secondary objective was to determine whether poor reproducibility, if found, could be improved by using a simplified yet clinically relevant classification. Our hypothesis was that reproducibility was limited but could be improved by simplifying the classification. In a retrospective study, we assessed intra- and inter-observer agreement in interpreting 49 magnetic resonance imaging (MRI) scans performed 10 years after rotator cuff repair. These 49 scans were selected at random from 609 cases that underwent re-evaluation, with imaging, for the 2015 SoFCOT symposium on 10-year and 20-year clinical and anatomical outcomes of rotator cuff repair for full-thickness tears. Each of three observers read each of the 49 scans on two separate occasions. At each reading, they assessed the supra-spinatus tendon according to the Sugaya classification in five types. Intra-observer agreement for the Sugaya type was substantial (κ=0.64) but inter-observer agreement was only fair (κ=0.39). Agreement improved when the five Sugaya types were collapsed into two categories (1-2-3 and 4-5) (intra-observer κ=0.74 and inter-observer κ=0.68). Using the Sugaya classification to assess post-operative rotator cuff healing was associated with substantial intra-observer and fair inter-observer agreement. A simpler classification into two categories improved agreement while remaining clinically relevant. II, prospective randomised low-power study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F
2017-10-01
One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factors classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
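The bit-allocation problem mentioned above is often approached greedily: repeatedly give the next bit to the coefficient whose distortion would fall the most, under a high-rate model where the distortion of coefficient i behaves like sigma_i^2 * 2**(-2 * b_i). The Python sketch below shows that textbook greedy scheme; it is not the dissertation's exact algorithm, and the variances are invented:

```python
def greedy_bit_allocation(variances, total_bits):
    """Assign bits one at a time to the coefficient with the largest
    marginal distortion reduction under the sigma^2 * 2**(-2b) model."""
    bits = [0] * len(variances)

    def gain(i):
        # distortion drop from spending one more bit on coefficient i
        return variances[i] * (2.0 ** (-2 * bits[i])
                               - 2.0 ** (-2 * (bits[i] + 1)))

    for _ in range(total_bits):
        bits[max(range(len(variances)), key=gain)] += 1
    return bits

# Larger-variance (typically lower-frequency) coefficients get more bits.
print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8))  # [4, 3, 1, 0]
```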
The reliability of cause-of-death coding in The Netherlands.
Harteloh, Peter; de Bruin, Kim; Kardaun, Jan
2010-08-01
Cause-of-death statistics are a major source of information for epidemiological research or policy decisions. Information on the reliability of these statistics is important for interpreting trends in time or differences between populations. Variations in coding the underlying cause of death could hinder the attribution of observed differences to determinants of health. Therefore we studied the reliability of cause-of-death statistics in The Netherlands. We performed a double coding study. Death certificates from the month of May 2005 were coded again in 2007. Each death certificate was coded manually by four coders. Reliability was measured by calculating agreement between coders (intercoder agreement) and by calculating the consistency of each individual coder in time (intracoder agreement). Our analysis covered 10,833 death certificates. The intercoder agreement of four coders on the underlying cause of death was 78%. In 2.2% of the cases coders agreed on a change of the code assigned in 2005. The (mean) intracoder agreement of four coders was 89%. Agreement was associated with the specificity of the ICD-10 code (chapter, three digits, four digits), the age of the deceased, the number of coders and the number of diseases reported on the death certificate. The reliability of cause-of-death statistics turned out to be high (>90%) for major causes of death such as cancers and acute myocardial infarction. For chronic diseases, such as diabetes and renal insufficiency, reliability was low (<70%). The reliability of cause-of-death statistics varies by ICD-10 code/chapter. A statistical office should provide coders with (additional) rules for coding diseases with a low reliability and evaluate these rules regularly. Users of cause-of-death statistics should exercise caution when interpreting causes of death with a low reliability. Studies of reliability should take into account the number of coders involved and the number of codes on a death certificate.
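The inter- and intracoder percentages above are plain proportions of agreement: unanimity across coders on a case, and a single coder's consistency across two passes. A minimal Python sketch; the ICD-10 codes below are invented for illustration:

```python
def intercoder_agreement(codings):
    """Fraction of cases on which every coder assigned the same code.

    codings: one list of codes per coder, aligned by death certificate."""
    n_cases = len(codings[0])
    unanimous = sum(len({coder[i] for coder in codings}) == 1
                    for i in range(n_cases))
    return unanimous / n_cases

def intracoder_agreement(first_pass, second_pass):
    """Fraction of cases a single coder codes identically on two occasions."""
    return sum(a == b for a, b in zip(first_pass, second_pass)) / len(first_pass)

# Hypothetical underlying-cause codes from four coders on five certificates.
coders = [["I21", "C34", "E14", "J18", "N18"],
          ["I21", "C34", "E11", "J18", "N18"],
          ["I21", "C34", "E14", "J18", "N19"],
          ["I21", "C34", "E14", "J18", "N18"]]
print(intercoder_agreement(coders))  # 0.6: unanimity on I21, C34, J18 only

first  = ["I21", "C34", "E14", "J18", "N18"]   # one coder, 2005 pass
second = ["I21", "C34", "E11", "J18", "N18"]   # same coder, 2007 pass
print(intracoder_agreement(first, second))  # 0.8
```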
Morton, Lindsay M.; Linet, Martha S.; Clarke, Christina A.; Kadin, Marshall E.; Vajdic, Claire M.; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C.-H.; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R.; Weisenburger, Dennis D.
2010-01-01
After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and “in situ” lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications. PMID:20699439
Turner, Jennifer J; Morton, Lindsay M; Linet, Martha S; Clarke, Christina A; Kadin, Marshall E; Vajdic, Claire M; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C-H; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R; Weisenburger, Dennis D
2010-11-18
After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and "in situ" lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications.
Mkentane, K; Van Wyk, J C; Sishi, N; Gumedze, F; Ngoepe, M; Davids, L M; Khumalo, N P
2017-01-01
Curly hair is reported to contain higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair based on three measurements: the curve diameter, curl index and number of waves. After ethical approval and informed consent, proximal virgin hair samples (6 cm) taken from the scalp vertex of 48 healthy volunteers were evaluated. Three raters each scored hairs from the 48 volunteers on two occasions for the 8-group and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers to further confirm the reliability of this system. The kappa statistic was used to assess intra- and inter-rater agreement. Each rater classified 480 hairs on each occasion. No rater classified any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. Inter-rater agreement was poor for the 8-group classification (κ = 0.418) but improved for the 6-group classification (κ = 0.671). Intra-rater agreement also improved for the 6-group classification (κ = 0.444 to 0.648 versus 0.599 to 0.836); that for the one evaluator who classified all volunteers was good (κ = 0.754). Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable, objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine.
A combined Fuzzy and Naive Bayesian strategy can be used to assign event codes to injury narratives.
Marucci-Wellman, H; Lehto, M; Corns, H
2011-12-01
Bayesian methods show promise for classifying injury narratives from large administrative datasets into cause groups. This study examined a combined approach in which two Bayesian models (Fuzzy and Naïve) were used either to classify a narrative or to select it for manual review. Injury narratives were extracted from claims filed with a workers' compensation insurance provider between January 2002 and December 2004. Narratives were separated into a training set (n = 11,000) and a prediction set (n = 3,000). Expert coders assigned two-digit Bureau of Labor Statistics Occupational Injury and Illness Classification event codes to each narrative. Fuzzy and Naïve Bayesian models were developed using manually classified cases in the training set. Two semi-automatic machine coding strategies were evaluated. The first strategy assigned cases for manual review if the Fuzzy and Naïve models disagreed on the classification. The second strategy selected additional cases for manual review from the Agree dataset using prediction strength, to reach a level of 50% computer coding and 50% manual coding. When agreement alone was used as the filtering strategy, the majority of narratives were coded by the computer (n = 1,928, 64%), leaving 36% for manual review. The overall combined (human plus computer) sensitivity was 0.90, and the positive predictive value (PPV) was >0.90 for 11 of 18 two-digit event categories. Implementing the second strategy improved results, with an overall sensitivity of 0.95 and PPV >0.90 for 17 of 18 categories. A combined Naïve-Fuzzy Bayesian approach can classify some narratives with high accuracy and identify others most beneficial for manual review, reducing the burden on human coders.
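The first filtering strategy above is simple to state: machine-code a narrative only when the two models agree, otherwise queue it for a human. A Python sketch of that routing logic, with trivial keyword rules standing in for the trained Fuzzy and Naïve Bayesian classifiers (all rules, codes, and narratives below are invented):

```python
def route_narratives(narratives, model_a, model_b):
    """Auto-code a narrative only when both models agree on the event code;
    otherwise flag it for manual review."""
    auto, manual = [], []
    for text in narratives:
        a, b = model_a(text), model_b(text)
        if a == b:
            auto.append((text, a))
        else:
            manual.append(text)
    return auto, manual

# Hypothetical stand-ins for the trained Fuzzy and Naive Bayesian models.
fuzzy = lambda t: "fall" if ("fell" in t or "ladder" in t) else "struck"
naive = lambda t: "fall" if "fell" in t else "struck"

auto, manual = route_narratives(
    ["worker fell from ladder", "slipped on ladder rung", "hit by falling box"],
    fuzzy, naive)
print(len(auto), len(manual))  # 2 coded automatically, 1 sent for review
```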
Validity of an Observation Method for Assessing Pain Behavior in Individuals With Multiple Sclerosis
Cook, Karon F.; Roddey, Toni S.; Bamer, Alyssa M.; Amtmann, Dagmar; Keefe, Francis J
2012-01-01
Context Pain is a common and complex experience that interferes with physical, psychological and social function in individuals living with multiple sclerosis (MS). A valid and reliable tool for quantifying observed pain behaviors in MS is critical to understanding how pain behaviors contribute to pain-related disability in this clinical population. Objectives To evaluate the reliability and validity of a pain behavioral observation protocol in individuals who have MS. Methods Community-dwelling volunteers with multiple sclerosis (N=30), back pain (N=5), or arthritis (N=8) were recruited based on clinician referrals, advertisements, fliers, web postings, and participation in previous research. Participants completed measures of pain severity, pain interference, and self-reported pain behaviors, and were videotaped doing typical activities (e.g., walking, sitting). Two coders independently recorded frequencies of pain behaviors by category (e.g., guarding, bracing), and inter-rater reliability statistics were calculated. Naïve observers reviewed videotapes of individuals with MS and rated their pain. Spearman correlations were calculated between pain behavior frequencies and self-reported pain and pain ratings by naïve observers. Results Inter-rater reliability estimates supported the reliability of the pain codes in the MS sample. Kappa coefficients ranged from moderate agreement (sighing = 0.40) to substantial agreement (guarding = 0.83). These values were comparable to those obtained in the combined back pain and arthritis sample. Concurrent validity was supported by correlations with self-reported pain (0.46-0.53) and with self-reports of pain behaviors (0.58). Construct validity was supported by the finding of a 0.87 correlation between total pain behaviors observed by coders and mean pain ratings by naïve observers. Conclusion Results support use of the pain behavior observation protocol for assessing pain behaviors of individuals with MS.
Valid assessments of pain behaviors of individuals with MS could lead to creative interventions in the management of chronic pain in this population. PMID:23159684
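The Spearman correlations above compare rank orderings rather than raw values: rank both variables (averaging ranks over ties), then compute a Pearson correlation on the ranks. A minimal Python implementation, with invented scores:

```python
def _ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical guarding frequencies vs. self-reported pain (0-10 scale).
behaviors = [2, 0, 5, 1, 7, 3]
pain      = [4, 1, 6, 2, 9, 5]
print(round(spearman(behaviors, pain), 2))  # 1.0: identical orderings
```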
NASA Astrophysics Data System (ADS)
Martens, Kristine; Van Camp, Marc; Van Damme, Dirk; Walraevens, Kristine
2013-08-01
Within the European Union, Habitat Directives have been developed with the aim of restoring and preserving endangered species. The level of biodiversity in coastal dune systems is generally very high compared with other natural ecosystems, but it is deteriorating. Groundwater extraction and urbanisation are the main reasons for the decrease in biodiversity. Many restoration actions are being carried out, focusing on the restoration of groundwater levels with the aim of re-establishing rare species. These actions have had different degrees of success. The evaluation of the actions is mainly based on the appearance of red-list species. The groundwater classes developed in the Netherlands are used to evaluate opportunities for vegetation, while the natural variability of groundwater level and quality is underestimated. Vegetation is used as a seepage indicator. The existing classification is not valid in the Belgian dunes, as the vegetation observed in the study area does not correspond with this classification. Therefore, a new classification is needed. The new classification is based on the long-term variability of the groundwater level, with the integration of ecological factors. Based on the new classification, the relative importance of seasonal and inter-yearly fluctuations of the water table can be deduced. Inter-yearly fluctuations are more important in recharge areas, while seasonal fluctuations are dominant in discharge areas. The new classification opens opportunities for relating vegetation to groundwater dynamics.
Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree
NASA Astrophysics Data System (ADS)
Kim, Jong Kyu; Kim, Nam Soo
In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve much better mode selection accuracy than the open-loop mode selection module in the AMR-WB+.
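The idea above — train a classifier to imitate expensive closed-loop mode decisions — can be illustrated with a toy depth-1 decision stump in place of the letter's pruned decision tree. The single feature, its values, and the 0/1 mode labels are invented for illustration; the real coder would use signal features and a full tree:

```python
def train_stump(features, labels):
    """Fit a one-split decision stump: pick the (feature, threshold,
    polarity) that best reproduces the closed-loop mode labels (0/1)."""
    n_feat = len(features[0])
    best = None
    for f in range(n_feat):
        for t in sorted({x[f] for x in features}):
            for pol in (0, 1):  # which side of the split maps to mode 1
                pred = [(x[f] > t) == bool(pol) for x in features]
                err = sum(p != bool(y) for p, y in zip(pred, labels))
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best[1:]  # (feature index, threshold, polarity)

def predict(stump, x):
    """Open-loop-style mode decision: one comparison instead of trial coding."""
    f, t, pol = stump
    return int((x[f] > t) == bool(pol))
```

Prediction costs a single comparison per frame, which is the computational saving the letter targets; pruning the full tree plays the role of limiting `n_feat` and depth here.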
Oberjé, Edwin J M; Dima, Alexandra L; Pijnappel, Frank J; Prins, Jan M; de Bruin, Marijn
2015-01-01
Reporting guidelines call for descriptions of control group support in equal detail as for interventions. However, how to assess the active content (behaviour change techniques (BCTs)) of treatment-as-usual (TAU) delivered to control groups in trials remains unclear. The objective of this study is to pre-test a method of assessing TAU in a multicentre cost-effectiveness trial of an HIV-treatment adherence intervention. HIV-nurses (N = 21) completed a semi-structured open-ended questionnaire enquiring about TAU adherence counselling. Two coders independently coded BCTs. Completeness and clarity of nurse responses, inter-coder reliabilities and the type of BCTs reported were examined. The clarity and completeness of nurse responses were adequate. Twenty-three of the 26 identified BCTs could be reliably coded (mean κ = .79; mean agreement rate = 96%) and three BCTs scored below κ = .60. Total number of BCTs reported per nurse ranged between 7 and 19 (M = 13.86, SD = 3.35). This study suggests that the TAU open-ended questionnaire is a feasible and reliable tool to capture active content of support provided to control participants in a multicentre adherence intervention trial. Considerable variability in the number of BCTs provided to control patients was observed, illustrating the importance of reliably collecting and accurately reporting control group support.
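The per-BCT agreement statistics reported above (mean κ = .79; mean agreement rate = 96%) are standard two-coder measures. A minimal sketch of both, assuming two coders' nominal decisions over the same units (the example labels are invented, not the study's data):

```python
from collections import Counter

def percent_agreement(coder1, coder2):
    """Raw agreement rate: fraction of units coded identically."""
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: agreement corrected for chance, estimated from
    each coder's marginal category frequencies."""
    n = len(coder1)
    p_o = percent_agreement(coder1, coder2)
    f1, f2 = Counter(coder1), Counter(coder2)
    p_e = sum(f1[c] * f2[c] for c in set(f1) | set(f2)) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

For presence/absence coding of a single BCT, `coder1` and `coder2` would be the two coders' 0/1 decisions across the nurse responses.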
Primack, Brian A.; Fine, Danielle; Yang, Christopher K.; Wickett, Dustin; Zickmund, Susan
2009-01-01
Although media literacy represents an innovative venue for school-based antismoking programming, studies have not systematically compared student impressions of these and traditional programs. This study utilized data from a randomized trial comparing these two types of programs. After each program, students responded to three open-ended questions related to their assigned curriculum. Two coders, blinded to student assignments, independently coded these data. Coders had strong inter-rater agreement (kappa = 0.77). Our primary measures were spontaneously noted overall assessment, enjoyment/interest and the likelihood of changing smoking behavior. Of the 531 participants, 255 (48.0%) were randomized to the intervention (media literacy) group. Intervention participants had more net positive responses [rate ratio (RR) = 1.27, 95% confidence interval (CI) = 1.05, 1.54], more responses rating the program as compelling (RR = 1.63, 95% CI = 1.16, 2.29) and fewer responses rating the program as non-compelling (RR = 0.62, 95% CI = 0.39, 0.97). However, the intervention group was not more likely to suggest that the curriculum was likely to change behavior positively (RR = 0.57, 95% CI = 0.30, 1.06). Findings suggest that although media literacy provides a compelling format for the delivery of antitobacco programming, integration of components of traditional programming may help media literacy programs achieve maximal efficacy. PMID:19052155
Results from the Veterans Health Administration ICD-10-CM/PCS Coding Pilot Study.
Weems, Shelley; Heller, Pamela; Fenton, Susan H
2015-01-01
The Veterans Health Administration (VHA) of the US Department of Veterans Affairs has been preparing for the October 1, 2015, conversion to the International Classification of Diseases, Tenth Revision, Clinical Modification and Procedural Coding System (ICD-10-CM/PCS) for more than four years. The VHA's Office of Informatics and Analytics ICD-10 Program Management Office established an ICD-10 Learning Lab to explore expected operational challenges. This study was conducted to determine the effects of the classification system conversion on coding productivity. ICD codes are integral to VHA business processes and are used for purposes such as clinical studies, performance measurement, workload capture, cost determination, Veterans Equitable Resource Allocation (VERA) determination, morbidity and mortality classification, indexing of hospital records by disease and operations, data storage and retrieval, research purposes, and reimbursement. The data collection for this study occurred in multiple VHA sites across several months using standardized methods. It is commonly accepted that coding productivity will decrease with the implementation of ICD-10-CM/PCS. The findings of this study suggest that the decrease will be more significant for inpatient coding productivity (64.5 percent productivity decrease) than for ambulatory care coding productivity (6.7 percent productivity decrease). This study reveals the following important points regarding ICD-10-CM/PCS coding productivity: 1. Ambulatory care ICD-10-CM coding productivity is not expected to decrease as significantly as inpatient ICD-10-CM/PCS coding productivity. 2. Coder training and type of record (inpatient versus outpatient) affect coding productivity. 3. Inpatient coding productivity is decreased when a procedure requiring ICD-10-PCS coding is present. It is highly recommended that organizations perform their own analyses to determine the effects of ICD-10-CM/PCS implementation on coding productivity.
Razek, Ahmed Abdel Khalek Abdel; Shamaa, Sameh; Lattif, Mahmoud Abdel; Yousef, Hanan Hamid
2017-01-01
To assess inter-observer agreement of whole-body computed tomography (WBCT) in staging and response assessment in lymphoma according to the Lugano classification. Retrospective analysis was conducted of 115 consecutive patients with lymphomas (45 females, 70 males; mean age of 46 years). Patients underwent WBCT with a 64 multi-detector CT device for staging and response assessment after a complete course of chemotherapy. Image analysis was performed by 2 reviewers according to the Lugano classification for staging and response assessment. The overall inter-observer agreement of WBCT in staging of lymphoma was excellent (k = 0.90, percent agreement = 94.9%). There was excellent inter-observer agreement for stage I (k = 0.93, percent agreement = 96.4%), stage II (k = 0.90, percent agreement = 94.8%), stage III (k = 0.89, percent agreement = 94.6%) and stage IV (k = 0.88, percent agreement = 94%). The overall inter-observer agreement in response assessment after a complete course of treatment was excellent (k = 0.91, percent agreement = 95.8%). There was excellent inter-observer agreement for progressive disease (k = 0.94, percent agreement = 97.1%), stable disease (k = 0.90, percent agreement = 95%), partial response (k = 0.96, percent agreement = 98.1%) and complete response (k = 0.87, percent agreement = 93.3%). We concluded that WBCT is a reliable and reproducible imaging modality for staging and treatment assessment in lymphoma according to the Lugano classification.
Ubhi, Harveen Kaur; Michie, Susan; Kotz, Daniel; van Schayck, Onno C P; Selladurai, Abiram; West, Robert
2016-09-01
The aim of this study was to assess whether behaviour change techniques (BCTs), as well as engagement and ease-of-use features, used in smartphone applications (apps) to aid smoking cessation can be identified reliably. Apps were coded for the presence of potentially effective BCTs and of engagement and ease-of-use features, and inter-rater reliability for this coding was assessed. Inter-rater agreement for identifying the presence of potentially effective BCTs ranged from 66.8% to 95.1%, with 'prevalence and bias adjusted kappas' (PABAK) ranging from 0.35 to 0.90 (p < 0.001). The intra-class correlation coefficients between the two coders for scores denoting the proportions of (a) a set of engagement features and (b) a set of ease-of-use features included were 0.77 and 0.75, respectively (p < 0.001). Prevalence estimates for BCTs ranged from <10% for medication advice to >50% for rewarding abstinence. The average proportions of specified engagement and ease-of-use features included in the apps were 69% and 83%, respectively. The study found that it is possible to identify potentially effective BCTs, and engagement and ease-of-use features, in smoking cessation apps with fair to high inter-rater reliability.
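For two coders and binary (present/absent) codes, PABAK reduces to a simple function of the raw agreement rate, which is why the reported PABAK and agreement ranges track each other (2 × 0.668 − 1 ≈ 0.35 and 2 × 0.951 − 1 ≈ 0.90). A minimal sketch:

```python
def pabak(coder1, coder2):
    """Prevalence- and bias-adjusted kappa for two coders of binary codes.
    Equivalent to 2 * (observed agreement) - 1, i.e. kappa computed as if
    both categories were equally prevalent with unbiased coders."""
    n = len(coder1)
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    return 2 * p_o - 1
```

Unlike Cohen's kappa, PABAK is unaffected by a rare BCT's low prevalence, which is presumably why it was preferred for this coding task.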
Looking at the ICF and human communication through the lens of classification theory.
Walsh, Regina
2011-08-01
This paper explores the insights that classification theory can provide about the application of the International Classification of Functioning, Disability and Health (ICF) to communication. It first considers the relationship between conceptual models and classification systems, highlighting that classification systems in speech-language pathology (SLP) have not historically been based on conceptual models of human communication. It then overviews the key concepts and criteria of classification theory. Applying classification theory to the ICF and communication raises a number of issues, some previously highlighted through clinical application. Six focus questions from classification theory are used to explore these issues, and to propose the creation of an ICF-related conceptual model of communicating for the field of communication disability, which would address some of the issues raised. Developing a conceptual model of communication for SLP purposes closely articulated with the ICF would foster productive intra-professional discourse, while at the same time allow the profession to continue to use the ICF for purposes in inter-disciplinary discourse. The paper concludes by suggesting the insights of classification theory can assist professionals to apply the ICF to communication with the necessary rigour, and to work further in developing a conceptual model of human communication.
Standardizing an approach to the evaluation of implementation science proposals.
Crable, Erika L; Biancarelli, Dea; Walkey, Allan J; Allen, Caitlin G; Proctor, Enola K; Drainoni, Mari-Lynn
2018-05-29
The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research. We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff's alpha coefficients and explored the utility of the INSPECT system to characterize common deficiencies in implementation research proposals. We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible score of 30. Across individual elements of INSPECT, proposals scored highest for criteria rating evidence of a care or quality gap. Proposals generally performed poorly on all other criteria. Most proposals received scores of 0 for criteria identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), setting's readiness to adopt new services/treatment/programs (54%), implementation strategy/process (67%), and measurement and analysis (70%). 
Inter-coder reliability testing showed excellent reliability (Krippendorff's alpha coefficient 0.88) for the application of the scoring system overall, with reliability scores ranging from 0.77 to 0.99 for individual elements. The INSPECT scoring system provides new scoring criteria with a high degree of inter-rater reliability and utility for evaluating the quality of implementation and improvement sciences grant proposals.
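Krippendorff's alpha, used above to assess inter-coder reliability, is defined as one minus the ratio of observed to expected disagreement. A minimal sketch of the special case for two coders, nominal data, and no missing values (the study's computation may have covered more coders and metric types):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha: two coders, nominal categories, complete data."""
    pairs = list(zip(coder1, coder2))
    n = 2 * len(pairs)  # total pairable values
    values = Counter(coder1) + Counter(coder2)
    # Observed disagreement: fraction of ordered within-unit pairs that differ
    d_o = sum(2 for a, b in pairs if a != b) / n
    # Expected disagreement from the pooled value frequencies
    d_e = sum(values[c] * values[k]
              for c, k in permutations(values, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

Alpha equals 1 for perfect agreement and falls toward (and below) 0 as disagreement approaches or exceeds the chance level.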
AVHRR channel selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Mapping the land cover of large regions often requires processing satellite images collected over several time periods at many spectral wavelength channels. However, manipulating and processing large amounts of image data increases the complexity and time, and hence the cost, of producing a land cover map. Very few studies have evaluated the importance of individual Advanced Very High Resolution Radiometer (AVHRR) channels for discriminating cover types, especially the thermal channels (channels 3, 4 and 5), and studies rarely perform a multi-year analysis to determine the impact of inter-annual variability on the classification results. We evaluated 5 years of AVHRR data using combinations of the original AVHRR spectral channels (1-5) to determine which channels are most important for cover type discrimination while stabilizing inter-annual variability. Particular attention was placed on the channels in the thermal portion of the spectrum. Fourteen cover types over the entire state of Colorado were evaluated using a supervised classification approach on all two-, three-, four- and five-channel combinations for seven AVHRR biweekly composite datasets covering the entire growing season for each of 5 years. Results show that all three of the major portions of the electromagnetic spectrum represented by the AVHRR sensor are required to discriminate cover types effectively and stabilize inter-annual variability. Of the two-channel combinations, channels 1 (red visible) and 2 (near-infrared) had, by far, the highest average overall accuracy (72.2%), yet the inter-annual classification accuracies were highly variable. Including a thermal channel (channel 4) significantly increased the average overall classification accuracy, by 5.5%, and stabilized inter-annual variability.
Each of the thermal channels gave similar classification accuracies; however, because of the problems in consistently interpreting channel 3 data, either channel 4 or 5 was found to be a more appropriate choice. Substituting the thermal channel with a single elevation layer resulted in equivalent classification accuracies and inter-annual variability.
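The exhaustive channel-combination evaluation described above can be sketched in miniature: enumerate every channel subset of a given size and score a supervised classifier restricted to those channels. The sketch below substitutes a nearest-centroid classifier scored on its own training data for the study's actual classifier and accuracy assessment, and the data are synthetic:

```python
from itertools import combinations

def nearest_centroid_accuracy(X, y, channels):
    """Training-set accuracy of a nearest-centroid classifier restricted
    to the given channel subset (illustration only)."""
    classes = sorted(set(y))
    cent = {}
    for c in classes:
        rows = [x for x, lab in zip(X, y) if lab == c]
        cent[c] = [sum(r[ch] for r in rows) / len(rows) for ch in channels]
    correct = 0
    for x, lab in zip(X, y):
        pred = min(classes, key=lambda c: sum(
            (x[ch] - m) ** 2 for ch, m in zip(channels, cent[c])))
        correct += pred == lab
    return correct / len(y)

def rank_channel_subsets(X, y, n_channels, size):
    """Score every channel combination of the given size, best first."""
    combos = combinations(range(n_channels), size)
    return sorted(((nearest_centroid_accuracy(X, y, c), c) for c in combos),
                  reverse=True)
```

In the study, the same enumeration was repeated per year, so that both mean accuracy and its inter-annual spread could be compared across subsets.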
Advanced imaging communication system
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1977-01-01
Key elements of the system are imaging and nonimaging sensors, a data compressor/decompressor, an interleaved Reed-Solomon block coder and decoder, and a convolutional-encoded/Viterbi-decoded telemetry channel. Data compression provides an efficient representation of the sensor data, and channel coding improves the reliability of data transmission.
Panchani, Sunil; Reading, Jonathan; Mehta, Jaysheel
2016-06-01
The position of the lateral sesamoid on standard dorso-plantar weight-bearing radiographs, with respect to the lateral cortex of the first metatarsal, has been shown to correlate well with the degree of the hallux valgus angle. This study aimed to assess the inter- and intra-observer error of this new classification system. Five orthopaedic consultants and five trainee orthopaedic surgeons were recruited to assess and document the degree of displacement of the lateral sesamoid on 144 weight-bearing dorso-plantar radiographs on two separate occasions. The severity of hallux valgus was defined as normal (0%), mild (≤50%), moderate (51-99%) or severe (≥100%) depending on the percentage displacement of the lateral sesamoid body from the lateral cortical border of the first metatarsal. Consultant intra-observer variability showed good agreement between repeated assessments of the radiographs (mean Kappa=0.75). Intra-observer variability for trainee orthopaedic surgeons also showed good agreement, with a mean Kappa=0.73. Intraclass correlations for consultants and trainee surgeons were also high. The new classification system for assessing the severity of hallux valgus thus shows good inter- and intra-observer agreement and reproducibility between surgeons of consultant and trainee grades. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cross-ontological analytics for alignment of different classification schemes
Posse, Christian; Sanfilippo, Antonio P; Gopalan, Banu; Riensche, Roderick M; Baddeley, Robert L
2010-09-28
Quantification of the similarity between nodes in multiple electronic classification schemes is provided by automatically identifying relationships and similarities between nodes within and across the electronic classification schemes. Quantifying the similarity between a first node in a first electronic classification scheme and a second node in a second electronic classification scheme involves finding a third node in the first electronic classification scheme, wherein a first product value of an inter-scheme similarity value between the second and third nodes and an intra-scheme similarity value between the first and third nodes is a maximum. A fourth node in the second electronic classification scheme can be found, wherein a second product value of an inter-scheme similarity value between the first and fourth nodes and an intra-scheme similarity value between the second and fourth nodes is a maximum. The maximum between the first and second product values represents a measure of similarity between the first and second nodes.
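The similarity computation described in this patent abstract can be sketched directly: route through the best "third node" in the first scheme and the best "fourth node" in the second scheme, taking the maximum of the two max-products. The matrix representation below is an assumption for illustration; the patent does not prescribe a data structure:

```python
def cross_scheme_similarity(i, j, intra1, intra2, inter):
    """Similarity between node i (scheme 1) and node j (scheme 2).
    inter[a][b]: inter-scheme similarity between scheme-1 node a and
    scheme-2 node b; intra1/intra2: within-scheme similarity matrices."""
    # Best 'third node' t in scheme 1: inter(t, j) weighted by intra(i, t)
    via1 = max(inter[t][j] * intra1[i][t] for t in range(len(intra1)))
    # Best 'fourth node' t in scheme 2: inter(i, t) weighted by intra(j, t)
    via2 = max(inter[i][t] * intra2[j][t] for t in range(len(intra2)))
    return max(via1, via2)
```

Because i itself is a candidate third node (with intra-similarity 1 to itself, typically), the measure never falls below the direct inter-scheme similarity inter[i][j].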
Akpinar, Pinar; Tezel, Canan G; Eliasson, Ann-Christin; Icagasioglu, Afitap
2010-01-01
To determine the reliability and cross-cultural validation of the Turkish translation of the Manual Ability Classification System (MACS) for children with cerebral palsy (CP) and to investigate the relation to gross motor function and other comorbidities. After the forward and backward translation procedures, inter-rater and test-retest reliability was assessed between parents, physiotherapists and physicians using the intra-class correlation coefficient (ICC). Children (N = 118, 4 to 18 years, mean age 9 years 4 months; 68 boys, 50 girls) with various types of CP were classified. Additional data on the Gross Motor Function Classification System (GMFCS), intellectual delay, visual acuity, and epilepsy were collected. The inter-rater reliability was high; the ICC ranged from 0.89 to 0.96 among different professionals and parents. Between two persons of the same profession it ranged from 0.97 to 0.98. For the test-retest reliability it ranged from 0.91 to 0.98. Total agreement between the GMFCS and the MACS occurred in only 45% of the children. The level of the MACS was found to correlate with the accompanying comorbidities, namely intellectual delay and epilepsy. The Turkish version of the MACS is found to be valid and reliable, and is suggested to be appropriate for the assessment of manual ability within the Turkish population.
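The ICCs above quantify agreement between raters classifying the same children. The abstract does not state which ICC form was used; the sketch below implements one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), from the standard ANOVA mean squares:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `data` is a list of n subjects, each a list of k raters' scores."""
    n, k = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n * k)
    row_means = [sum(r) / k for r in data]                    # per subject
    col_means = [sum(r[j] for r in data) / n for j in range(k)]  # per rater
    ss_total = sum((x - grand) ** 2 for r in data for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)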
Sanders, Tekla B; Bowens, Felicia M; Pierce, William; Stasher-Booker, Bridgette; Thompson, Erica Q; Jones, Warren A
2012-01-01
This article will examine the benefits and challenges of the US healthcare system's upcoming conversion to use of the International Classification of Diseases, Tenth Revision, Clinical Modification/Procedure Coding System (ICD-10-CM/PCS) and will review the cost implications of the transition. Benefits including improved quality of care, potential cost savings from increased accuracy of payments and reduction of unpaid claims, and improved tracking of healthcare data related to public health and bioterrorism events are discussed. Challenges are noted in the areas of planning and implementation, the financial cost of the transition, a shortage of qualified coders, the need for further training and education of the healthcare workforce, and the loss of productivity during the transition. Although the transition will require substantial implementation and conversion costs, potential benefits can be achieved in the areas of data integrity, fraud detection, enhanced cost analysis capabilities, and improved monitoring of patients’ health outcomes that will yield greater cost savings over time. The discussion concludes with recommendations to healthcare organizations of ways in which technological advances and workforce training and development opportunities can ease the transition to the new coding system. PMID:22548024
Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J
2015-04-01
To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. Published by the BMJ Publishing Group Limited.
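The accuracy figures above are sensitivity and specificity of the ICD-based definition against the CPT determination. A minimal sketch of the computation with confidence intervals; the paper's exact CI method is not stated in the abstract, so this uses the normal-approximation (Wald) interval as an assumption:

```python
import math

def sens_spec(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs,
    each returned as (estimate, lower, upper)."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),   # true cases detected
            "specificity": prop_ci(tn, tn + fp)}   # non-cases excluded
```

With proportions near 1 and modest samples (here 117 AHT and 106 non-AHT TBI), an exact binomial interval would give the kind of asymmetric bounds reported in the abstract.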
van den Boogaart, Vivian E M; de Lussanet, Quido G; Houben, Ruud M A; de Ruysscher, Dirk; Groen, Harry J M; Marcus, J Tim; Smit, Egbert F; Dingemans, Anne-Marie C; Backes, Walter H
2016-03-01
Objectives When evaluating anti-tumor treatment response by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), it is necessary to assure its validity and reproducibility. This has not been well addressed in lung tumors. We therefore evaluated the inter-reader reproducibility of response classification by DCE-MRI in patients with non-small cell lung cancer (NSCLC) treated with bevacizumab and erlotinib enrolled in a multicenter trial. Twenty-one patients were scanned with DCE-MRI before and 3 weeks after the start of treatment. The scans were evaluated by two independent readers. The primary lung tumor was used for response assessment. Responses were assessed in terms of relative changes in the tumor mean transendothelial transfer rate (K(trans)) and its heterogeneity, quantified as the spatial standard deviation. Reproducibility was expressed by the inter-reader variability, intra-class correlation coefficient (ICC) and dichotomous response classification. The inter-reader variability and ICC for the relative K(trans) were 5.8% and 0.930, respectively. For tumor heterogeneity the inter-reader variability and ICC were 0.017 and 0.656, respectively. For the two readers the response classification for relative K(trans) was concordant in 20 of 21 patients (k=0.90, p<0.0001) and for tumor heterogeneity in 19 of 21 patients (k=0.80, p<0.0001). Strong agreement was seen in the inter-reader variability and reproducibility of response classification by the two readers of lung cancer DCE-MRI scans. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Reliability of diagnostic coding in intensive care patients
Misset, Benoît; Nakache, Didier; Vesin, Aurélien; Darmon, Mickael; Garrouste-Orgeas, Maïté; Mourvillier, Bruno; Adrie, Christophe; Pease, Sébastian; de Beauregard, Marie-Aliette Costa; Goldgran-Toledano, Dany; Métais, Elisabeth; Timsit, Jean-François
2008-01-01
Introduction Administrative coding of medical diagnoses in intensive care unit (ICU) patients is mandatory in order to create databases for use in epidemiological and economic studies. We assessed the reliability of coding between different ICU physicians. Method One hundred medical records selected randomly from 29,393 cases collected between 1998 and 2004 in the French multicenter Outcomerea ICU database were studied. Each record was sent to two senior physicians from independent ICUs who recoded the diagnoses using the International Statistical Classification of Diseases and Related Health Problems: Tenth Revision (ICD-10) after being trained according to guidelines developed by two French national intensive care medicine societies: the French Society of Intensive Care Medicine (SRLF) and the French Society of Anesthesiology and Intensive Care Medicine (SFAR). These codes were then compared with the original codes, which had been selected by the physician treating the patient. A specific comparison was done for the diagnoses of septicemia and shock (codes derived from A41 and R57, respectively). Results The ICU physicians coded an average of 4.6 ± 3.0 (range 1 to 32) diagnoses per patient, with little agreement between the three coders. The primary diagnosis was matched by both external coders in 34% (95% confidence interval (CI) 25% to 43%) of cases, by only one in 35% (95% CI 26% to 44%) of cases, and by neither in 31% (95% CI 22% to 40%) of cases. Only 18% (95% CI 16% to 20%) of all codes were selected by all three coders. Similar results were obtained for the diagnoses of septicemia and/or shock. Conclusion In a multicenter database designed primarily for epidemiological and cohort studies in ICU patients, the coding of medical diagnoses varied between different observers. This could limit the interpretation and validity of research and epidemiological programs using diagnoses as inclusion criteria. PMID:18664267
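The match statistics above (primary diagnosis matched by both, one, or neither external coder; share of codes selected by all three coders) can be computed straightforwardly. The sketch below is one plausible reading of those statistics, with invented example codes; the paper does not specify whether the "all codes" figure is taken over the union of selected codes, which is the assumption here:

```python
def primary_match_profile(original, external1, external2):
    """Per record, was the original primary diagnosis matched by both,
    exactly one, or neither external coder? Returns the three fractions."""
    both = one = neither = 0
    for orig, e1, e2 in zip(original, external1, external2):
        hits = (orig == e1) + (orig == e2)
        if hits == 2:
            both += 1
        elif hits == 1:
            one += 1
        else:
            neither += 1
    n = len(original)
    return both / n, one / n, neither / n

def codes_shared_by_all(code_sets):
    """Fraction of distinct codes (pooled over records) selected by all
    three coders. `code_sets` is a list of (set1, set2, set3) tuples."""
    shared = total = 0
    for s1, s2, s3 in code_sets:
        shared += len(s1 & s2 & s3)
        total += len(s1 | s2 | s3)
    return shared / total
```

With the abstract's figures, roughly a third of records fell into each primary-diagnosis category, and only 18% of codes survived the three-way intersection.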
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independent of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has a comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames by the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.
Mkentane, K.; Gumedze, F.; Ngoepe, M.; Davids, L. M.; Khumalo, N. P.
2017-01-01
Introduction Curly hair is reported to contain higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair based on three measurements: the curve diameter, the curl index and the number of waves. Materials and methods After ethical approval and informed consent, proximal virgin hair (6 cm) sampled from the vertex of the scalp in 48 healthy volunteers was evaluated. Three raters each scored hairs from the 48 volunteers on two occasions for both the 8- and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra- and inter-rater agreement. Results Each rater classified 480 hairs on each occasion. No rater classified any volunteer’s 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-group classification (k = 0.418) but improved for the 6-group classification (k = 0.671). The intra-rater agreement also improved for the 6-group classification (k = 0.444 to 0.648 for 8 groups versus k = 0.599 to 0.836 for 6 groups); that for the one evaluator who assessed all volunteers was good (k = 0.754). Conclusions Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable, objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine. PMID:28570555
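Kappa values like those reported here follow Cohen's formula, which corrects observed agreement between two raters for the agreement expected by chance. A minimal pure-Python sketch for nominal categories (the toy ratings below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same units with nominal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    cats = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# toy example: two raters assigning 6 hairs to curl groups 'A'/'B'
kappa = cohens_kappa(list("AABBAB"), list("ABBBAA"))
print(round(kappa, 3))  # 0.333
```

Here the raters agree on 4 of 6 hairs (p_o ≈ 0.67) but would agree on half by chance (p_e = 0.5), so kappa is only 0.33, "fair" agreement despite the seemingly high raw agreement.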
Lan Ma; Minett, James W; Blu, Thierry; Wang, William S-Y
2015-08-01
Biometrics is a growing field that permits identification of individuals by means of unique physical features. Electroencephalography (EEG)-based biometrics utilizes the small intra-personal differences and large inter-personal differences between individuals' brainwave patterns. In the past, such methods have used features derived from manually designed procedures for this purpose. Another possibility is to use convolutional neural networks (CNN) to automatically extract an individual's best and most unique neural features and conduct classification, using EEG data derived from both Resting State with Open Eyes (REO) and Resting State with Closed Eyes (REC). Results indicate that this jointly optimized CNN-based EEG biometric system yields high identification accuracy (88%) for 10-class classification. Furthermore, rich inter-personal differences can be found using a very low frequency band (0-2 Hz). Additionally, results suggest that the temporal window over which subjects can be individualized is less than 200 ms.
Bradshaw, Debbie; Groenewald, Pamela; Bourne, David E.; Mahomed, Hassan; Nojilana, Beatrice; Daniels, Johan; Nixon, Jo
2006-01-01
OBJECTIVE: To review the quality of the coding of the cause of death (COD) statistics and assess the mortality information needs of the City of Cape Town. METHODS: Using an action research approach, a study was set up to investigate the quality of COD information, the accuracy of COD coding and consistency of coding practices in the larger health subdistricts. Mortality information needs and the best way of presenting the statistics to assist health managers were explored. FINDINGS: Useful information was contained in 75% of death certificates, but nearly 60% had only a single cause certified; 55% of forms were coded accurately. Disagreement was mainly because routine coders coded the immediate instead of the underlying COD. An abridged classification of COD, based on causes of public health importance, prevalent causes and selected combinations of diseases was implemented with training on underlying cause. Analysis of the 2001 data identified the leading causes of death and premature mortality and illustrated striking differences in the disease burden and profile between health subdistricts. CONCLUSION: Action research is particularly useful for improving information systems and revealed the need to standardize the coding practice to identify underlying cause. The specificity of the full ICD classification is beyond the level of detail on the death certificates currently available. An abridged classification for coding provides a practical tool appropriate for local level public health surveillance. Attention to the presentation of COD statistics is important to enable the data to inform decision-makers. PMID:16583080
Schmitz, Matthew; Forst, Linda
2016-02-15
Inclusion of information about a patient's work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers' compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for "industry" and "occupation" based on 1990 Bureau of the Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. The objective of the study was to evaluate the intercoder reliability of NIOSH's Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. There were 359 industry and occupation responses that were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate confidence levels. Kappa was .84 both for agreement between hand coders and for the hand-coder consensus code versus NIOCCS high-confidence codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (i.e., to autocode) of 31%-36% of entered variables at the "high confidence" level and 49%-58% at the "medium confidence" level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is "substantial" at the 2-digit level, but only "fair" to "good" at the 4-digit level.
This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy and will clarify its value as inclusion of these occupational variables in the EHR is promoted.
Real-time data compression of broadcast video signals
NASA Technical Reports Server (NTRS)
Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)
1991-01-01
A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
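As a rough illustration of the DPCM loop described above, here is a hypothetical sketch with a previous-sample predictor and a uniform quantizer; the patented system's specific non-adaptive predictor, nonuniform quantizer, and multi-level Huffman stage are simplified away:

```python
def dpcm_encode(samples, step):
    """Quantize the difference between each sample and the predictor output.
    The predictor tracks the *reconstructed* signal, so quantization
    errors do not accumulate (no drift between encoder and decoder)."""
    pred, codes = 0.0, []
    for s in samples:
        q = round((s - pred) / step)   # quantized prediction error
        codes.append(q)
        pred += q * step               # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step):
    pred, out = 0.0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

samples = [0, 3, 7, 12, 10]
rec = dpcm_decode(dpcm_encode(samples, step=2), step=2)
# per-sample reconstruction error stays bounded by step / 2
assert max(abs(s - r) for s, r in zip(samples, rec)) <= 1
```

In a complete coder, the small integer codes produced by the quantizer would then be entropy-coded (here, Huffman-coded) before transmission.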
Xu, Xiayu; Ding, Wenxiang; Abràmoff, Michael D; Cao, Ruofan
2017-04-01
Retinal artery and vein classification is an important task for the automatic computer-aided diagnosis of various eye diseases and systemic diseases. This paper presents an improved supervised artery and vein classification method for retinal images. Intra-image regularization and inter-subject normalization are applied to reduce the differences in feature space. Novel features, including first-order and second-order texture features, are utilized to capture the discriminating characteristics of arteries and veins. The proposed method was tested on the DRIVE dataset and achieved an overall accuracy of 0.923. This retinal artery and vein classification algorithm serves as a potentially important tool for the early diagnosis of various diseases, including diabetic retinopathy and cardiovascular diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
Diagnostic discrepancies in retinopathy of prematurity classification
Campbell, J. Peter; Ryan, Michael C.; Lore, Emily; Tian, Peng; Ostmo, Susan; Jonas, Karyn; Chan, R.V. Paul; Chiang, Michael F.
2016-01-01
Objective To identify the most common areas for discrepancy in retinopathy of prematurity (ROP) classification between experts. Design Prospective cohort study. Subjects, Participants, and/or Controls 281 infants were identified as part of a multi-center, prospective, ROP cohort study from 7 participating centers. Each site had participating ophthalmologists who provided the clinical classification after routine examination using binocular indirect ophthalmoscopy (BIO), and obtained wide-angle retinal images, which were independently classified by two study experts. Methods Wide-angle retinal images (RetCam; Clarity Medical Systems, Pleasanton, CA) were obtained from study subjects, and two experts evaluated each image using a secure web-based module. Image-based classifications for zone, stage, plus disease, overall disease category (no ROP, mild ROP, Type II or pre-plus, and Type I) were compared between the two experts, and to the clinical classification obtained by BIO. Main Outcome Measures Inter-expert image-based agreement and image-based vs. ophthalmoscopic diagnostic agreement using absolute agreement and weighted kappa statistic. Results 1553 study eye examinations from 281 infants were included in the study. Experts disagreed on the stage classification in 620/1553 (40%) of comparisons, plus disease classification (including pre-plus) in 287/1553 (18%), zone in 117/1553 (8%), and overall ROP category in 618/1553 (40%). However, agreement for presence vs. absence of type 1 disease was >95%. There were no differences between image-based and clinical classification except for zone III disease. Conclusions The most common area of discrepancy in ROP classification is stage, although inter-expert agreement for clinically-significant disease such as presence vs. absence of type 1 and type 2 disease is high. There were no differences between image-based grading and the clinical exam in the ability to detect clinically-significant disease. 
This study provides additional evidence that image-based classification of ROP reliably detects clinically significant levels of ROP with high accuracy compared to the clinical exam. PMID:27238376
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.
1982-03-01
This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit-rate of the HDV coder to 2.4 kb/s the report discusses several methods including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. In the embedded zero-tree wavelet (EZW) coder and the set partitioning in hierarchical trees (SPIHT), the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels; in contrast, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad
2013-04-01
In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral dependence information between a chosen band-centre and its shorter- and longer-wavelength neighbours. A weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, with r = 1 at the band-centre. Various WTC values (r = 0.99, r = 0.95 and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resultant data were used in a random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79 and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results obtained from this study suggest that resampling of hyperspectral data should account for the spectral dependence information to improve overall classification accuracy and to reduce the problem of multicollinearity.
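The band-filtering step can be sketched as follows: compute Pearson's r between the chosen band-centre and every other band, and drop bands whose correlation falls below the WTC. The spectra and threshold below are invented for illustration, and the paper's convolution-based resampling is simplified to a hard cut-off:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def filter_bands(spectra, centre, wtc=0.95):
    """Return indices of bands whose correlation with the band-centre
    meets the threshold; the remaining bands would be assigned r = 0."""
    n_bands = len(spectra[0])
    centre_col = [row[centre] for row in spectra]
    kept = []
    for b in range(n_bands):
        col = [row[b] for row in spectra]
        if abs(pearson_r(centre_col, col)) >= wtc:
            kept.append(b)
    return kept

# hypothetical reflectance samples x 3 bands: band 2 is uncorrelated noise
spectra = [[1, 2, 9], [2, 4, 1], [3, 6, 5], [4, 8, 2]]
print(filter_bands(spectra, centre=0))  # [0, 1]
```

Retaining only bands highly correlated with the band-centre is what reduces multicollinearity before the random forest classification step.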
Profile of a city: characterizing and classifying urban soils in the city of Ghent
NASA Astrophysics Data System (ADS)
Delbecque, Nele; Verdoodt, Ann
2017-04-01
Worldwide, urban lands are expanding rapidly. Conversion of agricultural and natural landscapes to urban fabric can strongly influence soil properties through soil sealing, excavation, leveling, contamination, waste disposal and land management. Urban lands, often characterized by intensive use, need to deliver many production, ecological and cultural ecosystem services. To safeguard this natural capital for future generations, an improved understanding of biogeochemical characteristics, processes and functions of urban soils in time and space is essential. Additionally, existing (inter)national soil classification systems, based on the identification of soil genetic horizons, do not always allow a functional classification of urban soils. This research aims (1) to gain insight into urban soils and their properties in the city of Ghent (Belgium), and (2) to develop a procedure to functionally incorporate urban soils into existing (inter)national soil classification systems. Undisturbed soil cores (depth up to 1.25 m) are collected at 15 locations in Ghent with different times since development and land uses. Geotek MSCL-scans are taken to determine magnetic susceptibility and gamma density and to obtain high resolution images. Physico-chemical characterization of the soil cores is performed by means of detailed soil profile descriptions, traditional lab analyses, as well as proximal soil sensing techniques (XRF). The first results of this research will be presented and critically discussed to improve future efforts to characterize, classify and evaluate urban soils and their ecosystem services.
Sass, Julian; Becker, Kim; Ludmann, Dominik; Pantazoglou, Elisabeth; Dewenter, Heike; Thun, Sylvia
2018-01-01
A nationally uniform medication plan has recently become part of German legislation. The specification for the German medication plan was developed in cooperation between various stakeholders of the healthcare system. Its goal is to enhance usability and interoperability while also providing patients and physicians with the necessary information they require for a safe and high-quality therapy. Within the research and development project named Medication Plan PLUS, the specification of the medication plan was tested and reviewed for semantic interoperability in particular. In this study, the list of pharmaceutical dose forms provided in the specification was mapped to the standard terms of the European Directorate for the Quality of Medicines & HealthCare (EDQM) by different coders. The level of agreement between coders was calculated using Cohen's kappa (κ). Results show that less than half of the dose forms could be coded with EDQM standard terms. In addition, kappa was found to be moderate, indicating rather unconvincing agreement among coders. In conclusion, there is still vast room for improvement in the utilization of standardized international vocabulary, and unused potential for cross-border eHealth implementations in the future.
Inter-sectoral costs and benefits of mental health prevention: towards a new classification scheme.
Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A
2013-12-01
Many preventive interventions for mental disorders have costs and benefits that spill over to sectors outside the healthcare sector. Little is known about these "inter-sectoral costs and benefits" (ICBs) of prevention. However, to achieve an efficient allocation of scarce resources, insights on ICBs are indispensable. The main aim was to identify the ICBs related to the prevention of mental disorders and provide a sector-specific classification scheme for these ICBs. Using PubMed, a literature search was conducted for ICBs of mental disorders and related (psycho)social effects. A policy perspective was used to build the scheme's structure, which was adapted to the outcomes of the literature search. In order to validate the scheme's international applicability inside and outside the mental health domain, semi-structured interviews were conducted with (inter)national experts in the broad fields of health promotion and disease prevention. The searched-for items appeared in a total of 52 studies. The ICBs found were classified in one of four sectors: "Education", "Labor and Social Security", "Household and Leisure" or "Criminal Justice System". Psycho(social) effects were placed in a separate section under "Individual and Family". Based on interviews, the scheme remained unadjusted, apart from adding a population-based dimension. This is the first study which offers a sector-specific classification of ICBs. Given the explorative nature of the study, no guidelines on sector-specific classification of ICBs were available. Nevertheless, the classification scheme was acknowledged by an international audience and could therefore provide added value to researchers and policymakers in the field of mental health economics and prevention. The identification and classification of ICBs offers decision makers supporting information on how to optimally allocate scarce resources with respect to preventive interventions for mental disorders. 
By exploring a new area of research, which has remained largely unexplored until now, the current study has an added value as it may form the basis for the development of a tool which can be used to calculate the ICBs of specific mental health related preventive interventions.
Inter- and intra-rater reliability of nasal auscultation in daycare children.
Santos, Rita; Silva Alexandrino, Ana; Tomé, David; Melo, Cristina; Mesquita Montes, António; Costa, Daniel; Pinto Ferreira, João
2018-02-01
The aim of this study was to assess the intra- and inter-rater reliability of nasal auscultation and to analyze ear and respiratory clinical condition according to nasal auscultation. Cross-sectional study performed in 125 children aged up to 3 years old attending daycare centers. Nasal auscultation, tympanometry and the Paediatric Respiratory Severity Score (PRSS) were applied to all children. Nasal sounds were classified by an expert panel in order to determine nasal auscultation's intra- and inter-rater reliability. The classification of nasal sounds was assessed against tympanometric and PRSS values. Nasal auscultation revealed substantial inter-rater (K=0.75) and intra-rater (K=0.69, K=0.61 and K=0.72) reliability. Children with a "non-obstructed" classification revealed a lower peak pressure (t=-3.599, P<0.001 in the left ear; t=-2.258, P=0.026 in the right ear) and a higher compliance (t=-2.728, P=0.007 in the left ear; t=-3.830, P<0.001 in the right ear) in both ears. There was an association between the classification of sounds and tympanogram types in both ears (χ²=11.437, P=0.003 in the left ear; χ²=13.535, P=0.001 in the right ear). Children with a "non-obstructed" classification had a healthier respiratory condition. Nasal auscultation revealed substantial intra- and inter-rater reliability and exhibited important differences according to ear and respiratory clinical conditions. Nasal auscultation in pediatrics seems to be an original topic as well as a simple method that can be used to identify early signs of nasopharyngeal obstruction.
Entropy reduction via simplified image contourization
NASA Technical Reports Server (NTRS)
Turner, Martin J.
1993-01-01
The process of contourization, which converts a raster image into a set of plateaux or contours, is presented. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
Towards Automatic Classification of Wikipedia Content
NASA Astrophysics Data System (ADS)
Szymański, Julian
Wikipedia - the Free Encyclopedia - encounters the problem of proper classification of new articles every day. The process of assigning articles to categories is performed manually, and it is a time-consuming task. It requires knowledge of Wikipedia's structure that is beyond typical editor competence, which leads to human error: omitted or incorrect assignments of articles to categories. The article presents an application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.
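A linear classifier over text features, as used here, can be sketched in a few lines. The paper uses an SVM over hyperlink and word-content representations; the stdlib-only sketch below swaps in a perceptron as the linear learner over bag-of-words features, and the toy documents and category labels are invented:

```python
from collections import Counter

def featurize(text):
    # bag-of-words: token -> count
    return Counter(text.lower().split())

def train_perceptron(docs, labels, epochs=10):
    """Binary linear classifier over bag-of-words features; labels are +1/-1."""
    w = Counter()
    for _ in range(epochs):
        for doc, y in zip(docs, labels):
            f = featurize(doc)
            score = sum(w[t] * c for t, c in f.items())
            if y * score <= 0:                 # misclassified: update weights
                for t, c in f.items():
                    w[t] += y * c
    return w

def predict(w, text):
    return 1 if sum(w[t] * c for t, c in featurize(text).items()) > 0 else -1

docs = ["category theory functor", "algebra group ring",
        "football match goal", "tennis match serve"]
labels = [1, 1, -1, -1]   # +1 mathematics, -1 sport
w = train_perceptron(docs, labels)
print(predict(w, "group theory"))   # 1
print(predict(w, "football serve")) # -1
```

An SVM differs from this sketch by maximizing the margin between classes rather than merely finding any separating hyperplane, which generally improves generalization on sparse text features.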
Development of the Responsiveness to Child Feeding Cues Scale
Hodges, Eric A.; Johnson, Susan L.; Hughes, Sheryl O.; Hopkinson, Judy M.; Butte, Nancy F.; Fisher, Jennifer O.
2013-01-01
Parent-child feeding interactions during the first two years of life are thought to shape child appetite and obesity risk, but remain poorly studied. This research was designed to develop and assess the Responsiveness to Child Feeding Cues Scale (RCFCS), an observational measure of caregiver responsiveness to child feeding cues relevant to obesity. General responsiveness during feeding, as well as maternal responsiveness to child hunger and fullness, were rated during mid-morning feeding occasions by 3 trained coders using digital recordings. Initial inter-rater reliability and criterion validity were evaluated in a sample of 144 ethnically diverse mothers of healthy 7- to 24-month-old children. Maternal self-report of demographics and measurements of maternal/child anthropometrics were obtained. Inter-rater agreement for most variables was excellent (ICC>0.80). Mothers tended to be more responsive to child hunger than fullness cues (p<0.001). Feeding responsiveness dimensions were associated with demographics, including maternal education, maternal body mass index, and child age, and with aspects of feeding, including breastfeeding duration and self-feeding. The RCFCS is a reliable observational measure of responsive feeding for children <2 years of age that is relevant to obesity in early development. PMID:23419965
Proximal humeral fracture classification systems revisited.
Majed, Addie; Macleod, Iain; Bull, Anthony M J; Zyto, Karol; Resch, Herbert; Hertel, Ralph; Reilly, Peter; Emery, Roger J H
2011-10-01
This study evaluated several classification systems and expert surgeons' anatomic understanding of these complex injuries based on a consecutive series of patients. We hypothesized that current proximal humeral fracture classification systems, regardless of imaging methods, are not sufficiently reliable to aid clinical management of these injuries. Complex fractures in 96 consecutive patients were investigated by generation of rapid sequence prototyping models from computed tomography Digital Imaging and Communications in Medicine (DICOM) imaging data. Four independent senior observers were asked to classify each model using 4 classification systems: Neer, AO, Codman-Hertel, and a prototype classification system by Resch. Interobserver and intraobserver κ coefficient values were calculated for the overall classification system and for selected classification items. The κ coefficient values for interobserver reliability were 0.33 for Neer, 0.11 for AO, 0.44 for Codman-Hertel, and 0.15 for Resch. Interobserver reliability κ coefficient values were 0.32 for the number of fragments and 0.30 for the anatomic segment involved using the Neer system, 0.30 for the AO type (A, B, C), and 0.53, 0.48, and 0.08 for the Resch impaction/distraction, varus/valgus and flexion/extension subgroups, respectively. Three-part fractures showed low reliability for the Neer and AO systems. Currently available evidence suggests that the fracture classifications in use have poor intra- and inter-observer reliability regardless of the imaging modality used, making these injuries difficult to treat and hampering scientific research as well. This study was undertaken to evaluate the reliability of several systems using rapid sequence prototype models. Overall interobserver κ values represented slight to moderate agreement. The most reliable interobserver scores were found with the Codman-Hertel classification, followed by elements of Resch's trial system. The AO system had the lowest values.
The higher interobserver reliability values for the Codman-Hertel system showed that it is the only comprehensive fracture description studied, whereas the novel classification by Resch showed clear definition with respect to varus/valgus and impaction/distraction angulation. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. All rights reserved.
Localized contourlet features in vehicle make and model recognition
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, B. S.
2009-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel, the usefulness of multi-resolution feature analysis techniques leading to efficient object classification algorithms has received close attention from the research community. To this effect, Contourlet transforms, which can provide an efficient directional multi-resolution image representation, have recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing the classification rate by up to 4% compared to the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm can achieve the increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two-Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, preserving the features with high inter-class variance and low intra-class variance.
The Development of an Automatic Dialect Classification Test. Final Report.
ERIC Educational Resources Information Center
Willis, Clodius
These experiments investigated and described intra-subject, inter-subject, and inter-group variation in perception of synthetic vowels as well as the possibility that inter-group differences reflect dialect differences. Two tests were made covering the full phonetic range of English vowels. In two other tests subjects chose between one of two…
Ofstad, Eirik H; Frich, Jan C; Schei, Edvin; Frankel, Richard M; Gulbrandsen, Pål
2016-01-01
Objective The medical literature lacks a comprehensive taxonomy of decisions made by physicians in medical encounters. Such a taxonomy might be useful in understanding the physician-centred, patient-centred and shared decision-making in clinical settings. We aimed to identify and classify all decisions emerging in conversations between patients and physicians. Design Qualitative study of video recorded patient–physician encounters. Participants and setting 380 patients in consultations with 59 physicians from 17 clinical specialties and three different settings (emergency room, ward round, outpatient clinic) in a Norwegian teaching hospital. A randomised sample of 30 encounters from internal medicine was used to identify and classify decisions, a maximum variation sample of 20 encounters was used for reliability assessments, and the remaining encounters were analysed to test for applicability across specialties. Results On the basis of physician statements in our material, we developed a taxonomy of clinical decisions—the Decision Identification and Classification Taxonomy for Use in Medicine (DICTUM). We categorised decisions into 10 mutually exclusive categories: gathering additional information, evaluating test results, defining problem, drug-related, therapeutic procedure-related, legal and insurance-related, contact-related, advice and precaution, treatment goal, and deferment. Four-coder inter-rater reliability using Krippendorff's α was 0.79. Conclusions DICTUM represents a precise, detailed and comprehensive taxonomy of medical decisions communicated within patient–physician encounters. Compared to previous normative frameworks, the taxonomy is descriptive, substantially broader and offers new categories to the variety of clinical decisions. The taxonomy could prove helpful in studies on the quality of medical work, use of time and resources, and understanding of why, when and how patients are or are not involved in decisions. PMID:26868946
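The four-coder inter-rater reliability of 0.79 reported for DICTUM uses Krippendorff's α, which generalizes pairwise agreement statistics to any number of coders and tolerates units rated by different numbers of coders. A minimal sketch for nominal data, using made-up units and labels rather than the study's data:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    units: one list per unit, holding the labels assigned by however many
    coders rated that unit (units with fewer than two ratings are skipped)."""
    coincidences, totals, n = Counter(), Counter(), 0
    for u in (u for u in units if len(u) >= 2):
        m = len(u)
        for a, b in permutations(u, 2):   # ordered label pairs within the unit
            coincidences[(a, b)] += 1 / (m - 1)
        totals.update(u)
        n += m
    d_obs = sum(v for (a, b), v in coincidences.items() if a != b) / n
    d_exp = sum(totals[a] * totals[b]
                for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1 - d_obs / d_exp

units = [["a", "a"], ["a", "b"], ["b", "b"], ["b", "b"]]  # hypothetical ratings
print(round(krippendorff_alpha_nominal(units), 2))  # → 0.53
```

α = 1 means perfect agreement and α = 0 means agreement no better than chance; values around 0.8, like the one reported, are commonly treated as acceptable reliability.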
DiClemente, Carlo C; Crouch, Taylor Berens; Norwood, Amber E Q; Delahanty, Janine; Welsh, Christopher
2015-03-01
Screening, brief intervention, and referral to treatment (SBIRT) has become an empirically supported and widely implemented approach in primary and specialty care for addressing substance misuse. Accordingly, training of providers in SBIRT has increased exponentially in recent years. However, the quality and fidelity of training programs and subsequent interventions are largely unknown because of the lack of SBIRT-specific evaluation tools. The purpose of this study was to create a coding scale to assess quality and fidelity of SBIRT interactions addressing alcohol, tobacco, illicit drugs, and prescription medication misuse. The scale was developed to evaluate performance in an SBIRT residency training program. Scale development was based on training protocol and competencies with consultation from Motivational Interviewing coding experts. Trained medical residents practiced SBIRT with standardized patients during 10- to 15-min videotaped interactions. This study included 25 tapes from the Family Medicine program coded by 3 unique coder pairs with varying levels of coding experience. Interrater reliability was assessed for overall scale components and individual items via intraclass correlation coefficients. Coder pair-specific reliability was also assessed. Interrater reliability was excellent overall for the scale components (>.85) and nearly all items. Reliability was higher for more experienced coders, though still adequate for the trained coder pair. Descriptive data demonstrated a broad range of adherence and skills. Subscale correlations supported concurrent and discriminant validity. Data provide evidence that the MD3 SBIRT Coding Scale is a psychometrically reliable coding system for evaluating SBIRT interactions and can be used to evaluate implementation skills for fidelity, training, assessment, and research. Recommendations for refinement and further testing of the measure are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Evaluation and integration of disparate classification systems for clefts of the lip
Wang, Kathie H.; Heike, Carrie L.; Clarkson, Melissa D.; Mejino, Jose L. V.; Brinkley, James F.; Tse, Raymond W.; Birgfeld, Craig B.; Fitzsimons, David A.; Cox, Timothy C.
2014-01-01
Orofacial clefting is a common birth defect with wide phenotypic variability. Many systems have been developed to classify cleft patterns to facilitate diagnosis, management, surgical treatment, and research. In this review, we examine the rationale for different existing classification schemes and determine their inter-relationships, as well as strengths and deficiencies for subclassification of clefts of the lip. The various systems differ in how they describe and define attributes of cleft lip (CL) phenotypes. Application and analysis of the CL classifications reveal discrepancies that may result in errors when comparing studies that use different systems. These inconsistencies in terminology, variable levels of subclassification, and ambiguity in some descriptions may confound analyses and impede further research aimed at understanding the genetics and etiology of clefts, development of effective treatment options for patients, as well as cross-institutional comparisons of outcome measures. Identification and reconciliation of discrepancies among existing systems is the first step toward creating a common standard to allow for a more explicit interpretation that will ultimately lead to a better understanding of the causes and manifestations of phenotypic variations in clefting. PMID:24860508
The validity and reliability of a simple semantic classification of foot posture.
Cross, Hugh A; Lehman, Linda
2008-12-01
The Simple Semantic Classification (SSC) is described as a pragmatic method to assist in the assessment of the weight-bearing foot. It was designed for application by therapists and technicians working in underdeveloped situations, after they have had basic orientation in foot function. The aim was to present evidence of the validity and inter-observer reliability of the SSC. Thirteen physiotherapists from LEPRA India projects and 12 physical therapists functioning within the National Programme for the Elimination of Hansen's Disease (PNEH), Brazil, participated in an inter-observer exercise. Inter-observer agreement was gauged using the Kappa statistic. The results of the inter-observer exercise were dependent on observations of foot posture made from photographs. This was necessary to ensure that the procedure was standardised for participants in different countries. The method had limitations which were partly reflected in the results. The level of agreement between the principal investigator and Indian physiotherapists was Kappa = 0.58. The level of agreement between Brazilian physical therapists and the principal investigator was Kappa = 0.70. The authors opine that the results were sufficiently compelling to suggest that the Simple Semantic Classification can be used as a field method to identify people at increased risk of foot pathologies.
ERIC Educational Resources Information Center
Klevens, Joanne; Leeb, Rebecca T.
2010-01-01
Objective: To describe the distribution of child maltreatment fatalities of children under 5 by age, sex, race/ethnicity, type of maltreatment, and relationship to alleged perpetrator using data from the National Violent Death Reporting System (NVDRS). Study design: Two independent coders reviewed information from death certificates, medical…
On the optimality of a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.
1993-01-01
Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques were performed on several widely used images, and the results further validate the coder's optimality.
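The code options at the heart of such a coder can be sketched as Rice codes (Golomb codes with a power-of-two divisor 2^k): the quotient is sent in unary, followed by k remainder bits, and the coder adaptively picks the k that yields the shortest output for each block. A simplified illustration, not the flight VLSI implementation:

```python
def rice_encode(value, k):
    """Rice code (Golomb code with divisor 2**k) for a non-negative integer."""
    q = value >> k
    r = value & ((1 << k) - 1)
    code = "1" * q + "0"              # quotient in unary, terminated by a 0
    if k:
        code += format(r, "b").zfill(k)   # remainder in k binary digits
    return code

def rice_decode(bits, k):
    q = bits.index("0")               # length of the unary run
    r = int(bits[q + 1:q + 1 + k] or "0", 2)
    return (q << k) | r

def best_k(block, kmax=8):
    """Adaptive option selection: the k giving the shortest total code for a block."""
    return min(range(kmax + 1),
               key=lambda k: sum(len(rice_encode(v, k)) for v in block))

print(rice_encode(9, 2))  # → 11001
```

Small samples cost few bits and large samples degrade gracefully, which is why this family of codes performs well on the Laplacian-distributed prediction residuals mentioned in the abstract.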
Fine-Granularity Functional Interaction Signatures for Characterization of Brain Conditions
Hu, Xintao; Zhu, Dajiang; Lv, Peili; Li, Kaiming; Han, Junwei; Wang, Lihong; Shen, Dinggang; Guo, Lei; Liu, Tianming
2014-01-01
In the human brain, functional activity occurs at multiple spatial scales. Current studies on functional brain networks and their alterations in brain diseases via resting-state functional magnetic resonance imaging (rs-fMRI) are generally either at local scale (regionally confined analysis and inter-regional functional connectivity analysis) or at global scale (graph theoretic analysis). In contrast, inferring functional interaction at fine-granularity sub-network scale has not been adequately explored yet. Here our hypothesis is that functional interaction measured at fine-granularity sub-network scale can provide new insight into the neural mechanisms of neurological and psychological conditions, thus offering complementary information for healthy and diseased population classification. In this paper, we derived fine-granularity functional interaction (FGFI) signatures in subjects with Mild Cognitive Impairment (MCI) and Schizophrenia by diffusion tensor imaging (DTI) and rs-fMRI, and used patient-control classification experiments to evaluate the distinctiveness of the derived FGFI features. Our experimental results have shown that the FGFI features alone can achieve comparable classification performance compared with the commonly used inter-regional connectivity features. However, the classification performance can be substantially improved when FGFI features and inter-regional connectivity features are integrated, suggesting the complementary information achieved from the FGFI signatures. PMID:23319242
Detection Of Malware Collusion With Static Dependence Analysis On Inter-App Communication
2016-12-08
Final technical report, Virginia Tech, December 2016. Contract number FA8750-15-2-0076; program element 61101E. Subject terms: malware collusion; inter-app communication; static dependence analysis.
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
CTEPP STANDARD OPERATING PROCEDURE FOR TRANSLATING VIDEOTAPES OF CHILD ACTIVITIES (SOP-4.13)
The EPA will conduct a two-day video translation workshop to demonstrate to coders the procedures for translating the activity patterns of preschool children on videotape. The coders will be required to pass reliability tests to successfully complete the training requirements of ...
E/M coding problems plague physicians, coders.
King, Mitchell S; Lipsky, Martin S; Sharp, Lisa
2002-01-01
As the government turns its high beams on fraudulent billing, physician E/M coding is raising questions. With several studies spotlighting the difficulty physicians have in applying CPT E/M codes, the authors wanted to know if credentialed coders had the same problem. Here's what they found.
Rios, Anthony; Kavuluru, Ramakanth
2013-09-01
Extracting diagnosis codes from medical records is a complex task carried out by trained coders by reading all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in recent years. Machine learning approaches to multi-label text classification provide an important methodology in this task, given that each EMR can be associated with multiple codes. In this paper, we study the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state-of-the-art on the gold standard dataset, on our complex in-house dataset, we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance.
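Probabilistic threshold optimization, one of the three techniques named above, is commonly done per label: instead of predicting a code whenever its probability exceeds 0.5, a cutoff is tuned on held-out data to maximize F1 for that code. A minimal sketch with hypothetical validation scores (the paper's exact procedure may differ):

```python
def best_threshold(scores, labels, grid=None):
    """Choose the probability cutoff that maximizes F1 for one label
    on a held-out validation set."""
    grid = grid or [i / 20 for i in range(1, 20)]

    def f1(t):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

    return max(grid, key=f1)

# Hypothetical validation scores for one ICD code
scores = [0.9, 0.8, 0.4, 0.2]
labels = [1, 1, 1, 0]
t = best_threshold(scores, labels)  # lowering the cutoff below 0.5 recovers the 0.4 case
```

Per-label tuning matters for rare codes, whose calibrated probabilities often sit well below 0.5 even for true positives.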
InterProScan 5: genome-scale protein function classification
Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah
2014-01-01
Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists are frequently trying to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and the complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and/or conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBL-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
On the identification of sleep stages in mouse electroencephalography time-series.
Lampert, Thomas; Plano, Andrea; Austin, Jim; Platt, Bettina
2015-05-15
The automatic identification of sleep stages in electroencephalography (EEG) time-series is a long desired goal for researchers concerned with the study of sleep disorders. This paper presents advances towards achieving this goal, with particular application to EEG time-series recorded from mice. Approaches in the literature apply supervised learning classifiers; however, these do not reach the performance levels required for use within a laboratory. In this paper, detection reliability is increased, most notably in the case of REM stage identification, by naturally decomposing the problem and applying a support vector machine (SVM) based classifier to each of the EEG channels. Their outputs are integrated within a multiple classifier system. Furthermore, there exists no general consensus on the ideal choice of parameter values in such systems. Therefore, an investigation into the effects upon the classification performance is presented by varying parameters such as the epoch length, feature size, number of training samples, and the method for calculating the power spectral density estimate. Finally, the results of these investigations are brought together to demonstrate the performance of the proposed classification algorithm in two cases: intra-animal classification and inter-animal classification. It is shown that, within a dataset of 10 EEG recordings, and using less than 1% of an EEG as training data, mean classification errors of Awake 6.45%, NREM 5.82%, and REM 6.65% (with standard deviations less than 0.6%) are achieved in intra-animal analysis and, when using the equivalent of 7% of one EEG as training data, Awake 10.19%, NREM 7.75%, and REM 17.43% are achieved in inter-animal analysis (with mean standard deviations of 6.42%, 2.89%, and 9.69% respectively). A software package implementing the proposed approach will be made available through Cybula Ltd. Copyright © 2015 Elsevier B.V. All rights reserved.
Colorectal Cancer Classification and Cell Heterogeneity: A Systems Oncology Approach
Blanco-Calvo, Moisés; Concha, Ángel; Figueroa, Angélica; Garrido, Federico; Valladares-Ayerbes, Manuel
2015-01-01
Colorectal cancer is a heterogeneous disease that manifests through diverse clinical scenarios. For many years, our knowledge about the variability of colorectal tumors was limited to the histopathological analysis from which generic classifications associated with different clinical expectations are derived. However, we are now beginning to understand that beneath the intense pathological and clinical variability of these tumors lies strong genetic and biological heterogeneity. Thus, with the increasing available information on inter-tumor and intra-tumor heterogeneity, the classical pathological approach is being displaced in favor of novel molecular classifications. In the present article, we summarize the most relevant proposals of molecular classifications obtained from the analysis of colorectal tumors using powerful high throughput techniques and devices. We also discuss the role that cancer systems biology may play in the integration and interpretation of the high amount of data generated and the challenges to be addressed in the future development of precision oncology. In addition, we review the current state of implementation of these novel tools in the pathological laboratory and in clinical practice. PMID:26084042
A Manual for Coding Descriptions, Interpretations, and Evaluations of Visual Art Forms.
ERIC Educational Resources Information Center
Acuff, Bette C.; Sieber-Suppes, Joan
This manual presents a system for categorizing stated esthetic responses to paintings. It is primarily a training manual for coders, but it may also be used for teaching reflective thinking skills and for evaluating programs of art education. The coding system contains 33 subdivisions of esthetic responses under three major categories: Cue…
Reliability of the Robinson classification for displaced comminuted midshaft clavicular fractures.
Stegeman, Sylvia A; Fernandes, Nicole C; Krijnen, Pieta; Schipper, Inger B
2015-01-01
This study aimed to assess the reliability of the Robinson classification for displaced comminuted midshaft fractures. A total of 102 surgeons and 52 radiologists classified 15 displaced comminuted midshaft clavicular fractures on anteroposterior (AP) and 30-degree caudocephalad radiographs twice. For both surgeons and radiologists, inter-observer and intra-observer agreement significantly improved after showing the 30-degree caudocephalad view in addition to the AP view. Radiologists had significantly higher inter- and intra-observer agreement than surgeons after judging both radiographs (multirater κ of 0.81 vs. 0.56; intra-observer κ of 0.73 vs. 0.44). We advise to use two-plane radiography and to routinely incorporate the Robinson classification in the radiology reports. Copyright © 2015 Elsevier Inc. All rights reserved.
DOT National Transportation Integrated Search
2012-09-01
This report summarizes the results of a 13-month effort by CodeRed Business Solutions (CRBS) to consider how urban rail transit agencies can leverage data within their maintenance management systems to build asset inventories for higher-level analysi...
Breast density characterization using texton distributions.
Petroudi, Styliani; Brady, Michael
2011-01-01
Breast density has been shown to be one of the most significant risk factors for developing breast cancer, with women with dense breasts at four to six times higher risk. The Breast Imaging Reporting and Data System (BI-RADS) has a four-class classification scheme that describes the different breast densities. However, there is great inter- and intra-observer variability among clinicians in reporting a mammogram's density class. This work presents a novel texture classification method and its application for the development of a completely automated breast density classification system. The new method represents the mammogram using textons, which can be thought of as the building blocks of texture under the operational definition of Leung and Malik as clustered filter responses. The proposed method characterizes the mammographic appearance of the different density patterns by evaluating the texton spatial dependence matrix (TSDM) in the breast region's corresponding texton map. The TSDM is a texture model that captures both statistical and structural texture characteristics. The normalized TSDM matrices are evaluated for mammograms from the different density classes and corresponding texture models are established. Classification is achieved using a chi-square distance measure. The fully automated TSDM breast density classification method is quantitatively evaluated on mammograms from all density classes from the Oxford Mammogram Database. The incorporation of texton spatial dependencies allows for classification accuracy of over 82%. The breast density classification accuracy is better using the texton TSDM than simple texton histograms.
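The chi-square distance step can be sketched as nearest-model matching: each density class has a normalized texton model, and a new mammogram is assigned to the class whose model is closest. The sketch below uses plain histograms with made-up 3-bin values for brevity; the paper operates on flattened, normalized TSDM matrices, but the distance and decision rule are the same:

```python
def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms (or flattened matrices)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def classify_density(hist, class_models):
    """Assign the density class whose model is nearest in chi-square distance."""
    return min(class_models, key=lambda c: chi_square_distance(hist, class_models[c]))

# Hypothetical 3-bin texton statistics for two density classes
models = {"fatty": [0.7, 0.2, 0.1], "dense": [0.1, 0.2, 0.7]}
print(classify_density([0.6, 0.3, 0.1], models))  # → fatty
```

The per-bin denominator makes the chi-square distance weight differences in sparsely populated bins more heavily than a plain Euclidean distance would, which suits histogram-like texture features.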
2016-01-01
Background Inclusion of information about a patient’s work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers’ compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for “industry” and “occupation” based on 1990 Bureau of the Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. Objective The objective of the study was to evaluate the intercoder reliability of NIOSH’s Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Methods Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. A total of 359 industry and occupation responses were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate confidence levels. Results Kappa was .84 both for agreement between hand coders and for the hand-coder consensus code versus NIOCCS high-confidence codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa = .56 to .70. In this study, NIOCCS was able to autocode 31%-36% of entered variables at the “high confidence” level and 49%-58% at the “medium confidence” level. Autocoding (production) rates are somewhat lower than those reported by NIOSH.
Agreement between manually coded and autocoded data is “substantial” at the 2-digit level, but only “fair” to “good” at the 4-digit level. Conclusions This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy and will clarify its value as inclusion of these occupational variables in the EHR is promoted. PMID:26878932
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. This paper explains the performances of these coders and compares them both with those of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction strategies were included in the coder design.
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suitable for tackling the problem of sperm head classification.
We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
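Fleiss' kappa, the multi-rater agreement statistic used in this study, is computed from a units-by-categories count table rather than from paired label lists. A generic sketch with hypothetical counts, not the SCIAN expert data:

```python
def fleiss_kappa(table):
    """Fleiss' kappa from a units-by-categories count table:
    table[i][j] = number of raters who put unit i into category j;
    every row must sum to the same number of raters n."""
    N, n = len(table), sum(table[0])
    k = len(table[0])
    p_j = [sum(row[j] for row in table) / (N * n) for j in range(k)]        # category shares
    per_unit = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    p_bar = sum(per_unit) / N        # mean observed pairwise agreement
    p_exp = sum(p * p for p in p_j)  # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

# Hypothetical table: 4 sperm heads rated by 3 experts into 2 categories
print(round(fleiss_kappa([[2, 1], [1, 2], [3, 0], [0, 3]]), 2))  # → 0.33
```

Low values like the hypothetical 0.33 here correspond to the "high degree of inter-expert variability" the abstract reports: raters agree only modestly more often than chance.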
Blaschke, V; Brauns, B; Khaladj, N; Schmidt, C; Emmert, S
2018-02-27
Hospital revenues generated by diagnosis-related groups (DRGs) are in part dependent on the coding of secondary diagnoses. Therefore, more and more hospitals entrust specialized coders with this task, thereby relieving doctors from time-consuming administrative burdens and establishing a highly professionalized coding environment. However, it is largely unknown whether the revenues generated by the coders do indeed exceed their incurred costs. Coding data from the departments of dermatology, ophthalmology, and infectious diseases at Rostock University Hospital from 2007-2016 were analyzed for the effects of secondary diagnoses on the resulting DRG, i.e., hospital charges. Ophthalmological cases were highly resistant to the addition of secondary diagnoses. In contrast, adding secondary diagnoses to cases from infectious diseases resulted in 15% higher revenues. Although dermatological and infectious cases share the same sensitivity to secondary diagnoses, higher revenues could only rarely be realized in dermatology, probably owing to a younger, less multimorbid patient population. Except for ophthalmology, entrusting specialized coders with clinical coding generates additional revenues through the coding of secondary diagnoses which exceed the costs of employing these coders.
Giordano, Vincenzo; Koch, Hilton Augusto; Mendes, Carlos Henrique; Bergamin, André; de Souza, Felipe Serrão; do Amaral, Ney Pecegueiro
2015-02-01
The aim of this study was to evaluate the inter- and intra-observer agreement in the initial diagnosis and classification by means of plain radiographs and CT scans of tibial plateau fractures photographed and sent via WhatsApp Messenger. The increasing popularity of smartphones has driven the development of technology for data transmission and imaging and generated a growing interest in the use of these devices as diagnostic tools. The emergence of WhatsApp Messenger technology, which is available for various platforms used by smartphones, has led to an improvement in the quality and resolution of images sent and received. The images (plain radiographs and CT scans) were obtained from 13 cases of tibial plateau fractures using the iPhone 5 (Apple Inc., Cupertino, CA, USA) and were sent to six observers via the WhatsApp Messenger application. The observers were asked to determine the standard deviation and type of injury, the classification according to the Schatzker and the Luo classification schemes, and whether the CT scan changed the classification. The six observers independently assessed the images on two separate occasions, 15 days apart. The inter- and intra-observer agreement for both periods of the study ranged from excellent to perfect (0.75 < κ < 1.0) across all survey questions. When asked if the inclusion of the CT images would change their final X-ray classification (Schatzker or Luo), the inter- and intra-observer agreement was perfect (κ = 1) on both assessment occasions. We found an excellent inter- and intra-observer agreement in the imaging assessment of tibial plateau fractures sent via WhatsApp Messenger. The authors now propose the systematic use of the application to facilitate faster documentation and obtaining the opinion of an experienced consultant when not on call.
Finally, we think the use of the WhatsApp Messenger as an adjuvant tool could be broadened to other clinical centres to assess its viability in other skeletal and non-skeletal trauma situations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Pedersen, Ken Steen; Toft, Nils
2011-03-01
The objective of the current study was to evaluate intra- and inter-observer agreement using a descriptive classification scale with four categories, descriptive text and pictures for assessment of consistency in faecal samples from pigs post weaning. The four consistency categories were score one=firm and shaped, score two=soft and shaped, score three=loose and score four=watery. Five observers from the same veterinary practice examined 100 faecal samples using the scale with four categories. Four of the observers examined the 100 faecal samples twice within the same day. Within observers the difference in proportions for the individual consistency categories between two examinations was on average 0.04 (range: 0-0.10). The mean intra-observer agreement was 0.82 (range: 0.72-0.91) with a mean kappa value of 0.76 (range: 0.61-0.88). For inter-observer agreement overall kappa was 0.64. For the 10 pair-wise comparisons the mean inter-observer agreement was 0.73 (range: 0.61-0.90) with a mean kappa value of 0.64 (range: 0.48-0.87). The difference in proportions for the individual consistency categories was on average 0.08 (range: 0-0.17). In conclusion, the agreement observed for the descriptive classification scale with four categories, descriptive text and pictures may be categorized as a substantial to almost perfect intra-observer agreement and a moderate to almost perfect inter-observer agreement. However, more objective measures than clinical scales may still be needed to improve intra- and inter-observer agreement in research studies. Copyright © 2010 Elsevier B.V. All rights reserved.
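The pair-wise agreement values reported in the abstract above are Cohen's kappa between two observers' label sequences. A minimal illustrative sketch (not the study's code):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' label sequences of equal length."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: proportion of units where both raters agree.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of the two raters' marginal proportions.
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)
```

On the conventional scale, values of 0.61-0.80 are read as substantial agreement and 0.81-1.00 as almost perfect, matching the categorization used in the abstract.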
A Raman spectroscopy bio-sensor for tissue discrimination in surgical robotics.
Ashok, Praveen C; Giardini, Mario E; Dholakia, Kishan; Sibbett, Wilson
2014-01-01
We report the development of a fiber-based Raman sensor to be used in tumour margin identification during endoluminal robotic surgery. Although this is a generic platform, the sensor we describe was adapted for the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) robotic platform. On such a platform, the Raman sensor is intended to identify ambiguous tissue margins during robot-assisted surgeries. To maintain sterility of the probe during surgical intervention, a disposable sleeve was specially designed. A straightforward, user-compatible interface was implemented, in which a supervised multivariate classification algorithm classifies different tissue types based on their specific Raman fingerprints, so that the system can be used without prior knowledge of spectroscopic data analysis. The protocol avoids inter-patient variability in data, and the sensor system is not restricted to the classification of a particular tissue type. Representative tissue classification assessments were performed using this system on excised tissue. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2009-05-01
Artificial neural networks (ANNs) are powerful methods for the classification of multi-dimensional data as well as for the control of dynamic systems. In general terms, ANNs consist of neurons that are, e.g., arranged in layers and interconnected by real-valued or binary neural couplings or weights. ANNs mimic the processing that takes place in biological brains. The classification and generalization capabilities of ANNs are given by the interconnection architecture and the coupling strengths. To perform a certain classification or control task with a particular ANN architecture (i.e., number of neurons, number of layers, etc.), the inter-neuron couplings and their corresponding coupling strengths must be determined (1) either by a priori design (i.e., manually) or (2) using training algorithms such as error back-propagation. The more complex the classification or control task, the less obvious it is how to determine an a priori design of an ANN, and, as a consequence, the architecture choice becomes somewhat arbitrary. Furthermore, rather than determining directly, for a given architecture, the coupling strengths necessary to perform the classification or control task, these have to be obtained/learned through training of the ANN on test data. We report on the use of a Stochastic Optimization Framework (SOF; Fink, SPIE 2008) for the autonomous self-configuration of artificial neural networks (i.e., the determination of the number of hidden layers, number of neurons per hidden layer, interconnections between neurons, and respective coupling strengths) for performing classification or control tasks. This may provide an approach towards cognizant and self-adapting computing architectures and systems.
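The idea of determining coupling strengths by stochastic search rather than back-propagation can be illustrated with a toy hill-climb on XOR. This is only a sketch: the network size (2-2-1), task, and search parameters are assumptions for illustration, and the SOF described above also searches over the architecture itself, which this omits.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)   # XOR: not linearly separable

def forward(w, X):
    """A 2-2-1 feedforward net with tanh hidden units; w packs all 9 couplings."""
    W1 = w[:4].reshape(2, 2)
    b1 = w[4:6]
    W2 = w[6:8]
    b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Stochastic hill-climbing over the coupling strengths: propose a random
# perturbation and keep it only if the task error decreases.
w = rng.normal(0.0, 1.0, size=9)
err0 = mse(w)
best_err = err0
for _ in range(5000):
    cand = w + rng.normal(0.0, 0.3, size=9)
    cand_err = mse(cand)
    if cand_err < best_err:
        w, best_err = cand, cand_err
```

By construction the error is non-increasing, so no gradient information is ever needed; the price is that such searches can stall in local minima, which framework-level restarts or architecture changes are meant to address.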
Telemetry advances in data compression and channel coding
NASA Technical Reports Server (NTRS)
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
Addressed in this paper is the dependence of telecommunication channel coding, forward error-correcting coding, and source data compression coding on integrated circuit technology. Emphasis is placed on real-time, high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
Nagler, Rebekah H.; Bigman, Cabral A.; Ramanadhan, Shoba; Ramamurthi, Divya; Viswanath, K.
2016-01-01
Background Americans remain under-informed about cancer and other health disparities and the social determinants of health (SDH). The news media may be contributing to this knowledge deficit, whether by discussing these issues narrowly or ignoring them altogether. Because local media are particularly important in influencing public opinion and support for public policies, this study examines the prevalence and framing of disparities/SDH in local mainstream and ethnic print news. Methods We conducted a multi-method content analysis of local mainstream (English-language) and ethnic (Spanish-language) print news in two lower-income cities in New England with substantial racial/ethnic minority populations. After establishing inter-coder reliability (kappa=0.63–0.88), coders reviewed the primary English- and Spanish-language newspaper in each city, identifying both disparities and non-disparities health stories published between February 2010 and January 2011. Results Local print news coverage of cancer and other health disparities was rare. Of 650 health stories published across four newspapers during the one-year study period, only 21 (3.2%) discussed disparities/SDH. Although some stories identified causes of and solutions for disparities, these were often framed in individual (e.g., poor dietary habits) rather than social contextual terms (e.g., lack of food availability/affordability). Cancer and other health stories routinely missed opportunities to discuss disparities/SDH. Conclusion Local mainstream and ethnic media may be ideal targets for multilevel interventions designed to address cancer and other health inequalities. Impact By increasing media attention to and framing of health disparities, we may observe important downstream effects on public opinion and support for structural solutions to disparities, particularly at the local level. PMID:27196094
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Barthel, Roland
2018-04-01
Classification- and similarity-based methods, which have recently received major attention in the field of surface water hydrology, namely through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite to test this hypothesis, and eventually to apply the principle to make "predictions for ungauged groundwater systems", is the availability of efficient methods to quantify the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, as well as geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used to test and compare a set of 32 grouping methods, which have previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out, and differences in the homogeneity of grouping results are shown, whereby selected groups were related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the object distribution within groups, as well as on the homogeneity of patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers attempting similarity-based groundwater hydrograph classification.
2012-01-01
Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have been studied commonly in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation that causes the changes of EMG pattern when performing identical motions in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of five arm positions that were considered in the study. The effect of the arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% from amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) as EMG, they had much higher inter-position classification error with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were involved in the training set, the average classification error reached a value of around 10.8% for amputated arms.
Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is somewhat stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. The two-stage cascade classifier, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The amount and size of remote sensing (RS) images acquired by modern systems are so large that data have to be compressed in order to transfer, save and disseminate them. Lossy compression is increasingly popular in such situations. However, lossy compression has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information contained in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze possibilities of predicting mean square error or, equivalently, PSNR for coders based on the discrete cosine transform (DCT), applied either to compressing single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between distortions introduced due to DCT coefficient quantization and losses in compressed data. A further innovation is the possibility of employing a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
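The direct dependence the abstract above refers to follows from Parseval's relation for orthonormal transforms: the pixel-domain MSE introduced by quantization equals the quantization error energy in the DCT domain, so PSNR can be predicted without decoding. A minimal single-block sketch (the 8x8 block size and uniform quantization step are assumptions for illustration, not the paper's coder):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

D = dct_matrix(8)
coeffs = D @ block @ D.T          # forward 2-D DCT
step = 16.0                       # uniform quantization step (assumed)
q = np.round(coeffs / step) * step

# Parseval: an orthonormal transform preserves squared error, so the
# pixel-domain MSE can be predicted from the DCT-domain quantization error.
predicted_mse = np.mean((coeffs - q) ** 2)

decoded = D.T @ q @ D             # inverse 2-D DCT
actual_mse = np.mean((block - decoded) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / actual_mse)
```

Because predicted and actual MSE coincide, evaluating the quantization error on only a sample of blocks, as the paper proposes, gives a fast statistical estimate of the output PSNR.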
Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel
2017-01-01
Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM, derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders who read all artifacts available in a patient's medical record, following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts at automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example-based average recall of 0.42 with an average precision of 0.47; compared with a baseline of using only NER, we observe a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long-range, non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
Liao, Wei; Yu, Yang; Miao, Huan-Huan; Feng, Yi-Xuan; Ji, Gong-Jun; Feng, Jian-Hua
2017-05-01
Tourette syndrome (TS) is associated with gross morphological changes in the corpus callosum, suggesting deficits in inter-hemispheric coordination. The present study sought to identify changes in inter-hemispheric functional and anatomical connectivity in boys with "pure" TS as well as their potential value for clinical diagnosis. TS boys without comorbidity (pure TS, n = 24) were selected from a large dataset and compared to age- and education-matched controls (n = 32). Intrinsic functional connectivity (iFC) between bilateral homotopic voxels was computed and compared between groups. Abnormal iFC was found in the bilateral prefronto-striatum-midbrain networks as well as bilateral sensorimotor and temporal cortices. The iFC between the bilateral anterior cingulate cortex (ACC) was negatively correlated with symptom severity. Anatomical connectivity strengths between functionally abnormal regions were estimated by diffusion probabilistic tractography, but no significant between-group difference was found. To test the clinical applicability of these neuroimaging findings, multivariate pattern analysis was used to develop a classification model in half of the total sample. The classification model exhibited excellent classification power for discriminating TS patients from controls in the other half of the sample. In summary, our findings emphasize the role of inter-hemispheric communication deficits in the pathophysiology of TS and suggest that iFC is a potential quantitative neuromarker for clinical diagnosis.
Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.
Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L
2018-02-01
Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
Large-scale classification of traffic signs under real-world conditions
NASA Astrophysics Data System (ADS)
Hazelhoff, Lykele; Creusen, Ivo; van de Wouw, Dennis; de With, Peter H. N.
2012-02-01
Traffic sign inventories are important to governmental agencies as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically based on street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter, since sign types are very similar with only minor differences within the sign, a large number of different signs is involved, and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs, based on the Bag of Words approach for generic object classification. We extend the approach with a flexible, modular codebook to model the specific features of each sign type independently, in order to emphasize the inter-sign differences rather than the parts common to all sign types. Additionally, this allows us to model and label the present false detections. Furthermore, analysis of the classification output identifies unreliable results. This classification system has been extensively tested on three different sign classes, covering 60 different sign types in total. These three data sets contain the sign detection results on street-level panoramic images, extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.
Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1989-01-01
The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
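The DPCM prediction loop underlying such a codec can be sketched in a few lines. This is a generic previous-sample predictor with a nearest-level quantizer, purely for illustration; it does not reproduce the codec's nonadaptive predictor, nonuniform quantizer, or Huffman stage.

```python
def dpcm_encode(samples, levels):
    """DPCM with a previous-sample predictor and nearest-level quantizer."""
    residuals = []
    pred = 0.0
    for s in samples:
        e = s - pred                               # prediction error
        q = min(levels, key=lambda l: abs(l - e))  # quantize to nearest level
        residuals.append(q)
        pred = pred + q   # predictor tracks the decoder's reconstruction
    return residuals

def dpcm_decode(residuals):
    """Reconstruct by accumulating the quantized prediction errors."""
    out, pred = [], 0.0
    for q in residuals:
        pred = pred + q
        out.append(pred)
    return out
```

The key design point, shared with the codec described above, is that the encoder predicts from the decoder's reconstruction rather than the original samples, so quantization errors do not accumulate; entropy coding of the residuals (e.g. Huffman) then lowers the bit rate further because prediction errors cluster near zero.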
PatternCoder: A Programming Support Tool for Learning Binary Class Associations and Design Patterns
ERIC Educational Resources Information Center
Paterson, J. H.; Cheng, K. F.; Haddow, J.
2009-01-01
PatternCoder is a software tool to aid student understanding of class associations. It has a wizard-based interface which allows students to select an appropriate binary class association or design pattern for a given problem. Java code is then generated which allows students to explore the way in which the class associations are implemented in a…
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single-frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust when compared to the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
ERIC Educational Resources Information Center
Jamison, Wesley
1977-01-01
Two models of intertask relations, Wohlwill's divergent-decalage and reciprocal-interaction patterns, were evaluated for their fit to cross-classification tables which showed the joint classification of 101 children's performance on all possible pairs of eight concrete operational tasks. (SB)
A TWIN STUDY OF SCHIZOAFFECTIVE-MANIA, SCHIZOAFFECTIVE-DEPRESSION AND OTHER PSYCHOTIC SYNDROMES
Cardno, Alastair G; Rijsdijk, Frühling V; West, Robert M; Gottesman, Irving I; Craddock, Nick; Murray, Robin M; McGuffin, Peter
2012-01-01
The nosological status of schizoaffective disorders remains controversial. Twin studies are potentially valuable for investigating relationships between schizoaffective-mania, schizoaffective-depression and other psychotic syndromes, but no such study has yet been reported. We ascertained 224 probandwise twin pairs (106 monozygotic, 118 same-sex dizygotic), where probands had psychotic or manic symptoms, from the Maudsley Twin Register in London (1948–1993). We investigated Research Diagnostic Criteria schizoaffective-mania, schizoaffective-depression, schizophrenia, mania and depressive psychosis primarily using a non-hierarchical classification, and additionally using hierarchical and data-derived classifications, and a classification featuring broad schizophrenic and manic syndromes without separate schizoaffective syndromes. We investigated inter-rater reliability and co-occurrence of syndromes within twin probands and twin pairs. The schizoaffective syndromes showed only moderate inter-rater reliability. There was general significant co-occurrence between syndromes within twin probands and monozygotic pairs, and a trend for schizoaffective-mania and mania to have the greatest co-occurrence. Schizoaffective syndromes in monozygotic probands were associated with relatively high risk of a psychotic syndrome occurring in their co-twins. The classification of broad schizophrenic and manic syndromes without separate schizoaffective syndromes showed improved inter-rater reliability, but high genetic and environmental correlations between the two broad syndromes. The results are consistent with regarding schizoaffective-mania as due to co-occurring elevated liability to schizophrenia, mania and depression; and schizoaffective-depression as due to co-occurring elevated liability to schizophrenia and depression, but with less elevation of liability to mania. 
If in due course schizoaffective syndromes show satisfactory inter-rater reliability and some specific etiological factors they could alternatively be regarded as partly independent disorders. PMID:22213671
Harrop, James S; Vaccaro, Alexander R; Hurlbert, R John; Wilsey, Jared T; Baron, Eli M; Shaffrey, Christopher I; Fisher, Charles G; Dvorak, Marcel F; Oner, F C; Wood, Kirkham B; Anand, Neel; Anderson, D Greg; Lim, Moe R; Lee, Joon Y; Bono, Christopher M; Arnold, Paul M; Rampersaud, Y Raja; Fehlings, Michael G
2006-02-01
A new classification and treatment algorithm for thoracolumbar injuries was recently introduced by Vaccaro and colleagues in 2005. A thoracolumbar injury severity scale (TLISS) was proposed for grading and guiding treatment for these injuries. The scale is based on the following: 1) the mechanism of injury; 2) the integrity of the posterior ligamentous complex (PLC); and 3) the patient's neurological status. The reliability and validity of assessing injury mechanism and the integrity of the PLC was assessed. Forty-eight spine surgeons, consisting of neurosurgeons and orthopedic surgeons, reviewed 56 clinical thoracolumbar injury case histories. Each was classified and scored to determine treatment recommendations according to a novel classification system. After 3 months the case histories were reordered and the physicians repeated the exercise. Validity of this classification was good among reviewers; the vast majority (> 90%) agreed with the system's treatment recommendations. Surgeons were unclear as to a cogent description of PLC disruption and fracture mechanism. The TLISS demonstrated acceptable reliability in terms of intra- and interobserver agreement on the algorithm's treatment recommendations. Replacing injury mechanism with a description of injury morphology and better definition of PLC injury will improve inter- and intraobserver reliability of this injury classification system.
Mehdizadeh, Farhad; Soroosh, Mohammad; Alipour-Banaei, Hamed; Farshidi, Ebrahim
2017-03-01
In this paper, we propose what we believe is a novel all-optical analog-to-digital converter (ADC) based on photonic crystals. The proposed structure is composed of a nonlinear triplexer and an optical coder. The nonlinear triplexer creates discrete levels in the continuous optical input signal, and the optical coder generates a 2-bit standard binary code from the discrete levels coming from the nonlinear triplexer. Controlling the resonant mode of the resonant rings through optical intensity is the main objective and working mechanism of the proposed structure. The maximum delay time obtained for the proposed structure was about 5 ps, and the total footprint is about 1520 μm².
Inter-class sparsity based discriminative least square regression.
Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong
2018-06-01
Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second is that the label matrix used, i.e., the zero-one label matrix, is inappropriate for classification. To solve these problems and improve performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method requires that the transformed samples of each class share a common sparsity structure. To this end, an inter-class sparsity constraint is introduced into the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with a row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
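As context for the formulation being improved, the following is a minimal, illustrative sketch of plain least square regression classification with a strict zero-one (one-hot) label matrix, i.e., the baseline whose two limitations the paper targets. The ridge term, bias column, and function names are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def lsr_train(X, y, n_classes, lam=1e-2):
    """Plain least square regression classifier with a strict zero-one
    label matrix -- the baseline formulation that ICS_DLSR relaxes."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    Y = np.eye(n_classes)[y]                   # one-hot (zero-one) labels, n x c
    d = Xb.shape[1]
    # Ridge-regularised normal equations: (Xb'Xb + lam*I) W = Xb'Y
    W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ Y)
    return W

def lsr_predict(X, W):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)           # largest fitted label wins
```

ICS_DLSR would additionally impose the inter-class sparsity constraint and the row-sparse error term on the label matrix, which a plain solver like this omits.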
van der Mei, Sijrike F; Dijkers, Marcel P J M; Heerkens, Yvonne F
2011-12-01
To examine to what extent the concept and the domains of participation as defined in the International Classification of Functioning, Disability and Health (ICF) are represented in general cancer-specific health-related quality of life (HRQOL) instruments. Using the ICF linking rules, two coders independently extracted the meaningful concepts of ten instruments and linked these to ICF codes. The proportion of concepts that could be linked to ICF codes ranged from 68 to 95%. Although all instruments contained concepts linked to Participation (Chapters d7-d9 of the classification of 'Activities and Participation'), the instruments covered only a small part of all available ICF codes. The proportion of ICF codes in the instruments that were participation related ranged from 3 to 35%. 'Major life areas' (d8) was the most frequently used Participation Chapter, with d850 'remunerative employment' as the most used ICF code. The number of participation-related ICF codes covered in the instruments is limited. General cancer-specific HRQOL instruments only assess social life of cancer patients to a limited degree. This study's information on the content of these instruments may guide researchers in selecting the appropriate instrument for a specific research purpose.
Takasaki, Hiroshi; Okuyama, Kousuke; Rosedale, Richard
2017-02-01
Mechanical Diagnosis and Therapy (MDT) is used in the treatment of extremity problems. Classifying clinical problems is one method of providing effective treatment to a target population. Classification reliability is a key factor to determine the precise clinical problem and to direct an appropriate intervention. To explore inter-examiner reliability of the MDT classification for extremity problems in three reliability designs: 1) vignette reliability using surveys with patient vignettes, 2) concurrent reliability, where multiple assessors decide a classification by observing someone's assessment, 3) successive reliability, where multiple assessors independently assess the same patient at different times. Systematic review with data synthesis in a quantitative format. Agreement of MDT subgroups was examined using the Kappa value, with the operational definition of acceptable reliability set at ≥ 0.6. The level of evidence was determined considering the methodological quality of the studies. Six studies were included and all studies met the criteria for high quality. Kappa values for the vignette reliability design (five studies) were ≥ 0.7. There was data from two cohorts in one study for the concurrent reliability design and the Kappa values ranged from 0.45 to 1.0. Kappa values for the successive reliability design (data from three cohorts in one study) were < 0.6. The current review found strong evidence of acceptable inter-examiner reliability of MDT classification for extremity problems in the vignette reliability design, limited evidence of acceptable reliability in the concurrent reliability design and unacceptable reliability in the successive reliability design. Copyright © 2017 Elsevier Ltd. All rights reserved.
Strategies Used In Capture The Flag Events Contributing To Team Performance
2016-03-01
[Garbled table excerpt: per-team capture-the-flag scores and success percentages for teams including codered, w3stormz, penthackon, balalaikacr3w, gallopsled, shellphish, hackingforchimac, mslc, and mmibh; the surrounding prose is not recoverable.]
Computer Science Career Network
2013-03-01
development model. TopCoder’s development model is competition-based, meaning that TopCoder conducts competitions to develop digital assets. TopCoder...success in running a competition that had as an objective creating digital assets, and we intend to run more of them, to create assets for...cash prizes and merchandise. This includes social media contests, contests with all our games, special referral contests, and a couple NASA
Examiner Training and Reliability in Two Randomized Clinical Trials of Adult Dental Caries
Banting, David W.; Amaechi, Bennett T.; Bader, James D.; Blanchard, Peter; Gilbert, Gregg H.; Gullion, Christina M.; Holland, Jan Carlton; Makhija, Sonia K.; Papas, Athena; Ritter, André V.; Singh, Mabi L.; Vollmer, William M.
2013-01-01
Objectives This report describes the training of dental examiners participating in two dental caries clinical trials and reports the inter- and intra-examiner reliability scores from the initial standardization sessions. Methods Study examiners were trained to use a modified ICDAS-II system to detect the visual signs of non-cavitated and cavitated dental caries in adult subjects. Dental caries was classified as no caries (S), non-cavitated caries (D1), enamel caries (D2) and dentine caries (D3). Three standardization sessions involving 60 subjects and 3604 tooth surface calls were used to calculate several measures of examiner reliability. Results The prevalence of dental caries observed in the standardization sessions ranged from 1.4% to 13.5% of the coronal tooth surfaces examined. Overall agreement between pairs of examiners ranged from 0.88 to 0.99. An intra-class coefficient threshold of 0.60 was surpassed for all but one examiner. Inter-examiner unweighted kappa values were low (0.23–0.35) but weighted kappas and the ratio of observed to maximum kappas were more encouraging (0.42–0.83). The highest kappa values occurred for the S/D1 vs. D2/D3 two-level classification of dental caries, for which seven of the eight examiners achieved observed to maximum kappa values over 0.90. Intra-examiner reliability was notably higher than inter-examiner reliability for all measures and dental caries classification systems employed. Conclusion The methods and results for the initial examiner training and standardization sessions for two large clinical trials are reported. Recommendations for others planning examiner training and standardization sessions are offered. PMID:22320292
Examiner training and reliability in two randomized clinical trials of adult dental caries.
Banting, David W; Amaechi, Bennett T; Bader, James D; Blanchard, Peter; Gilbert, Gregg H; Gullion, Christina M; Holland, Jan Carlton; Makhija, Sonia K; Papas, Athena; Ritter, André V; Singh, Mabi L; Vollmer, William M
2011-01-01
This report describes the training of dental examiners participating in two dental caries clinical trials and reports the inter- and intra-examiner reliability scores from the initial standardization sessions. Study examiners were trained to use a modified International Caries Detection and Assessment System II system to detect the visual signs of non-cavitated and cavitated dental caries in adult subjects. Dental caries was classified as no caries (S), non-cavitated caries (D1), enamel caries (D2), and dentine caries (D3). Three standardization sessions involving 60 subjects and 3,604 tooth surface calls were used to calculate several measures of examiner reliability. The prevalence of dental caries observed in the standardization sessions ranged from 1.4 percent to 13.5 percent of the coronal tooth surfaces examined. Overall agreement between pairs of examiners ranged from 0.88 to 0.99. An intra-class coefficient threshold of 0.60 was surpassed for all but one examiner. Inter-examiner unweighted kappa values were low (0.23-0.35), but weighted kappas and the ratio of observed to maximum kappas were more encouraging (0.42-0.83). The highest kappa values occurred for the S/D1 versus D2/D3 two-level classification of dental caries, for which seven of the eight examiners achieved observed to maximum kappa values over 0.90. Intra-examiner reliability was notably higher than inter-examiner reliability for all measures and dental caries classifications employed. The methods and results for the initial examiner training and standardization sessions for two large clinical trials are reported. Recommendations for others planning examiner training and standardization sessions are offered. © 2011 American Association of Public Health Dentistry.
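The unweighted and weighted kappa values reported in these trials can be reproduced in principle with a short sketch of Cohen's kappa for an ordinal scale such as S < D1 < D2 < D3. The linear/quadratic weighting shown is the conventional choice; the abstracts do not state which weights the trials actually used, so treat the weighting scheme and function names as assumptions.

```python
import numpy as np

def cohen_kappa(a, b, n_cat, weights=None):
    """Cohen's kappa for two raters over n_cat ordered categories.
    weights=None gives unweighted kappa; 'linear' or 'quadratic' give
    weighted kappa.  Assumes the raters use more than one category
    (otherwise expected disagreement is zero)."""
    a, b = np.asarray(a), np.asarray(b)
    cm = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):          # joint rating (confusion) matrix
        cm[i, j] += 1
    cm /= cm.sum()
    idx = np.arange(n_cat)
    d = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights is None:
        w = (d > 0).astype(float)   # any disagreement counts fully
    elif weights == "linear":
        w = d / (n_cat - 1)
    else:                           # 'quadratic'
        w = (d / (n_cat - 1)) ** 2
    expected = np.outer(cm.sum(axis=1), cm.sum(axis=0))  # chance agreement
    return 1 - (w * cm).sum() / (w * expected).sum()
```

On ordinal data with mostly one-step disagreements, the weighted value exceeds the unweighted one, which matches the pattern the trials report (low unweighted, higher weighted kappas).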
Elvrum, Ann-Kristin G; Beckung, Eva; Sæther, Rannei; Lydersen, Stian; Vik, Torstein; Himmelmann, Kate
2017-08-01
To develop a revised edition of the Bimanual Fine Motor Function (BFMF 2), as a classification of fine motor capacity in children with cerebral palsy (CP), and establish intra- and interrater reliability of this edition. The content of the original BFMF was discussed by an expert panel, resulting in a revised edition comprising the original description of the classification levels, but in addition including figures with specific explanatory text. Four professionals classified fine motor function of 79 children (3-17 years; 45 boys) who represented all subtypes of CP and Manual Ability Classification levels (I-V). Intra- and inter-rater reliability was assessed using overall intra-class correlation coefficient (ICC), and Cohen's quadratic weighted kappa. The overall ICC was 0.86. Cohen's weighted kappa indicated high intra-rater (к w : >0.90) and inter-rater (к w : >0.85) reliability. The revised BFMF 2 had high intra- and interrater reliability. The classification levels could be determined from short video recordings (<5 minutes), using the figures and precise descriptions of the fine motor function levels included in the BFMF 2. Thus, the BFMF 2 may be a feasible and useful classification of fine motor capacity both in research and in clinical practice.
Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S
2009-02-01
To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly-selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially-missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%), of which 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue, and given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These errors can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.
Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C
2018-07-01
Coding of diagnoses is important for patient care, hospital management and research. However, coding accuracy is often poor and may reflect methods of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) case notes and discharge summary, and (iii) discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary and secondary diagnoses, Healthcare Resource Group (HRG) and estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58% respectively, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60% respectively, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8M and £16.5M. The accuracy of diagnosis codes and percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries. Copyright © 2018 Elsevier B.V. All rights reserved.
On the optimality of code options for a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner
1991-01-01
A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results obtained on actual aerial imagery confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
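The adaptive option selection can be illustrated with a hedged sketch in the Rice-coding spirit: for each block of nonnegative samples, total the encoded length under each variable-length option (here, Golomb-Rice codes with parameter k) and keep the cheapest. The parameter set and block handling are illustrative assumptions, not the actual code options of the VLSI module described above.

```python
def rice_encode_len(sample, k):
    """Bit length of a nonnegative integer under a Rice code with
    parameter k: unary quotient + 1 stop bit + k remainder bits."""
    return (sample >> k) + 1 + k

def best_option(block, k_options=(0, 1, 2, 3)):
    """Pick the Rice parameter whose total encoded length for the block
    is smallest -- a toy version of per-block adaptive option selection."""
    costs = {k: sum(rice_encode_len(s, k) for s in block) for k in k_options}
    return min(costs, key=costs.get)
```

Low-entropy blocks (small values) select a small k, while high-entropy blocks select a large k, which is how a single structure stays efficient across a broad entropy range.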
Høyer, C; Paludan, J P D; Pavar, S; Biurrun Manresa, J A; Petersen, L J
2014-03-01
To assess the intra- and inter-observer variation in laser Doppler flowmetry curve reading for measurement of toe and ankle pressures. A prospective single blinded diagnostic accuracy study was conducted on 200 patients with known or suspected peripheral arterial disease (PAD), with a total of 760 curve sets produced. The first curve reading for this study was performed by laboratory technologists blinded to clinical clues and previous readings at least 3 months after the primary data sampling. The pressure curves were later reassessed following another period of at least 3 months. Observer agreement in diagnostic classification according to TASC-II criteria was quantified using Cohen's kappa. Reliability was quantified using intra-class correlation coefficients, coefficients of variance, and Bland-Altman analysis. The overall agreement in diagnostic classification (PAD/not PAD) was 173/200 (87%) for intra-observer (κ = .858) and 175/200 (88%) for inter-observer data (κ = .787). Reliability analysis confirmed excellent correlation for both intra- and inter-observer data (ICC all ≥.931). The coefficients of variance ranged from 2.27% to 6.44% for intra-observer and 2.39% to 8.42% for inter-observer data. Subgroup analysis showed lower observer-variation for reading of toe pressures in patients with diabetes and/or chronic kidney disease than patients not diagnosed with these conditions. Bland-Altman plots showed higher variation in toe pressure readings than ankle pressure readings. This study shows substantial intra- and inter-observer agreement in diagnostic classification and reading of absolute pressures when using laboratory technologists as observers. The study emphasises that observer variation for curve reading is an important factor concerning the overall reproducibility of the method. Our data suggest diabetes and chronic kidney disease have an influence on toe pressure reproducibility. Copyright © 2013 European Society for Vascular Surgery. 
Published by Elsevier Ltd. All rights reserved.
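The intra-class correlation coefficients used in the reliability analysis above can be illustrated with a one-way random-effects ICC(1,1) sketch over an n-subjects-by-k-observers table. The study does not specify which ICC form it computed, so the one-way model chosen here is an assumption.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_raters)
    array, computed from the one-way ANOVA mean squares."""
    r = np.asarray(ratings, float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)
    # Between-subjects and within-subject mean squares
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((r - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly agreeing observers give an ICC of 1.0; small inter-observer offsets relative to the between-subject spread keep the ICC near 1, consistent with the ≥ 0.931 values reported above.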
Bradbury, Andrew W; Adam, Donald J; Bell, Jocelyn; Forbes, John F; Fowkes, F Gerry R; Gillespie, Ian; Ruckley, Charles Vaughan; Raab, Gillian M
2010-05-01
The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL) trial showed in patients with severe lower limb ischemia (rest pain, tissue loss) who survive for 2 years after intervention that initial randomization to bypass surgery, compared with balloon angioplasty, was associated with an improvement in subsequent amputation-free survival and overall survival of about 6 and 7 months, respectively. The aim of this report is to describe the angiographic severity and extent of infrainguinal arterial disease in the BASIL trial cohort so that the trial outcomes can be appropriately generalized to other patient cohorts with similar anatomic (angiographic) patterns of disease. Preintervention angiograms were scored using the Bollinger method and the TransAtlantic Inter-Society Consensus (TASC) II classification system by three consultant interventional radiologists and two consultant vascular surgeons unaware of the treatment received or patient outcomes. As was to be expected from the randomization process, patients in the two trial arms were well matched in terms of angiographic severity and extent of disease as documented by Bollinger and TASC II. In patients with the least overall disease, it tended to be concentrated in the superficial femoral and popliteal arteries, which were the commonest sites of disease overall. The below knee arteries became increasingly involved as the overall severity of disease increased, but the disease in the above knee arteries did not tend to worsen. The posterior tibial artery was the most diseased crural artery, whereas the peroneal appeared relatively spared. There was less interobserver disagreement with the Bollinger method than with the TASC II classification system, which also appears inherently less sensitive to clinically important differences in infrapopliteal disease among patients with severe leg ischemia. 
Anatomic (angiographic) disease description in patients with severe leg ischemia requires a reproducible scoring system that is sensitive to differences in crural artery disease. The Bollinger system appears well suited for this purpose, but the TASC II classification system less so. We hope this detailed analysis will facilitate appropriate generalization of the BASIL trial data to other groups of patients affected by similar anatomic (angiographic) patterns of disease. Crown Copyright (c) 2010. Published by Mosby, Inc. All rights reserved.
Gupta, Priyanka; Schomburg, John; Krishna, Suprita; Adejoro, Oluwakayode; Wang, Qi; Marsh, Benjamin; Nguyen, Andrew; Genere, Juan Reyes; Self, Patrick; Lund, Erik; Konety, Badrinath R
2017-01-01
To examine the Manufacturer and User Facility Device Experience (MAUDE) database to capture adverse events experienced with the Da Vinci Surgical System, and to design a standardized classification system to categorize the complications and machine failures associated with the device. Overall, 1,057,000 Da Vinci procedures were performed in the United States between 2009 and 2012. Currently, no system exists for classifying and comparing device-related errors and complications with which to evaluate adverse events associated with the Da Vinci Surgical System. The MAUDE database was queried for event reports related to the Da Vinci Surgical System between the years 2009 and 2012. A classification system was developed and tested among 14 robotic surgeons to associate a level of severity with each event and its relationship to the Da Vinci Surgical System. Events were then classified according to this system and examined by using Chi-square analysis. Two thousand eight hundred thirty-seven events were identified, of which 34% were obstetrics and gynecology (Ob/Gyn); 19%, urology; 11%, other; and 36%, not specified. Our classification system had moderate agreement with a Kappa score of 0.52. Using our classification system, we identified 75% of the events as mild, 18% as moderate, 4% as severe, and 3% as life threatening or resulting in death. Seventy-seven percent were classified as definitely related to the device, 15% as possibly related, and 8% as not related. Urology procedures compared with Ob/Gyn were associated with more severe events (38% vs 26%, p < 0.0001). Energy instruments were associated with less severe events compared with the surgical system (8% vs 87%, p < 0.0001). Events that were definitely associated with the device tended to be less severe (81% vs 19%, p < 0.0001). Our classification system is a valid tool with moderate inter-rater agreement that can be used to better understand device-related adverse events. 
The majority of robot-related events were mild but were associated with the device.
Rammeh, Soumaya; Khadra, Hajer Ben; Znaidi, Nadia Sabbegh; Romdhane, Neila Attia; Najjar, Taoufik; Bouzaidi, Slim; Zermani, Rachida
2014-01-01
Many classification systems are currently used for histological evaluation of the severity of chronic viral hepatitis, including the Ishak and Metavir scores, but there is no consensus classification. The objective of this work was to study the intra- and inter-observer agreement of these two scores in the histopathological analysis of liver biopsies from patients with chronic viral hepatitis B or C. Fifty-nine patients were included in the study; 26 had chronic hepatitis C and 33 had chronic hepatitis B. To investigate inter-observer agreement, the liver biopsies were analyzed separately by two pathologists without a prior consensus reading. The two pathologists then conducted a consensus reading before reviewing all cases independently. Cohen's kappa coefficient was calculated and, in case of asymmetry, Spearman's rho coefficient. Before the consensus reading, agreement was moderate for the analysis of histological activity with both scores (Metavir: kappa=0.41, Ishak: rho=0.58). For the analysis of fibrosis, agreement was good with both scores (Metavir: kappa=0.61, Ishak: rho=0.86). The consensus reading improved the reproducibility of the activity assessment, which became good with both scores (Metavir: kappa=0.77, Ishak: rho=0.76). For fibrosis, improvement was observed with the Ishak score, for which agreement became excellent (kappa=0.81). In conclusion, we recommend, in routine practice, a combined score, Metavir for activity and Ishak for fibrosis, and a double reading for each biopsy.
Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan
2015-09-01
To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results obtained with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uterus (n = 62 women, randomly selected) constituted the study. These were classified based on real-time three-dimensional ultrasound single volume transvaginal (or transrectal in the case of virgins, 4 cases) ultrasonography findings, which were assessed by an expert rater based on the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ >0.60 and >0.80), but the interpretation of the clinically relevant cutoffs of κ-values showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if their sufficient reliability can be confirmed real-time in a large sample size. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Fulfilling the Roosevelts’ Vision for American Naval Power (1923-2005)
2006-06-30
[Fragmented excerpt: "...nuclear pressure vessels are based on the results of that program." followed by table-of-contents entries including the Nuclear Submarine, Identification Friend-or-Foe Systems, the First American Airborne Radar, the Cold War, Monopulse Radar, Film-Forming Foam, Nuclear Reactor Safety, the Linear Predictive Coder, and Submarine Habitability.]
2015-04-09
[Excerpt contains only acknowledgments (Anthony Wurmstein, Lt Col Brian Musselman, Maj Alejandro Ramos, Lt Col Mary E. Arnholt, and Maj Ernest Herrera, Jr., U.S. Air Force) and distribution boilerplate from a U.S. Air Force School of Aerospace Medicine report, Wright-Patterson AFB, OH 45433-7913 (Distribution A: approved for public release; Case Number 88ABW-2015-2334).]
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent
2018-01-01
Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
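The fixed-graph baseline that pGTL iterates beyond can be sketched with conventional label propagation on a feature-similarity graph (Zhou-style normalized propagation). The Gaussian kernel width, damping factor, and iteration count below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def label_propagation(X, y, n_labeled, n_classes, sigma=1.0, alpha=0.99, iters=200):
    """Conventional GTL: build a fixed inter-subject graph from feature
    similarities, then propagate the first n_labeled labels to the rest."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian similarity graph
    np.fill_diagonal(W, 0)
    dinv = 1.0 / np.sqrt(W.sum(axis=1))
    S = dinv[:, None] * W * dinv[None, :]                # symmetric normalization
    Y = np.zeros((n, n_classes))
    Y[np.arange(n_labeled), y[:n_labeled]] = 1           # clamp known labels only
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y              # propagate through the graph
    return F.argmax(axis=1)
```

Because the graph here is built once from voxel-like features and never revised, outliers in the features directly distort propagation; pGTL's contribution is to refine this graph iteratively against the label domain.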
Font, P; Loscertales, J; Benavente, C; Bermejo, A; Callejas, M; Garcia-Alonso, L; Garcia-Marcilla, A; Gil, S; Lopez-Rubio, M; Martin, E; Muñoz, C; Ricard, P; Soto, C; Balsalobre, P; Villegas, A
2013-01-01
Morphology is the basis of the diagnosis of myelodysplastic syndromes (MDS). The WHO classification offers prognostic information and helps with the treatment decisions. However, morphological changes are subject to potential inter-observer variance. The aim of our study was to explore the reliability of the 2008 WHO classification of MDS, reviewing 100 samples previously diagnosed with MDS using the 2001 WHO criteria. Specimens were collected from 10 hospitals and were evaluated by 10 morphologists, working in five pairs. Each observer evaluated 20 samples, and each sample was analyzed independently by two morphologists. The second observer was blinded to the clinical and laboratory data, except for the peripheral blood (PB) counts. Nineteen cases were considered as unclassified MDS (MDS-U) by the 2001 WHO classification, but only three remained as MDS-U by the 2008 WHO proposal. Discordance was observed in 26 of the 95 samples considered suitable (27%). Although a high number of observers took part, the rate of discordance was quite similar among the five pairs. The inter-observer concordance was very good regarding refractory anemia with excess blasts type 1 (RAEB-1) (10 of 12 cases, 84%) and RAEB-2 (nine of 10 cases, 90%), and also good regarding refractory cytopenia with multilineage dysplasia (37 of 50 cases, 74%). However, the categories with unilineage dysplasia were not reproducible in most of the cases. The rate of concordance with refractory cytopenia with unilineage dysplasia was 40% (two of five cases) and 25% with RA with ring sideroblasts (two of eight). Our results show that the 2008 WHO classification gives a more accurate stratification of MDS but also illustrates the difficulty in diagnosing MDS with unilineage dysplasia.
Ofstad, Eirik H; Frich, Jan C; Schei, Edvin; Frankel, Richard M; Gulbrandsen, Pål
2016-02-11
The medical literature lacks a comprehensive taxonomy of decisions made by physicians in medical encounters. Such a taxonomy might be useful in understanding the physician-centred, patient-centred and shared decision-making in clinical settings. We aimed to identify and classify all decisions emerging in conversations between patients and physicians. Qualitative study of video-recorded patient-physician encounters. 380 patients in consultations with 59 physicians from 17 clinical specialties and three different settings (emergency room, ward round, outpatient clinic) in a Norwegian teaching hospital. A randomised sample of 30 encounters from internal medicine was used to identify and classify decisions, a maximum variation sample of 20 encounters was used for reliability assessments, and the remaining encounters were analysed to test for applicability across specialties. On the basis of physician statements in our material, we developed a taxonomy of clinical decisions: the Decision Identification and Classification Taxonomy for Use in Medicine (DICTUM). We categorised decisions into 10 mutually exclusive categories: gathering additional information, evaluating test results, defining problem, drug-related, therapeutic procedure-related, legal and insurance-related, contact-related, advice and precaution, treatment goal, and deferment. Four-coder inter-rater reliability using Krippendorff's α was 0.79. DICTUM represents a precise, detailed and comprehensive taxonomy of medical decisions communicated within patient-physician encounters. Compared to previous normative frameworks, the taxonomy is descriptive, substantially broader and offers new categories to the variety of clinical decisions. The taxonomy could prove helpful in studies on the quality of medical work, use of time and resources, and understanding of why, when and how patients are or are not involved in decisions. Published by the BMJ Publishing Group Limited.
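The reliability figure above uses Krippendorff's alpha, which generalizes chance-corrected agreement to any number of coders. A minimal sketch of the nominal-data form follows (an illustrative implementation of the standard formula, not the study's own code; real analyses typically use a dedicated package):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data. `units` is a list of
    per-unit label lists, one value per coder who rated that unit."""
    o = Counter()                     # coincidence matrix o[(c, k)]
    for values in units:
        m = len(values)
        if m < 2:
            continue                  # unpairable units contribute nothing
        for c, k in permutations(values, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()                   # marginal totals of the matrix
    for (c, k), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    disagree = sum(v for (c, k), v in o.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k)
    # alpha = 1 - Do/De, simplified to the ratio of observed to
    # expected off-diagonal coincidences
    return 1.0 - (n - 1) * disagree / expected
```

Perfect agreement yields alpha = 1; agreement at chance level yields alpha near 0.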
EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.
Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua
2012-01-01
Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models were developed but a similar need for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation for temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76 / 82) inter-coder agreement on sentence chunking and 72% (72 / 100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
Seng, Elizabeth K; Lovejoy, Travis I
2013-12-01
This study psychometrically evaluates the Motivational Interviewing Treatment Integrity Code (MITI) to assess fidelity to motivational interviewing to reduce sexual risk behaviors in people living with HIV/AIDS. 74 sessions from a pilot randomized controlled trial of motivational interviewing to reduce sexual risk behaviors in people living with HIV were coded with the MITI. Participants reported sexual behavior at baseline, 3 months, and 6 months. Regarding reliability, excellent inter-rater reliability was achieved for measures of behavior frequency across the 12 sessions coded by both coders; global scales demonstrated poor intraclass correlations, but adequate percent agreement. Regarding validity, principal components analyses indicated that a two-factor model accounted for an adequate amount of variance in the data. These factors were associated with decreases in sexual risk behaviors after treatment. The MITI is a reliable and valid measurement of treatment fidelity for motivational interviewing targeting sexual risk behaviors in people living with HIV/AIDS.
There's alcohol in my soap: portrayal and effects of alcohol use in a popular television series.
van Hoof, Joris J; de Jong, Menno D T; Fennis, Bob M; Gosselt, Jordy F
2009-06-01
Two studies are reported addressing the media influences on adolescents' alcohol-related attitudes and behaviours. A content analysis was conducted to investigate the prevalence of alcohol portrayal in a Dutch soap series. The coding scheme covered the alcohol consumption per soap character, drinking situations and drinking times. Inter-coder reliability was satisfactory. The results showed that alcohol portrayal was prominent and that many instances of alcohol use reflected undesirable behaviours. To assess the influence of such alcohol cues on adolescents, a 2x2 experiment was conducted focusing on the separate and combined effects of alcohol portrayal in the soap series and surrounding alcohol commercials. Whereas the alcohol commercials had the expected effects on adolescents' attitudes, the alcohol-related soap content only appeared to have unexpected effects. Adolescents who were exposed to the alcohol portrayal in the soap series had a less positive attitude towards alcohol and lower drinking intentions. Implications of these findings for health policy and future research are discussed.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
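The threshold-driven selection among DCT coders described above can be sketched as follows (a simplified illustration of the idea, assuming an orthonormal DCT-II and uniform quantizers of decreasing step size as the "mixture"; the function names and threshold value are inventions for this sketch, not MBC's actual design):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def code_block(block, threshold=50.0, steps=(8.0, 4.0, 2.0)):
    """Pick the coarsest (lowest-rate) quantizer whose distortion meets
    the threshold, mimicking a threshold-driven choice among DCT coders."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                  # forward 2-D DCT
    for step in steps:                        # coarse -> fine quantizers
        q = np.round(coeffs / step) * step    # uniform quantization
        recon = C.T @ q @ C                   # inverse 2-D DCT
        mse = float(np.mean((recon - block) ** 2))
        if mse <= threshold:
            return recon, step, mse
    return recon, step, mse                   # fall back to finest coder
```

Smooth regions satisfy the threshold with a coarse quantizer (few bits); detailed regions fall through to finer, higher-rate coders, which is the variable-rate behavior MBC exploits.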
Craig, Elizabeth; Kerr, Neal; McDonald, Gabrielle
2017-03-01
In New Zealand, there is a paucity of information on children with chronic conditions and disabilities (CCD). One reason is that many are managed in hospital outpatients where diagnostic coding of health-care events does not occur. This study explores the feasibility of coding paediatric outpatient data to provide health planners with information on children with CCD. Thirty-seven clinicians from six District Health Boards (DHBs) trialled coding over 12 weeks. In five DHBs, the International Classification of Diseases and Related Health Problems, 10th Edition, Australian Modification (ICD-10-AM) and Systematised Nomenclature of Medicine Clinical Terms (SNOMED-CT) were trialled for 6 weeks each. In one DHB, ICD-10-AM was trialled for 12 weeks. A random sample (30%) of ICD-10-AM coded events were also coded by clinical coders. A mix of paper and electronic methods was used. In total 2,604 outpatient events were coded in ICD-10-AM and 693 in SNOMED-CT. Dual coding occurred for 770 (29.6%) ICD-10-AM events. Overall, 34% of ICD-10-AM and 40% of SNOMED-CT events were for developmental and behavioural disorders. Chronic medical conditions were also common. Clinicians were concerned about the workload impacts, particularly for paper-based methods. Coders were concerned about clinicians' adherence to coding guidelines and the poor quality of documentation in some notes. Coded outpatient data could provide planners with a rich source of information on children with CCD. However, coding is also resource intensive. Thus, its costs need to be weighed against the costs of managing a much larger health budget using very limited information. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
van Meeteren, Jetty; Nieuwenhuijsen, Channah; de Grund, Arthur; Stam, Henk J; Roebroeck, Marij E
2010-01-01
The study aimed to establish whether the manual ability classification system (MACS), a valid classification system for manual ability in children with cerebral palsy (CP), is applicable in young adults with CP and normal intelligence. The participants (n = 83) were young adults with CP and normal intelligence and had a mean age of 19.9 years. In this study, inter-observer reliability of the MACS was determined. We investigated relationships between the MACS level and patient characteristics (such as the gross motor function classification system (GMFCS) level, limb distribution of the spastic paresis and educational level) and with functional activities of the upper extremity (assessed with the Melbourne assessment, the Abilhand questionnaire and the domain self-care of the functional independence measure (FIM)). Furthermore, with a linear regression analysis it was determined whether the MACS is a significant determinant of activity limitations and participation restrictions. The reliability was good (intraclass correlation coefficient 0.83). The Spearman correlation coefficients with GMFCS level, limb distribution of the spastic paresis and educational level were 0.53, 0.46, and 0.26, respectively. MACS level correlated moderately with outcome measures of functional activities (correlations ranging from -0.38 to -0.55). MACS level is, in addition to the GMFCS level, an important determinant for limitations in activities and restrictions in participation. We conclude that the MACS is a feasible method to classify manual ability in young adults with CP and normal intelligence with good manual ability.
Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed
2018-01-25
The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Groups (DRGs) codes, especially if the hospital is using Casemix System as a tool for resource allocations and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to an error in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC). This ISC re-examined and re-coded the error code that was originally entered by the hospital coders. The pre- and post-coding results were compared, and if there was any disagreement, the codes by the ISC were considered the accurate codes. The cases were then re-grouped using a MY-DRG® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the highest, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464), respectively. The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases. From this result, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG® code was RM654,303.91. The quality of coding is a crucial aspect in implementing casemix systems. Intensive re-training and the close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.
DNA methylation-based classification of central nervous system tumours.
Capper, David; Jones, David T W; Sill, Martin; Hovestadt, Volker; Schrimpf, Daniel; Sturm, Dominik; Koelsche, Christian; Sahm, Felix; Chavez, Lukas; Reuss, David E; Kratz, Annekathrin; Wefers, Annika K; Huang, Kristin; Pajtler, Kristian W; Schweizer, Leonille; Stichel, Damian; Olar, Adriana; Engel, Nils W; Lindenberg, Kerstin; Harter, Patrick N; Braczynski, Anne K; Plate, Karl H; Dohmen, Hildegard; Garvalov, Boyan K; Coras, Roland; Hölsken, Annett; Hewer, Ekkehard; Bewerunge-Hudler, Melanie; Schick, Matthias; Fischer, Roger; Beschorner, Rudi; Schittenhelm, Jens; Staszewski, Ori; Wani, Khalida; Varlet, Pascale; Pages, Melanie; Temming, Petra; Lohmann, Dietmar; Selt, Florian; Witt, Hendrik; Milde, Till; Witt, Olaf; Aronica, Eleonora; Giangaspero, Felice; Rushing, Elisabeth; Scheurlen, Wolfram; Geisenberger, Christoph; Rodriguez, Fausto J; Becker, Albert; Preusser, Matthias; Haberler, Christine; Bjerkvig, Rolf; Cryan, Jane; Farrell, Michael; Deckert, Martina; Hench, Jürgen; Frank, Stephan; Serrano, Jonathan; Kannan, Kasthuri; Tsirigos, Aristotelis; Brück, Wolfgang; Hofer, Silvia; Brehmer, Stefanie; Seiz-Rosenhagen, Marcel; Hänggi, Daniel; Hans, Volkmar; Rozsnoki, Stephanie; Hansford, Jordan R; Kohlhof, Patricia; Kristensen, Bjarne W; Lechner, Matt; Lopes, Beatriz; Mawrin, Christian; Ketter, Ralf; Kulozik, Andreas; Khatib, Ziad; Heppner, Frank; Koch, Arend; Jouvet, Anne; Keohane, Catherine; Mühleisen, Helmut; Mueller, Wolf; Pohl, Ute; Prinz, Marco; Benner, Axel; Zapatka, Marc; Gottardo, Nicholas G; Driever, Pablo Hernáiz; Kramm, Christof M; Müller, Hermann L; Rutkowski, Stefan; von Hoff, Katja; Frühwald, Michael C; Gnekow, Astrid; Fleischhack, Gudrun; Tippelt, Stephan; Calaminus, Gabriele; Monoranu, Camelia-Maria; Perry, Arie; Jones, Chris; Jacques, Thomas S; Radlwimmer, Bernhard; Gessi, Marco; Pietsch, Torsten; Schramm, Johannes; Schackert, Gabriele; Westphal, Manfred; Reifenberger, Guido; Wesseling, Pieter; Weller, Michael; Collins, Vincent Peter; Blümcke, Ingmar; 
Bendszus, Martin; Debus, Jürgen; Huang, Annie; Jabado, Nada; Northcott, Paul A; Paulus, Werner; Gajjar, Amar; Robinson, Giles W; Taylor, Michael D; Jaunmuktane, Zane; Ryzhova, Marina; Platten, Michael; Unterberg, Andreas; Wick, Wolfgang; Karajannis, Matthias A; Mittelbronn, Michel; Acker, Till; Hartmann, Christian; Aldape, Kenneth; Schüller, Ulrich; Buslei, Rolf; Lichter, Peter; Kool, Marcel; Herold-Mende, Christel; Ellison, David W; Hasselblatt, Martin; Snuderl, Matija; Brandner, Sebastian; Korshunov, Andrey; von Deimling, Andreas; Pfister, Stefan M
2018-03-22
Accurate pathological diagnosis is crucial for optimal management of patients with cancer. For the approximately 100 known tumour types of the central nervous system, standardization of the diagnostic process has been shown to be particularly challenging-with substantial inter-observer variability in the histopathological diagnosis of many tumour types. Here we present a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and demonstrate its application in a routine diagnostic setting. We show that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods, resulting in a change of diagnosis in up to 12% of prospective cases. For broader accessibility, we have designed a free online classifier tool, the use of which does not require any additional onsite data processing. Our results provide a blueprint for the generation of machine-learning-based tumour classifiers across other cancer entities, with the potential to fundamentally transform tumour pathology.
How reliable and accurate is the AO/OTA comprehensive classification for adult long-bone fractures?
Meling, Terje; Harboe, Knut; Enoksen, Cathrine H; Aarflot, Morten; Arthursson, Astvaldur J; Søreide, Kjetil
2012-07-01
Reliable classification of fractures is important for treatment allocation and study comparisons. The overall accuracy of scoring applied to a general population of fractures is little known. This study aimed to investigate the accuracy and reliability of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for adult long-bone fractures and identify factors associated with poor coding agreement. Adults (>16 years) with long-bone fractures coded in a Fracture and Dislocation Registry at the Stavanger University Hospital during the fiscal year 2008 were included. An unblinded reference code dataset was generated for the overall accuracy assessment by two experienced orthopedic trauma surgeons. Blinded analysis of intrarater reliability was performed by rescoring and of interrater reliability by recoding of a randomly selected fracture sample. Proportion of agreement (PA) and kappa (κ) statistics are presented. Uni- and multivariate logistic regression analyses of factors predicting accuracy were performed. During the study period, 949 fractures were included and coded by 26 surgeons. For the intrarater analysis, overall agreements were κ = 0.67 (95% confidence interval [CI]: 0.64-0.70) and PA 69%. For interrater assessment, κ = 0.67 (95% CI: 0.62-0.72) and PA 69%. The accuracy of surgeons' blinded recoding was κ = 0.68 (95% CI: 0.65-0.71) and PA 68%. Fracture type, frequency of the fracture, and segment fractured significantly influenced accuracy whereas the coder's experience did not. Both the reliability and accuracy of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for long-bone fractures ranged from substantial to excellent. Variations in coding accuracy seem to be related more to the fracture itself than the surgeon. Diagnostic study, level I.
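The two agreement statistics reported above, proportion of agreement (PA) and Cohen's kappa, can be computed from two coders' category assignments as follows (a minimal sketch of the standard two-rater formulas, not the study's analysis code):

```python
from collections import Counter

def agreement_stats(codes_a, codes_b):
    """Proportion of agreement (PA) and Cohen's kappa for two coders'
    category assignments over the same set of cases."""
    n = len(codes_a)
    # Observed agreement: fraction of cases coded identically
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    pa_marg = Counter(codes_a)
    pb_marg = Counter(codes_b)
    # Chance agreement: product of the two coders' marginal proportions
    pe = sum(pa_marg[c] * pb_marg.get(c, 0) for c in pa_marg) / (n * n)
    kappa = (po - pe) / (1 - pe)
    return po, kappa
```

Kappa discounts the agreement expected by chance, which is why PA (69%) and kappa (0.67) can tell different stories when category frequencies are skewed.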
Motion-Compensated Compression of Dynamic Voxelized Point Clouds.
De Queiroz, Ricardo L; Chou, Philip A
2017-05-24
Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
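The per-block intra-versus-inter decision "optimized in a rate-distortion sense" is, in the generic formulation, a Lagrangian cost comparison (a sketch of the standard technique; the function and parameter names are illustrative, not the paper's):

```python
def choose_mode(d_intra, r_intra, d_inter, r_inter, lam):
    """Lagrangian rate-distortion mode decision for one block of voxels:
    pick whichever of intra coding or motion compensation minimizes the
    combined cost J = D + lambda * R."""
    j_intra = d_intra + lam * r_intra
    j_inter = d_inter + lam * r_inter
    return "intra" if j_intra <= j_inter else "inter"
```

A larger lambda weights rate more heavily, pushing more blocks toward cheap motion-compensated replacement at the cost of distortion, which is how such a coder trades rate for geometry accuracy.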
ERIC Educational Resources Information Center
Dondi, Marco; Messinger, Daniel; Colle, Marta; Tabasso, Alessia; Simion, Francesca; Barba, Beatrice Dalla; Fogel, Alan
2007-01-01
To better understand the form and recognizability of neonatal smiling, 32 newborns (14 girls; M = 25.6 hr) were videorecorded in the behavioral states of alertness, drowsiness, active sleep, and quiet sleep. Baby Facial Action Coding System coding of both lip corner raising (simple or non-Duchenne) and lip corner raising with cheek raising…
Fifty years of progress in speech coding standards
NASA Astrophysics Data System (ADS)
Cox, Richard
2004-10-01
Over the past 50 years, speech coding has taken root worldwide. Early applications were for the military and transmission for telephone networks. The military gave equal priority to intelligibility and low bit rate. The telephone network gave priority to high quality and low delay. These illustrate three of the four areas in which requirements must be set for any speech coder application: bit rate, quality, delay, and complexity. While the military could afford relatively expensive terminal equipment for secure communications, the telephone network needed low cost for massive deployment in switches and transmission equipment worldwide. Today speech coders are at the heart of the wireless phones and telephone answering systems we use every day. In addition to the technology and technical invention that has occurred, standards make it possible for all these different systems to interoperate. The primary areas of standardization are the public switched telephone network, wireless telephony, and secure telephony for government and military applications. With the advent of IP telephony there are additional standardization efforts and challenges. In this talk the progress in all areas is reviewed as well as a reflection on Jim Flanagan's impact on this field during the past half century.
Development of the ICD-10 simplified version and field test.
Paoin, Wansa; Yuenyongsuwan, Maliwan; Yokobori, Yukiko; Endo, Hiroyoshi; Kim, Sukil
2018-05-01
The International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) has been used in various Asia-Pacific countries for more than 20 years. Although ICD-10 is a powerful tool, clinical coding processes are complex; therefore, many developing countries have not been able to implement ICD-10-based health statistics (WHO-FIC APN, 2007). This study aimed to simplify ICD-10 clinical coding processes, to modify index terms to facilitate computer searching and to provide a simplified version of ICD-10 for use in developing countries. The World Health Organization Family of International Classifications Asia-Pacific Network (APN) developed a simplified version of ICD-10 and conducted field testing in Cambodia during February and March 2016. Ten hospitals were selected to participate. Each hospital sent a team to join a training workshop before using the ICD-10 simplified version to code 100 cases. All hospitals subsequently sent their coded records to the researchers. Overall, there were 1038 coded records with a total of 1099 ICD clinical codes assigned. The average accuracy rate was calculated as 80.71% (66.67-93.41%). Three types of clinical coding error were found: errors relating to the coder (14.56%), errors resulting from physician documentation (1.27%) and system errors (3.46%). The field trial results demonstrated that the APN ICD-10 simplified version is feasible to implement and an effective tool for ICD-10 clinical coding in hospitals. Developing countries may consider adopting the APN ICD-10 simplified version for ICD-10 code assignment in hospitals and health care centres. The simplified version can be viewed as an introductory tool which leads to the implementation of the full ICD-10 and may support subsequent ICD-11 adoption.
Analogue and digital linear modulation techniques for mobile satellite
NASA Technical Reports Server (NTRS)
Whitmarsh, W. J.; Bateman, A.; Mcgeehan, J. P.
1990-01-01
The choice of modulation format for a mobile satellite service is complex. The subjective performance is summarized of candidate schemes and voice coder technologies. It is shown that good performance can be achieved with both analogue and digital voice systems, although the analogue system gives superior performance in fading. The results highlight the need for flexibility in the choice of signaling format. Linear transceiver technology capable of using many forms of narrowband modulation is described.
Töpel, Mats; Zizka, Alexander; Calió, Maria Fernanda; Scharn, Ruud; Silvestro, Daniele; Antonelli, Alexandre
2017-03-01
Understanding the patterns and processes underlying the uneven distribution of biodiversity across space constitutes a major scientific challenge in systematic biology and biogeography, which largely relies on effectively mapping and making sense of rapidly increasing species occurrence data. There is thus an urgent need for making the process of coding species into spatial units faster, automated, transparent, and reproducible. Here we present SpeciesGeoCoder, an open-source software package written in Python and R, that allows for easy coding of species into user-defined operational units. These units may be of any size and be purely spatial (i.e., polygons) such as countries and states, conservation areas, biomes, islands, biodiversity hotspots, and areas of endemism, but may also include elevation ranges. This flexibility allows scoring species into complex categories, such as those encountered in topographically and ecologically heterogeneous landscapes. In addition, SpeciesGeoCoder can be used to facilitate sorting and cleaning of occurrence data obtained from online databases, and for testing the impact of incorrect identification of specimens on the spatial coding of species. The various outputs of SpeciesGeoCoder include quantitative biodiversity statistics, global and local distribution maps, and files that can be used directly in many phylogeny-based applications for ancestral range reconstruction, investigations of biome evolution, and other comparative methods. Our simulations indicate that even datasets containing hundreds of millions of records can be analyzed in relatively short time using a standard computer. We exemplify the use of SpeciesGeoCoder by inferring the historical dispersal of birds across the Isthmus of Panama, showing that lowland species crossed the Isthmus about twice as frequently as montane species with a marked increase in the number of dispersals during the last 10 million years. 
[ancestral area reconstruction; biodiversity patterns; ecology; evolution; point in polygon; species distribution data.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
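At the core of coding species occurrences into spatial units is a point-in-polygon test. A minimal even-odd ray-casting sketch of that step follows (an illustration of the classic algorithm only; SpeciesGeoCoder's actual implementation may differ):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray casting: is (x, y) inside the polygon given as a
    list of (x, y) vertex tuples? This is the core test behind coding
    occurrence records into user-defined operational units."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray cast rightward from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Each occurrence record is tested against every candidate polygon; the set of polygons containing it becomes the species' spatial coding.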
Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni
2016-01-01
The paper assesses the utility of Medtex on automating Cancer Registry notifications from narrative histology and cytology reports from the Queensland state-wide pathology information system. A corpus of 45.3 million pathology HL7 messages (including 119,581 histology and cytology reports) from a Queensland pathology repository for the year of 2009 was analysed by Medtex for cancer notification. Reports analysed by Medtex were consolidated at a patient level and compared against patients with notifiable cancers from the Queensland Oncology Repository (QOR). A stratified random sample of 1,000 patients was manually reviewed by a cancer clinical coder to analyse agreements and discrepancies. Sensitivity of 96.5% (95% confidence interval: 94.5-97.8%), specificity of 96.5% (95.3-97.4%) and positive predictive value of 83.7% (79.6-86.8%) were achieved for identifying cancer notifiable patients. Medtex achieved high sensitivity and specificity across the breadth of cancers, report types, pathology laboratories and pathologists throughout the State of Queensland. The high sensitivity also resulted in the identification of cancer patients that were not found in the QOR. High sensitivity was at the expense of positive predictive value; however, these cases may be considered as lower priority to Cancer Registries as they can be quickly reviewed. Error analysis revealed that system errors tended to be tumour stream dependent. Medtex is proving to be a promising medical text analytic system. High value cancer information can be generated through intelligent data classification and extraction on large volumes of unstructured pathology reports. PMID:28269893
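The three evaluation metrics reported above derive from a 2x2 confusion matrix of notifiable-cancer decisions. A minimal sketch (the counts in the test are invented for illustration, not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value (PPV)
    from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)   # notifiable patients correctly flagged
    specificity = tn / (tn + fp)   # non-notifiable patients correctly passed
    ppv = tp / (tp + fp)           # flagged patients who are truly notifiable
    return sensitivity, specificity, ppv
```

The trade-off the abstract describes, high sensitivity at the expense of PPV, corresponds to lowering the decision threshold so that fn shrinks while fp grows.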
Enhanced inter-subject brain computer interface with associative sensorimotor oscillations.
Saha, Simanto; Ahmed, Khawza I; Mostafa, Raqibul; Khandoker, Ahsan H; Hadjileontiadis, Leontios
2017-02-01
Electroencephalography (EEG) captures electrophysiological signatures of cortical events from the scalp with high-dimensional electrode montages. Excess sources often produce outliers and can obscure the actual event-related sources. In addition, EEG exhibits inherent inter-subject variability in brain dynamics, both at rest and during task performance, probably caused by instantaneous fluctuations of psychophysiological states. A wavelet coherence (WC) analysis for optimally selecting associative inter-subject channels is proposed here and used to boost the performance of motor imagery (MI)-based inter-subject brain computer interfaces (BCIs). The underlying hypothesis is that optimally associative inter-subject channels can reduce the effects of outliers and, thus, eliminate dissimilar cortical patterns. The proposed approach has been tested on dataset IVa from BCI competition III, comprising EEG data acquired from five healthy subjects who were given visual cues to perform 280 trials of MI for the right hand and right foot. Experimental results show increased classification accuracy (81.79%) using the 16 WC-selected channels compared to that (56.79%) achieved using all 118 available channels. The associative channels lie mostly around the sensorimotor regions of the brain, consistent with previous literature describing spatial brain dynamics during sensorimotor oscillations. These results suggest that the proposed approach paves the way for optimised EEG channel selection that could further boost the efficiency and real-time performance of BCI systems.
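The study above ranks channels by wavelet coherence between subjects. As a much simpler stand-in for that idea, the sketch below ranks channels by cross-subject Pearson correlation on synthetic data; the channel counts, the shared-signal construction and the use of correlation in place of wavelet coherence are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 512

# Synthetic "EEG": two subjects share activity on the first four channels only
shared = rng.standard_normal((n_channels, n_samples))
subj_a = shared + 0.1 * rng.standard_normal((n_channels, n_samples))
subj_b = shared.copy()
subj_b[4:] = rng.standard_normal((4, n_samples))  # dissimilar channels

def channel_association(x, y):
    """Per-channel |Pearson r| as a crude proxy for inter-subject coherence."""
    return np.array([abs(np.corrcoef(x[c], y[c])[0, 1])
                     for c in range(x.shape[0])])

scores = channel_association(subj_a, subj_b)
selected = sorted(np.argsort(scores)[-4:].tolist())  # keep the 4 most associative
```

A real implementation would replace the correlation with a time-frequency coherence measure averaged over the MI band.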
Campbell, J. Peter; Kalpathy-Cramer, Jayashree; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D.; Hutcheson, Kelly; Shapiro, Michael J.; Repka, Michael X.; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E.; Chan, R.V. Paul; Chiang, Michael F.
2016-01-01
Objective To identify patterns of inter-expert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). Design We developed two datasets of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP study, and determined a consensus reference standard diagnosis (RSD) for each image, based on 3 independent image graders and the clinical exam. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Subjects, Participants, and/or Controls Images obtained during routine ROP screening in neonatal intensive care units. 8 participating experts with >10 years of clinical ROP experience and >5 peer-reviewed ROP publications. Methods, Intervention, or Testing Expert classification of images of plus disease in ROP. Main Outcome Measures Inter-expert agreement (weighted kappa statistic), and agreement and bias on ordinal classification between experts (ANOVA) and the RSD (percent agreement). Results There was variable inter-expert agreement on diagnostic classifications between the 8 experts and the RSD (weighted kappa 0 – 0.75, mean 0.30). RSD agreement ranged from 80 – 94% agreement for the dataset of 100 images, and 29 – 79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert consistent with unique cut points for the diagnosis of plus disease and pre-plus disease. The two-way ANOVA model suggested a highly significant effect of both image and user on the average score (P<0.05, adjusted R2=0.82 for dataset A, and P< 0.05 and adjusted R2 =0.6615 for dataset B). 
Conclusions and Relevance There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different “cut-points” for the amounts of vascular abnormality required for presence of plus and pre-plus disease. This has important implications for research, teaching and patient care for ROP, and suggests that a continuous ROP plus disease severity score may more accurately reflect the behavior of expert ROP clinicians, and may better standardize classification in the future. PMID:27591053
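The weighted kappa reported in the preceding abstract discounts disagreements by their ordinal distance. A self-contained sketch of quadratic-weighted kappa; the integer category labels below are arbitrary illustrations, not the study's plus/pre-plus scale:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_cat):
    """Cohen's kappa with quadratic weights for ordinal categories 0..n_cat-1."""
    n = len(rater_a)
    obs = [[0] * n_cat for _ in range(n_cat)]
    for i, j in zip(rater_a, rater_b):
        obs[i][j] += 1
    marg_a = [sum(row) for row in obs]
    marg_b = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = (i - j) ** 2 / (n_cat - 1) ** 2   # quadratic disagreement weight
            num += w * obs[i][j]                  # observed weighted disagreement
            den += w * marg_a[i] * marg_b[j] / n  # chance-expected disagreement
    return 1.0 - num / den

perfect = quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3)
```

With quadratic weights, a one-step disagreement on a three-point scale costs a quarter of a two-step disagreement, which is why ordinal scales such as plus disease grading are usually reported this way.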
2013-01-01
Background The harmonization of European health systems brings with it a need for tools to allow the standardized collection of information about medical care. A common coding system and standards for the description of services are needed to allow local data to be incorporated into evidence-informed policy, and to permit equity and mobility to be assessed. The aim of this project has been to design such a classification and a related tool for the coding of services for Long Term Care (DESDE-LTC), based on the European Service Mapping Schedule (ESMS). Methods The development of DESDE-LTC followed an iterative process using nominal groups in 6 European countries. 54 researchers and stakeholders in health and social services contributed to this process. In order to classify services, we use the minimal organization unit or “Basic Stable Input of Care” (BSIC), coded by its principal function or “Main Type of Care” (MTC). The evaluation of the tool included an analysis of feasibility, consistency, ontology, inter-rater reliability, Boolean Factor Analysis, and a preliminary impact analysis (screening, scoping and appraisal). Results DESDE-LTC includes an alpha-numerical coding system, a glossary and an assessment instrument for mapping and counting LTC. It shows high feasibility, consistency, inter-rater reliability and face, content and construct validity. DESDE-LTC is ontologically consistent. It is regarded by experts as useful and relevant for evidence-informed decision making. Conclusion DESDE-LTC contributes to establishing a common terminology, taxonomy and coding of LTC services in a European context, and a standard procedure for data collection and international comparison. PMID:23768163
Sawatsky, Adam P; Parekh, Natasha; Muula, Adamson S; Bui, Thuy
2014-01-06
There is a critical shortage of healthcare workers in sub-Saharan Africa, and Malawi has one of the lowest physician densities in the region. One of the reasons for this shortage is inadequate retention of medical school graduates, partly due to the desire for specialization training. The University of Malawi College of Medicine has developed specialty training programs, but medical school graduates continue to report a desire to leave the country for specialization training. To understand this desire, we studied medical students' perspectives on specialization training in Malawi. We conducted semi-structured interviews of medical students in the final year of their degree program. We developed an interview guide through an iterative process, and recorded and transcribed all interviews for analysis. Two independent coders coded the manuscripts and assessed inter-coder reliability, and the authors used an "editing approach" to qualitative analysis to identify and categorize themes relating to the research aim. The University of Pittsburgh Institutional Review Board and the University of Malawi College of Medicine Research and Ethics Committee approved this study and authors obtained written informed consent from all participants. We interviewed 21 medical students. All students reported a desire for specialization training, with 12 (57%) students interested in specialties not currently offered in Malawi. Students discussed reasons for pursuing specialization training, impressions of specialization training in Malawi, reasons for staying or leaving Malawi to pursue specialization training and recommendations to improve training. Graduating medical students in Malawi have mixed views of specialization training in their own country and still desire to leave Malawi to pursue further training. 
Training institutions in sub-Saharan Africa need to understand the needs of the country's healthcare workforce and the needs of their graduating medical students to be able to match opportunities and retain graduating students.
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
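The joint quantizer/coder above pairs an ADC with a Golomb-Rice entropy coder. A software-only sketch of Rice coding for signed prediction residuals; on the chip this is mixed-signal circuitry, not Python, and the parameter choices below are illustrative:

```python
def rice_encode(value, k):
    """Encode a signed residual: zigzag map, then unary quotient + k-bit remainder."""
    u = 2 * value if value >= 0 else -2 * value - 1  # zigzag to unsigned
    q, r = divmod(u, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    """Decode one codeword from the front of a bit string; returns (value, length)."""
    q = 0
    while bits[q] == "1":          # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    u = q * (1 << k) + r
    value = u // 2 if u % 2 == 0 else -(u + 1) // 2  # undo zigzag
    return value, q + 1 + k

stream = "".join(rice_encode(v, 2) for v in [-3, 0, 5, -1])
```

Small residuals get short codewords, which is why Rice coding suits decorrelated (near-zero) prediction errors; the parameter k is tuned to the residual magnitude statistics.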
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high-performance block-based wavelet image coder designed for very low implementation complexity while offering rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process, and no intermediate buffering is needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, which gives more flexibility in the implementation. The codec achieves very good coding performance even when the block size is as small as 16 × 16.
Meneguette, Rodolfo I; Filho, Geraldo P R; Guidoni, Daniel L; Pessin, Gustavo; Villas, Leandro A; Ueyama, Jó
2016-01-01
Intelligent Transportation Systems (ITS) rely on Inter-Vehicle Communication (IVC) to streamline the operation of vehicles by managing vehicle traffic, assisting drivers with safety and sharing information, as well as providing appropriate services for passengers. Traffic congestion is an urban mobility problem, which causes stress to drivers and economic losses. In this context, this work proposes a solution for the detection, dissemination and control of congested roads based on inter-vehicle communication, called INCIDEnT. The main goal of the proposed solution is to reduce the average trip time, CO emissions and fuel consumption by allowing motorists to avoid congested roads. The simulation results show that our proposed solution leads to short delays and a low overhead. Moreover, it is efficient with regard to the coverage of the event and the distance to which the information can be propagated. The findings of the investigation show that the proposed solution leads to (i) high hit rate in the classification of the level of congestion, (ii) a reduction in average trip time, (iii) a reduction in fuel consumption, and (iv) reduced CO emissions.
2013-01-01
Background and purpose Guidelines for fracture treatment and evaluation require a valid classification. Classifications especially designed for children are available, but they might lead to reduced accuracy, considering the relative infrequency of childhood fractures in a general orthopedic department. We tested the reliability and accuracy of the Müller classification when used for long bone fractures in children. Methods We included all long bone fractures in children aged < 16 years who were treated in 2008 at the surgical ward of Stavanger University Hospital. 20 surgeons recorded 232 fractures. Datasets were generated for intra- and inter-rater analysis, as well as a reference dataset for accuracy calculations. We present proportion of agreement (PA) and kappa (K) statistics. Results For intra-rater analysis, overall agreement (κ) was 0.75 (95% CI: 0.68–0.81) and PA was 79%. For inter-rater assessment, K was 0.71 (95% CI: 0.61–0.80) and PA was 77%. Accuracy was estimated: κ = 0.72 (95% CI: 0.64–0.79) and PA = 76%. Interpretation The Müller classification (slightly adjusted for pediatric fractures) showed substantial to excellent accuracy among general orthopedic surgeons when applied to long bone fractures in children. However, separate knowledge about the child-specific fracture pattern, the maturity of the bone, and the degree of displacement must be considered when the treatment and the prognosis of the fractures are evaluated. PMID:23245225
A small terminal for satellite communication systems
NASA Technical Reports Server (NTRS)
Xiong, Fuqin; Wu, Dong; Jin, Min
1994-01-01
A small portable, low-cost satellite communications terminal system incorporating a modulator/demodulator and convolutional-Viterbi coder/decoder is described. Advances in signal processing and error-correction techniques in combination with higher power and higher frequencies aboard satellites allow for more efficient use of the space segment. This makes it possible to design small economical earth stations. The Advanced Communications Technology Satellite (ACTS) was chosen to test the system. ACTS, operating at the Ka band incorporates higher power, higher frequency, frequency and spatial reuse using spot beams and polarization.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2016-03-01
The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually under the use of bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells in 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells and thereby shortening the examination time.
Plasma cell quantification in bone marrow by computer-assisted image analysis.
Went, P; Mayer, S; Oberholzer, M; Dirnhofer, S
2006-09-01
Minor and major criteria for the diagnosis of multiple myeloma according to the definition of the WHO classification include different categories of the bone marrow plasma cell count: a shift from the 10-30% group to the > 30% group equals a shift from a minor to a major criterion, while the < 10% group does not contribute to the diagnosis. The plasma cell fraction in the bone marrow is therefore critical for the classification and optimal clinical management of patients with plasma cell dyscrasias. The aim of this study was (i) to establish a digital image analysis system able to quantify bone marrow plasma cells and (ii) to evaluate two quantification techniques in bone marrow trephines, i.e. computer-assisted digital image analysis and conventional light-microscopic evaluation, comparing the results with regard to inter-observer variation. Eighty-seven patients, 28 with multiple myeloma, 29 with monoclonal gammopathy of undetermined significance, and 30 with reactive plasmocytosis, were included in the study. Plasma cells in H&E- and CD138-stained slides were quantified by two investigators using light-microscopic estimation and computer-assisted digital analysis. The sets of results were correlated with rank correlation coefficients. Patients were categorized according to WHO criteria addressing the plasma cell content of the bone marrow (group 1: 0-10%, group 2: 11-30%, group 3: > 30%), and the results compared by kappa statistics. The degree of agreement in CD138-stained slides was higher for results obtained using the computer-assisted image analysis system than for light-microscopic evaluation (corr. coeff. = 0.782), as seen in the intra-individual (corr. coeff. = 0.960) and inter-individual (corr. coeff. = 0.899) correlations. Inter-observer agreement for categorized results (SM/PW: kappa 0.833) was high. Computer-assisted image analysis demonstrated a higher reproducibility of bone marrow plasma cell quantification.
This might be of critical importance for diagnosis, clinical management and prognostics when plasma cell numbers are low, which makes exact quantifications difficult.
Savage, Jason W; Moore, Timothy A; Arnold, Paul M; Thakur, Nikhil; Hsu, Wellington K; Patel, Alpesh A; McCarthy, Kathryn; Schroeder, Gregory D; Vaccaro, Alexander R; Dimar, John R; Anderson, Paul A
2015-09-15
The thoracolumbar injury classification system (TLICS) was evaluated in 20 consecutive pediatric spine trauma cases. The purpose of this study was to determine the reliability and validity of the TLICS in pediatric spine trauma. The TLICS was developed to improve the categorization and management of thoracolumbar trauma. TLICS has been shown to have good reliability and validity in the adult population. The clinical and radiographical findings of 20 pediatric thoracolumbar fractures were prospectively presented to 20 surgeons with disparate levels of training and experience with spinal trauma. These injuries were consecutively scored using the TLICS. Cohen unweighted κ coefficients and Spearman rank order correlation values were calculated for the key parameters (injury morphology, status of posterior ligamentous complex, neurological status, TLICS total score, and proposed management) to assess the inter-rater reliabilities. Five surgeons scored the same cases 3 months later to assess the intra-rater reliability. The actual management of each case was then compared with the treatment recommended by the TLICS algorithm to assess validity. The inter-rater κ statistics of all subgroups (injury morphology, status of the posterior ligamentous complex, neurological status, TLICS total score, and proposed treatment) were within the range of moderate to substantial reproducibility (0.524-0.958). All subgroups had excellent intra-rater reliability (0.748-1.000). The various indices for validity were calculated (80.3% correct, 0.836 sensitivity, 0.785 specificity, 0.676 positive predictive value, 0.899 negative predictive value). Overall, TLICS demonstrated good validity. The TLICS has good reliability and validity when used in the pediatric population. 
The inter-rater reliability of predicting management and indices for validity are lower than those in adults with thoracolumbar fractures, which is likely due to differences in the way children are treated for certain types of injuries. TLICS can be used to reliably categorize thoracolumbar injuries in the pediatric population; however, modifications may be needed to better guide treatment in this specific patient population. Level of Evidence: 4.
Jun, Sanghoon; Kim, Namkug; Seo, Joon Beom; Lee, Young Kyung; Lynch, David A
2017-12-01
We propose the use of ensemble classifiers to overcome inter-scanner variations in the differentiation of regional disease patterns in high-resolution computed tomography (HRCT) images of diffuse interstitial lung disease patients obtained from different scanners. A total of 600 rectangular 20 × 20-pixel regions of interest (ROIs) on HRCT images obtained from two different scanners (GE and Siemens) and the whole lung area of 92 HRCT images were classified as one of six regional pulmonary disease patterns by two expert radiologists. Textural and shape features were extracted from each ROI and the whole lung parenchyma. For automatic classification, individual and ensemble classifiers were trained and tested with the ROI dataset. We designed the following three experimental sets: an intra-scanner study in which the training and test sets were from the same scanner, an integrated scanner study in which the data from the two scanners were merged, and an inter-scanner study in which the training and test sets were acquired from different scanners. In the ROI-based classification, the ensemble classifiers showed better (p < 0.001) accuracy (89.73%, SD = 0.43) than the individual classifiers (88.38%, SD = 0.31) in the integrated scanner test. The ensemble classifiers also showed partial improvements in the intra- and inter-scanner tests. In the whole lung classification experiment, the quantification accuracies of the ensemble classifiers with integrated training (49.57%) were higher (p < 0.001) than those of the individual classifiers (48.19%). Furthermore, the ensemble classifiers also showed better performance in both the intra- and inter-scanner experiments. We concluded that the ensemble classifiers provide better performance when using integrated scanner images.
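The ensemble in the study above combines individually trained classifiers; the combination step itself can be as simple as a per-sample majority vote over the members' predictions. A toy sketch of that combiner, with invented classifier outputs (0 = normal, 1 = diseased) rather than anything from the HRCT dataset:

```python
from collections import Counter

def majority_vote(per_classifier_preds):
    """Combine predictions from several classifiers by per-sample majority vote."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_preds)]

# Three hypothetical classifiers labelling the same four ROIs
ensemble = majority_vote([[0, 1, 1, 0],
                          [0, 1, 0, 0],
                          [1, 1, 1, 0]])
```

The point of such an ensemble is that member errors made on only one scanner's data tend to be outvoted by members trained on the merged data.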
Observer variation in the assessment of root canal curvature.
Faraj, S; Boutsioukis, C
2017-02-01
To evaluate the inter- and intra-observer agreement between training/trained endodontists regarding the ex vivo classification of root canal curvature into three categories and its measurement using three quantitative methods. Periapical radiographs of seven extracted human posterior teeth with varying degrees of curvature were exposed ex vivo. Twenty training/trained endodontists were asked to classify the root canal curvature into three categories (<10°, 10-30°, >30°), to measure the curvature using three quantitative methods (Schneider, Weine, Pruett) and to draw angles of 10° or 30°, as a control experiment. The procedure was repeated after six weeks. Inter- and intra-observer agreement was evaluated by the intraclass correlation coefficient and weighted kappa. The inter-observer agreement on the visual classification of root canal curvature was substantial (ICC = 0.65, P < 0.018), but a trend towards underestimation of the angle was evident. Participants modified their classifications both within and between the two sessions. Median angles drawn as a control experiment were not significantly different from the target values (P > 0.10), but the results of individual participants varied. When quantitative methods were used, the inter- and intra-observer agreement on the angle measurements was considerably better (ICC = 0.76-0.82, P < 0.001) than on the radius measurements (ICC = 0.16-0.19, P > 0.895). Visual estimation of root canal curvature was not reliable. The use of computer-based quantitative methods is recommended. The measurement of radius of curvature was more subjective than angle measurement. Endodontic Associations need to provide specific guidelines on how to estimate root canal curvature in case difficulty assessment forms. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.
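The Schneider method in the study above measures canal curvature as the angle between two lines drawn through landmark points on the radiograph. A simplified 2-D sketch of that angle from three points; the coordinates are invented and this is a geometric approximation for illustration, not a clinical tool:

```python
import math

def curvature_angle(orifice, curve_start, apex):
    """Angle (degrees) between the coronal line (orifice -> curve_start) and the
    apical line (curve_start -> apex); 0 degrees means a straight canal."""
    v1 = (curve_start[0] - orifice[0], curve_start[1] - orifice[1])
    v2 = (apex[0] - curve_start[0], apex[1] - curve_start[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

straight = curvature_angle((0, 0), (0, 5), (0, 9))
curved = curvature_angle((0, 0), (0, 10), (10, 20))
```

Encoding the measurement this way is exactly why the study finds computer-based angle measurement more reproducible than visual estimation: the only observer input is the landmark positions.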
First in Space: The Army’s Role in U.S. Space Efforts, 1938-1958
2017-06-09
As the Advanced Research Projects Agency (ARPA) and the National Aeronautics and Space Administration (NASA) attempted to consolidate early space and missile efforts, inter-service rivalries coupled with... Keywords: Redstone, Jupiter, ARPA, NASA
NASA Technical Reports Server (NTRS)
Cummins, Kenneth L.; Carey, Lawrence D.; Schultz, Christopher J.; Bateman, Monte G.; Cecil, Daniel J.; Rudlosky, Scott D.; Petersen, Walter Arthur; Blakeslee, Richard J.; Goodman, Steven J.
2011-01-01
In order to produce useful proxy data for the GOES-R Geostationary Lightning Mapper (GLM) in regions not covered by VLF lightning mapping systems, we intend to employ data produced by ground-based (regional or global) VLF/LF lightning detection networks. Before using these data in GLM Risk Reduction tasks, it is necessary to have a quantitative understanding of the performance of these networks, in terms of CG flash/stroke DE, cloud flash/pulse DE, location accuracy, and CLD/CG classification error. This information is being obtained through inter-comparison with LMAs and well-quantified VLF/LF lightning networks. One of our approaches is to compare "bulk" counting statistics on the spatial scale of convective cells, in order to both quantify relative performance and observe variations in cell-based temporal trends provided by each network. In addition, we are using microsecond-level stroke/pulse time correlation to facilitate detailed inter-comparisons at a more-fundamental level. The current development status of our ground-based inter-comparison and evaluation tools will be presented, and performance metrics will be discussed through a comparison of Vaisala's Global Lightning Dataset (GLD360) with the NLDN at locations within and outside the U.S.
NASA Astrophysics Data System (ADS)
Cummins, K. L.; Carey, L. D.; Schultz, C. J.; Bateman, M. G.; Cecil, D. J.; Rudlosky, S. D.; Petersen, W. A.; Blakeslee, R. J.; Goodman, S. J.
2011-12-01
In order to produce useful proxy data for the GOES-R Geostationary Lightning Mapper (GLM) in regions not covered by VLF lightning mapping systems, we intend to employ data produced by ground-based (regional or global) VLF/LF lightning detection networks. Before using these data in GLM Risk Reduction tasks, it is necessary to have a quantitative understanding of the performance of these networks, in terms of CG flash/stroke DE, cloud flash/pulse DE, location accuracy, and CLD/CG classification error. This information is being obtained through inter-comparison with LMAs and well-quantified VLF/LF lightning networks. One of our approaches is to compare "bulk" counting statistics on the spatial scale of convective cells, in order to both quantify relative performance and observe variations in cell-based temporal trends provided by each network. In addition, we are using microsecond-level stroke/pulse time correlation to facilitate detailed inter-comparisons at a more-fundamental level. The current development status of our ground-based inter-comparison and evaluation tools will be presented, and performance metrics will be discussed through a comparison of Vaisala's Global Lightning Dataset (GLD360) with the NLDN at locations within and outside the U.S.
Expert identification of visual primitives used by CNNs during mammogram classification
NASA Astrophysics Data System (ADS)
Wu, Jimmy; Peck, Diondra; Hsieh, Scott; Dialani, Vandana; Lehman, Constance D.; Zhou, Bolei; Syrgkanis, Vasilis; Mackey, Lester; Patterson, Genevieve
2018-02-01
This work interprets the internal representations of deep neural networks trained for classification of diseased tissue in 2D mammograms. We propose an expert-in-the-loop interpretation method to label the behavior of internal units in convolutional neural networks (CNNs). Expert radiologists identify that the visual patterns detected by the units are correlated with meaningful medical phenomena such as mass tissue and calcified vessels. We demonstrate that several trained CNN models are able to produce explanatory descriptions to support the final classification decisions. We view this as an important first step toward interpreting the internal representations of medical classification CNNs and explaining their predictions.
NASA Astrophysics Data System (ADS)
Martín–Moruno, Prado; Visser, Matt
2017-11-01
The (generalized) Rainich conditions are algebraic conditions which are polynomial in the (mixed-component) stress-energy tensor. As such they are logically distinct from the usual classical energy conditions (NEC, WEC, SEC, DEC), and logically distinct from the usual Hawking-Ellis (Segré-Plebański) classification of stress-energy tensors (type I, type II, type III, type IV). There will of course be significant inter-connections between these classification schemes, which we explore in the current article. Overall, we shall argue that it is best to view the (generalized) Rainich conditions as a refinement of the classical energy conditions and the usual Hawking-Ellis classification.
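For orientation only (not from the abstract itself), the classical electrovac Rainich conditions, of which the generalized conditions discussed above are polynomial refinements, take roughly the following algebraic form for the mixed-component stress-energy tensor:

```latex
% Classical (electrovac) Rainich conditions; a sketch for orientation,
% stated for the mixed stress-energy tensor T^\mu{}_\nu.
T^{\mu}{}_{\mu} = 0, \qquad
T^{\mu}{}_{\alpha}\, T^{\alpha}{}_{\nu}
  = \tfrac{1}{4}\,\delta^{\mu}_{\nu}\, T^{\alpha}{}_{\beta}\, T^{\beta}{}_{\alpha}, \qquad
T_{\mu\nu}\, t^{\mu} t^{\nu} \ge 0 \quad \text{for all timelike } t^{\mu}.
```

The trace-free and "square proportional to identity" conditions are what make the conditions polynomial in the stress-energy tensor, which is the sense in which they refine the pointwise classical energy conditions.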
Optimization of the ANFIS using a genetic algorithm for physical work rate classification.
Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali
2018-03-13
Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.
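The GA optimization step described above can be illustrated with a minimal sketch. The model below is a hypothetical stand-in (a two-parameter linear fit in place of the ANFIS membership-function tuning), and the population size, mutation scale and selection scheme are illustrative assumptions, not the study's actual setup:

```python
import random

random.seed(0)

# Hypothetical stand-in for the ANFIS training data: the GA must recover
# the parameters (a, b) of y = 2x + 1 by minimizing RMSE.
data = [(x, 2.0 * x + 1.0) for x in range(20)]

def rmse(params):
    a, b = params
    err = [(a * x + b - y) ** 2 for x, y in data]
    return (sum(err) / len(err)) ** 0.5

def evolve(pop_size=30, generations=40, mut=0.3):
    # Random initial population of parameter vectors.
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=rmse)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(0, 1)            # one-point crossover
            child = p1[: cut + 1] + p2[cut + 1 :]
            child = [g + random.gauss(0, mut) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=rmse)

best = evolve()
print(best, rmse(best))
```

Because the parents are carried over unmutated, the best candidate never worsens between generations, mirroring the monotone accuracy improvement reported for the optimized classifier.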
Implementation and impact of ICD-10 (Part II).
Rahmathulla, Gazanfar; Deen, H Gordon; Dokken, Judith A; Pirris, Stephen M; Pichelmann, Mark A; Nottmeier, Eric W; Reimer, Ronald; Wharen, Robert E
2014-01-01
The transition from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to the new ICD-10 was set to occur on 1 October 2015. The American Medical Association had previously been successful in delaying the transition by over 10 years, most recently postponing its introduction to 2015. The new system will overcome many of the limitations present in the older version, thus paving the way to more accurate capture of clinical information. The benefits of the new ICD-10 system include improved quality of care, potential cost savings, reduction of unpaid claims, and improved tracking of healthcare data. The areas where challenges will be evident include planning and implementation, the cost of transition, a shortage of qualified coders, training and education of the healthcare workforce, and a loss of productivity during the changeover. The impacts include substantial costs to the healthcare system, but the projected long-term savings and benefits will be significant. Improved fraud detection, accurate data entry, the ability to analyze cost benefits of procedures, and enhanced quality outcome measures are the most significant beneficial factors of this change. The present Current Procedural Terminology and Healthcare Common Procedure Coding System code sets will be used for reporting ambulatory procedures in the same manner as they have been. ICD-10-PCS will replace ICD-9 procedure codes for inpatient hospital services. The ICD-10-CM will replace the clinical code sets. Our article focuses on the challenges of executing an ICD change and on strategies to minimize risk while transitioning to the new system. With the implementation deadline approaching, spine surgery practices that include multidisciplinary health specialists have to anticipate and prepare for the ICD change in order to mitigate risk. Education and communication are the keys to this process in spine practices.
Street, J T; Thorogood, N P; Cheung, A; Noonan, V K; Chen, J; Fisher, C G; Dvorak, M F
2013-06-01
Observational cohort comparison. To compare the previously validated Spine Adverse Events Severity system (SAVES) with International Classification of Diseases, Tenth Revision codes (ICD-10) codes for identifying adverse events (AEs) in patients with traumatic spinal cord injury (TSCI). Quaternary Care Spine Program. Patients discharged between 2006 and 2010 were identified from our prospective registry. Two consecutive cohorts were created based on the system used to record acute care AEs; one used ICD-10 coding by hospital coders and the other used SAVES data prospectively collected by a multidisciplinary clinical team. The ICD-10 codes were appropriately mapped to the SAVES. There were 212 patients in the ICD-10 cohort and 173 patients in the SAVES cohort. Analyses were adjusted to account for the different sample sizes, and the two cohorts were comparable based on age, gender and motor score. The SAVES system identified twice as many AEs per person as ICD-10 coding. Fifteen unique AEs were more reliably identified using SAVES, including neuropathic pain (32 × more; P<0.001), urinary tract infections (1.4 × ; P<0.05), pressure sores (2.9 × ; P<0.001) and intra-operative AEs (2.3 × ; P<0.05). Eight of these 15 AEs more frequently identified by SAVES significantly impacted length of stay (P<0.05). Risk factors such as patient age and severity of paralysis were more reliably correlated to AEs collected through SAVES than ICD-10. Implementation of the SAVES system for patients with TSCI captured more individuals experiencing AEs and more AEs per person compared with ICD-10 codes. This study demonstrates the utility of prospectively collecting AE data using validated tools.
DREAM: Classification scheme for dialog acts in clinical research query mediation.
Hoxha, Julia; Chandar, Praveen; He, Zhe; Cimino, James; Hanauer, David; Weng, Chunhua
2016-02-01
Clinical data access involves complex but opaque communication between medical researchers and query analysts. Understanding such communication is indispensable for designing intelligent human-machine dialog systems that automate query formulation. This study investigates email communication and proposes a novel scheme for classifying dialog acts in clinical research query mediation. We analyzed 315 email messages exchanged in the communication for 20 data requests obtained from three institutions. The messages were segmented into 1333 utterance units. Through a rigorous process, we developed a classification scheme and applied it for dialog act annotation of the extracted utterances. Evaluation results with high inter-annotator agreement demonstrate the reliability of this scheme. This dataset is used to contribute preliminary understanding of dialog acts distribution and conversation flow in this dialog space. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Selim, Serdar; Sonmez, Namik Kemal; Onur, Isin; Coslu, Mesut
2017-10-01
Connection of similar landscape patches with ecological corridors supports the habitat quality of these patches, increases urban ecological quality, and constitutes an important living and expansion area for wildlife. Furthermore, habitat connectivity provided by urban green areas supports biodiversity in urban areas. In this study, possible ecological connections between landscape patches were identified using the Expert classification technique and modeled with the probabilistic connection index. Firstly, the reflection responses of plants in various bands are used as data in the hypotheses. One of the important features of this method is the ability to use more than one image at the same time when forming a hypothesis. For this reason, before applying the Expert classification, the base images are prepared. In addition to the main image, hypothesis conditions were also created for each class with the NDVI image, which is commonly used in vegetation research. The results of a previously conducted supervised classification were also taken into account. We applied this classification method using raster imagery with user-defined variables. To establish ecological connections for the tree cover obtained from the classification, we used the Probabilistic Connection (PC) index. The probabilistic connection model, used in landscape planning and conservation studies to detect and prioritize critical areas for ecological connection, characterizes the possibility of direct connection between habitats. As a result, we obtained over 90% total accuracy in the accuracy assessment analysis. We established ecological connections with the PC index and created an inter-connected green space system. Thus, we proposed and implemented a green infrastructure system model, a topic prominent on the agenda of recent years.
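The NDVI-based hypothesis conditions described above can be sketched as a simple per-pixel rule. The band values, the 0.4 threshold, and the combination with a prior supervised classification are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical red and near-infrared reflectances for a 2x2 pixel window,
# plus a prior supervised classification (1 = vegetation).
red = np.array([[0.10, 0.40], [0.08, 0.35]])
nir = np.array([[0.60, 0.45], [0.55, 0.30]])
prior = np.array([[1, 0], [1, 0]])

# NDVI = (NIR - red) / (NIR + red); the small epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)

# A rule-based "hypothesis": label a pixel tree cover only if the NDVI
# evidence and the prior classification agree (threshold is illustrative).
tree_cover = (ndvi > 0.4) & (prior == 1)
print(ndvi.round(2))
print(tree_cover)
```

Combining several such conditions per class, each drawing on a different input layer, is the essence of the multi-image hypothesis mechanism the abstract highlights.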
Bishop, Julie Y; Jones, Grant L; Lewis, Brian; Pedroza, Angela
2015-04-01
In treatment of distal third clavicle fractures, the Neer classification system, based on the location of the fracture in relation to the coracoclavicular ligaments, has traditionally been used to determine fracture pattern stability. To determine the intra- and interobserver reliability in the classification of distal third clavicle fractures via standard plain radiographs and the intra- and interobserver agreement in the preferred treatment of these fractures. Cohort study (Diagnosis); Level of evidence, 3. Thirty radiographs of distal clavicle fractures were randomly selected from patients treated for distal clavicle fractures between 2006 and 2011. The radiographs were distributed to 22 shoulder/sports medicine fellowship-trained orthopaedic surgeons. Fourteen surgeons responded and took part in the study. The evaluators were asked to measure the size of the distal fragment, classify the fracture pattern as stable or unstable, assign the Neer classification, and recommend operative versus nonoperative treatment. The radiographs were reordered and redistributed 3 months later. Inter- and intrarater agreement was determined for the distal fragment size, stability of the fracture, Neer classification, and decision to operate. Single variable logistic regression was performed to determine what factors could most accurately predict the decision for surgery. Interrater agreement was fair for distal fragment size, moderate for stability, fair for Neer classification, slight for type IIB and III fractures, and moderate for treatment approach. Intrarater agreement was moderate for distal fragment size categories (κ = 0.50, P < .001) and Neer classification (κ = 0.42, P < .001) and substantial for stable fracture (κ = 0.65, P < .001) and decision to operate (κ = 0.65, P < .001). Fracture stability was the best predictor of treatment, with 89% accuracy (P < .001). Fracture stability determination and the decision to operate had the highest interobserver agreement. 
Fracture stability was the key determinant of treatment, rather than the Neer classification system or the size of the distal fragment. © 2015 The Author(s).
Buczinski, S; Faure, C; Jolivet, S; Abdallah, A
2016-07-01
To determine inter-observer agreement for a clinical scoring system for the detection of bovine respiratory disease complex in calves, and the impact of classifying calves as sick or healthy based on different cut-off values. Two third-year veterinary students (Observers 1 and 2) and one post-graduate student (Observer 3) received 4 hours of training on scoring dairy calves for signs of respiratory disease, including rectal temperature, cough, eye and nasal discharge, and ear position. Observers 1 and 2 scored 40 pre-weaning dairy calves 24 hours apart (80 observations) over three visits to a calf-rearing facility, and Observers 1, 2 and 3 scored 20 calves on one visit. Inter-observer agreement was assessed using percentage of agreement (PA) and Kappa statistics for individual clinical signs, comparing Observers 1 and 2. Agreement between the three observers for total clinical score was assessed using cut-off values of ≥4, ≥5 and ≥6 to indicate unhealthy calves. Inter-observer PA was 0.68 for rectal temperature, 0.78 for cough, 0.62 for nasal discharge, 0.63 for eye discharge, and 0.85 for ear position. Kappa values for all clinical signs indicated slight to fair agreement (<0.4), except rectal temperature, which had moderate agreement (0.6). The Fleiss' Kappa for total score, using cut-offs of ≥4, ≥5 and ≥6 to indicate unhealthy calves, was 0.35, 0.06 and 0.13, respectively, indicating slight to fair agreement. There were important inter-observer discrepancies in scoring clinical signs of respiratory disease using relatively inexperienced observers. These disagreements may ultimately mean increased false negative or false positive diagnoses and incorrect treatment of cases. Visual assessment of clinical signs associated with bovine respiratory disease needs to be thoroughly validated when disease monitoring is based on the use of a clinical scoring system.
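The percentage of agreement and Cohen's kappa used above for pairs of observers can be computed from the two label sequences directly; the sick/healthy calls below are invented for illustration:

```python
from collections import Counter

# Binary sick/healthy calls by two observers on the same eight calves
# (hypothetical labels, not data from the study).
obs1 = ["sick", "healthy", "sick", "healthy", "sick", "sick", "healthy", "healthy"]
obs2 = ["sick", "healthy", "healthy", "healthy", "sick", "sick", "sick", "healthy"]

n = len(obs1)
po = sum(a == b for a, b in zip(obs1, obs2)) / n      # observed (percentage) agreement

# Chance agreement from each observer's marginal label frequencies.
c1, c2 = Counter(obs1), Counter(obs2)
pe = sum(c1[k] / n * c2[k] / n for k in set(obs1) | set(obs2))

# Cohen's kappa corrects the raw agreement for chance.
kappa = (po - pe) / (1 - pe)
print(round(po, 3), round(kappa, 3))  # → 0.75 0.5
```

This is why a high PA (e.g. 0.85 for ear position) can coexist with a low kappa: when one label dominates, chance agreement pe is large and the chance-corrected statistic drops.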
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Sevcenco, Sabina; Spick, Claudio; Helbich, Thomas H; Heinz, Gertraud; Shariat, Shahrokh F; Klingler, Hans C; Rauchenwald, Michael; Baltzer, Pascal A
2017-06-01
To systematically review the literature on the Bosniak classification system in CT to determine its diagnostic performance in diagnosing malignant cystic lesions and the prevalence of malignancy in Bosniak categories. A predefined database search was performed from 1 January 1986 to 18 January 2016. Two independent reviewers extracted data on malignancy rates in Bosniak categories and several covariates using predefined criteria. Study quality was assessed using QUADAS-2. Meta-analysis included data pooling, subgroup analyses, meta-regression and investigation of publication bias. A total of 35 studies, which included 2,578 lesions, were investigated. Data on observer experience, inter-observer variation and technical CT standards were insufficiently reported. The pooled rate of malignancy increased from Bosniak I (3.2%, 95% CI 0-6.8, I² = 5%) to Bosniak II (6%, 95% CI 2.7-9.3, I² = 32%), IIF (6.7%, 95% CI 5-8.4, I² = 0%), III (55.1%, 95% CI 45.7-64.5, I² = 89%) and IV (91%, 95% CI 87.7-94.2, I² = 36%). Several study design-related influences on malignancy rates and subsequent diagnostic performance indices were identified. The Bosniak classification is an accurate tool with which to stratify the risk of malignancy in renal cystic lesions. • The Bosniak classification can accurately rule out malignancy. • Specificity remains moderate at 74% (95% CI 64-82). • Follow-up examinations should be considered in Bosniak IIF and Bosniak II cysts. • Data on the influence of reader experience and inter-reader variability are insufficient. • Technical CT standards and publication year did not influence diagnostic performance.
Rawashdeh, Mohammad; Lewis, Sarah; Zaitoun, Maha; Brennan, Patrick
2018-05-01
While there is much literature describing the radiologic detection of breast cancer, there are limited data available on the agreement between experts when delineating and classifying breast lesions. The aim of this work is to measure the level of agreement between expert radiologists when delineating and classifying breast lesions as demonstrated through Breast Imaging Reporting and Data System (BI-RADS) and quantitative shape metrics. Forty mammographic images, each containing a single lesion, were presented to nine expert breast radiologists using a high specification interactive digital drawing tablet with stylus. Each reader was asked to manually delineate the breast masses using the tablet and stylus and then visually classify the lesion according to the American College of Radiology (ACR) BI-RADS lexicon. The delineated lesion compactness and elongation were computed using Matlab software. Intraclass Correlation Coefficient (ICC) and Cohen's kappa were used to assess inter-observer agreement for delineation and classification outcomes, respectively. Inter-observer agreement was fair for BI-RADS shape (kappa = 0.37) and moderate for margin (kappa = 0.58) assessments. Agreement for quantitative shape metrics was good for lesion elongation (ICC = 0.82) and excellent for compactness (ICC = 0.93). Fair to moderate levels of agreement was shown by radiologists for shape and margin classifications of cancers using the BI-RADS lexicon. When quantitative shape metrics were used to evaluate radiologists' delineation of lesions, good to excellent inter-observer agreement was found. The results suggest that qualitative descriptors such as BI-RADS lesion shape and margin understate the actual level of expert radiologist agreement. Copyright © 2018 Elsevier Ltd. All rights reserved.
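The quantitative shape metrics above can be sketched for a binary lesion mask. The study computed them in Matlab; the definitions below (isoperimetric compactness and bounding-box elongation) are plausible assumptions rather than the paper's exact formulas:

```python
import math
import numpy as np

# Hypothetical binary lesion mask (1 = lesion pixel) on a 20x20 grid.
mask = np.zeros((20, 20), dtype=int)
mask[5:15, 3:17] = 1          # a 10 x 14 rectangular "lesion"

area = int(mask.sum())

# Perimeter: lesion pixels with at least one 4-connected background neighbour.
padded = np.pad(mask, 1)
neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
         padded[1:-1, :-2] + padded[1:-1, 2:])
perimeter = int(((mask == 1) & (neigh < 4)).sum())

# Isoperimetric compactness: 1.0 for an ideal disc, lower for ragged shapes
# (pixelation can push it slightly above 1 for small discs).
compactness = 4 * math.pi * area / perimeter ** 2

# Elongation from the bounding box: 1.0 for a square, larger when stretched.
rows, cols = np.nonzero(mask)
h = int(rows.max() - rows.min() + 1)
w = int(cols.max() - cols.min() + 1)
elongation = max(h, w) / min(h, w)
print(round(compactness, 3), round(elongation, 3))
```

Because both metrics are computed from the delineated outline itself, they measure agreement on the drawn contour directly, which is plausibly why they show higher inter-observer agreement than the categorical BI-RADS descriptors.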
AVHRR composite period selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Multitemporal satellite image datasets provide valuable information on the phenological characteristics of vegetation, thereby significantly increasing the accuracy of cover type classifications compared to single-date classifications. However, the processing of these datasets can become very complex when dealing with multitemporal data combined with multispectral data. Advanced Very High Resolution Radiometer (AVHRR) biweekly composite data are commonly used to classify land cover over large regions. Selecting a subset of these biweekly composite periods may be required to reduce the complexity and cost of land cover mapping. The objective of our research was to evaluate the effect of reducing the number of composite periods and altering the spacing of those composite periods on classification accuracy. Because inter-annual variability can have a major impact on classification results, 5 years of AVHRR data were evaluated. AVHRR biweekly composite images for spectral channels 1-4 (visible, near-infrared and two thermal bands) covering the entire growing season were used to classify 14 cover types over the entire state of Colorado for each of five different years. A supervised classification method was applied to maintain consistent procedures for each case tested. Results indicate that the number of composite periods can be halved, from 14 composite dates to seven, without significantly reducing overall classification accuracy (80.4% Kappa accuracy for the 14-composite dataset compared to 80.0% for a seven-composite dataset). At least seven composite periods were required to ensure the classification accuracy was not affected by inter-annual variability due to climate fluctuations.
Concentrating more composites near the beginning and end of the growing season, as compared to using evenly spaced time periods, consistently produced slightly higher classification accuracy over the 5 years tested (average Kappa of 80.3% for the heavy early/late case compared to 79.0% for the alternate dataset case).
A proposal for classification of entities combining vascular malformations and deregulated growth.
Oduber, Charlène E U; van der Horst, Chantal M A M; Sillevis Smitt, J Henk; Smeulders, Mark J C; Mendiratta, Vibhu; Harper, John I; van Steensel, Maurice A M; Hennekam, Raoul C M
2011-01-01
Agreement on terminology and nomenclature is fundamental and essential for effective exchange of information between clinicians and researchers. An adequate terminology to describe all patients showing vascular malformations combined with deregulated growth is at present not available. To propose a classification of patients with vascular malformations, not restricted to the face, and growth disturbances based on simple, clinically visible characteristics, on which clinicians and researchers can comment and which should eventually lead to an internationally accepted classification. Rooted in our joint experience we established a classification of vascular malformation not limited to the face, with growth disturbances. It is based on the nature and localization of the vascular malformations; the nature, localization and timing of growth disturbances; the nature of co-localization of the vascular malformations and growth disturbances; the presence or absence of other features. Subsequently a mixed (experienced and non-experienced) group of observers evaluated 146 patients (106 from the Netherlands; 40 from the UK) with vascular malformations and disturbed growth, using the classification. Inter-observer variability was assessed by estimating the Intra-Class Correlation (ICC) coefficient and its 95% confidence interval. We defined 6 subgroups within the group of entities with vascular malformation-deregulated growth. Scoring the patients using the proposed classification yielded a high inter-observer reproducibility (ICC varying between 0.747 and 0.895 for all levels of flow). The presently proposed classification was found to be reliable and easy to use for patients with vascular malformations with growth disturbances. We invite both clinicians and researchers to comment on the classification, in order to improve it further. 
In this way we may achieve our final aim of an internationally accepted classification of patients, which should facilitate clinical treatment and care, as well as research into the molecular background of entities combining vascular malformations and deregulated growth. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Development of a researcher codebook for use in evaluating social networking site profiles.
Moreno, Megan A; Egan, Katie G; Brockman, Libby
2011-07-01
Social networking sites (SNSs) are immensely popular and allow for the display of personal information, including references to health behaviors. Evaluating displayed content on an SNS for research purposes requires a systematic approach and a precise data collection instrument. The purpose of this article is to describe one approach to the development of a research codebook so that others may develop and test their own codebooks for use in SNS research. Our SNS research codebook began on the basis of health behavior theory and clinical criteria. Key elements in the codebook developmental process included an iterative team approach and an emphasis on confidentiality. Codebook successes include consistently high inter-rater reliability. Challenges include time investment in coder training and SNS server changes. We hope that this article will provide detailed information about one systematic approach to codebook development so that other researchers may use this structure to develop and test their own codebooks for use in SNS research. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin
2015-03-01
The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system without significantly reducing lesion classification performance. This is a new Fourier transform based CBIR algorithm that determines the image similarity of two regions of interest (ROIs) based on the difference of the average regional image pixel value distribution in the two Fourier transform mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and reproducibility under queried lesion center shifts were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier transform based image feature to CBIR schemes.
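A loose sketch of the Fourier transform based similarity idea, under the assumption that ROIs are compared through average magnitudes in regions of the Fourier-mapped image (the binning scheme and scoring below are illustrative, not the paper's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_signature(roi, bins=8):
    """Average FFT magnitude in concentric square rings around the spectrum center.

    Comparing ROIs through the distribution of average values in the Fourier
    domain, rather than pixel by pixel, is less sensitive to small shifts of
    the queried lesion center.
    """
    mag = np.abs(np.fft.fftshift(np.fft.fft2(roi)))
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    # Chebyshev distance from the spectrum center, quantized into `bins` rings.
    ring = np.maximum(np.abs(y - cy), np.abs(x - cx)) * bins // (max(h, w) // 2 + 1)
    return np.array([mag[ring == b].mean() for b in range(bins)])

def similarity(a, b):
    sa, sb = fourier_signature(a), fourier_signature(b)
    return -np.abs(sa - sb).sum()    # higher (less negative) = more similar

roi = rng.random((32, 32))
shifted = np.roll(roi, 2, axis=0)    # simulates a small queried-center shift
other = rng.random((32, 32))
print(similarity(roi, shifted) > similarity(roi, other))  # → True
```

Because the FFT magnitude is invariant to circular shifts, the signature of the shifted ROI matches the original almost exactly, which illustrates the center-shift robustness the study reports for the Fourier-based scheme.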
NASA Technical Reports Server (NTRS)
Kondoz, A. M.; Evans, B. G.
1993-01-01
In the last decade, low bit rate speech coding research has received much attention resulting in newly developed, good quality, speech coders operating at as low as 4.8 Kb/s. Although speech quality at around 8 Kb/s is acceptable for a wide variety of applications, at 4.8 Kb/s more improvements in quality are necessary to make it acceptable to the majority of applications and users. In addition to the required low bit rate with acceptable speech quality, other facilities such as integrated digital echo cancellation and voice activity detection are now becoming necessary to provide a cost effective and compact solution. In this paper we describe a CELP speech coder with integrated echo canceller and a voice activity detector all of which have been implemented on a single DSP32C with 32 KBytes of SRAM. The quality of CELP coded speech has been improved significantly by a new codebook implementation which also simplifies the encoder/decoder complexity making room for the integration of a 64-tap echo canceller together with a voice activity detector.
The influence of radiographic viewing perspective and demographics on the Critical Shoulder Angle
Suter, Thomas; Popp, Ariane Gerber; Zhang, Yue; Zhang, Chong; Tashjian, Robert Z.; Henninger, Heath B.
2014-01-01
Background: Accurate assessment of the critical shoulder angle (CSA) is important in clinical evaluation of degenerative rotator cuff tears. This study analyzed the influence of radiographic viewing perspective on the CSA, developed a classification system to identify malpositioned radiographs, and assessed the relationship between the CSA and demographic factors. Methods: Glenoid height, width and retroversion were measured on 3D CT reconstructions of 68 cadaver scapulae. A digitally reconstructed radiograph was aligned perpendicular to the scapular plane, and retroversion was corrected to obtain a true antero-posterior (AP) view. In 10 scapulae, incremental anteversion/retroversion and flexion/extension views were generated. The CSA was measured and a clinically applicable classification system was developed to detect views with >2° change in CSA versus true AP. Results: The average CSA was 33±4°. Intra- and inter-observer reliability was high (ICC≥0.81) but decreased with increasing viewing angle. Views beyond 5° anteversion, 8° retroversion, 15° flexion and 26° extension resulted in >2° deviation of the CSA compared to true AP. The classification system was capable of detecting aberrant viewing perspectives with sensitivity of 95% and specificity of 53%. Correlations between glenoid size and CSA were small (R≤0.3), and CSA did not vary by gender (p=0.426) or side (p=0.821). Conclusions: The CSA was most susceptible to malposition in ante/retroversion. Deviations as little as 5° in anteversion resulted in a CSA >2° from true AP. A new classification system refines the ability to collect true AP radiographs of the scapula. The CSA was unaffected by demographic factors. PMID:25591458
Guenther, Daniel; Irarrázaval, Sebastian; Nishizawa, Yuichiro; Vernacchia, Cara; Thorhauer, Eric; Musahl, Volker; Irrgang, James J; Fu, Freddie H
2017-08-01
To propose a classification system for the shape of the tibial insertion site (TIS) of the anterior cruciate ligament (ACL) and to demonstrate the intra- and inter-rater agreement of this system. Due to variation in shape and size, different surgical approaches may be appropriate to improve reconstruction of the TIS. One hundred patients with a mean age of 26 ± 11 years were included. The ACL was cut arthroscopically at the base of the tibial insertion site. Arthroscopic images were taken from the lateral and medial portals. Images were de-identified and duplicated. Two blinded observers classified the tibial insertion site according to a classification system. The tibial insertion site was classified as type I (elliptical) in 51 knees (51%), type II (triangular) in 33 knees (33%) and type III (C-shaped) in 16 knees (16%). There was good agreement between raters when viewing the insertion site from the lateral portal (κ = 0.65) as well as from the medial portal (κ = 0.66). Intra-rater reliability was good to excellent. Agreement in the description of the insertion site between the medial and lateral portals was good for both raters (κ = 0.74 and 0.77, respectively). There is variation in the shape of the ACL TIS. The classification system is a repeatable and reliable tool to summarize the shape of the TIS using three common patterns. For clinical relevance, different shapes may require different types of reconstruction to ensure proper footprint restoration. Consideration of the individual TIS shape is required to prevent iatrogenic damage to adjacent structures such as the menisci. Level of evidence: III.
Health systems strengthening: a common classification and framework for investment analysis
Shakarishvili, George; Lansang, Mary Ann; Mitta, Vinod; Bornemisza, Olga; Blakley, Matthew; Kley, Nicole; Burgess, Craig; Atun, Rifat
2011-01-01
Significant scale-up of donors’ investments in health systems strengthening (HSS), and the increased application of harmonization mechanisms for jointly channelling donor resources in countries, necessitate the development of a common framework for tracking donors’ HSS expenditures. Such a framework would make it possible to comparatively analyse donors’ contributions to strengthening specific aspects of countries’ health systems in multi-donor-supported HSS environments. Four pre-requisite factors are required for developing such a framework: (i) harmonization of conceptual and operational understanding of what constitutes HSS; (ii) development of a common set of criteria to define health expenditures as contributors to HSS; (iii) development of a common HSS classification system; and (iv) harmonization of HSS programmatic and financial data to allow for inter-agency comparative analyses. Building on the analysis of these aspects, the paper proposes a framework for tracking donors’ investments in HSS, as a departure point for further discussions aimed at developing a commonly agreed approach. Comparative analysis of financial allocations by the Global Fund to Fight AIDS, Tuberculosis and Malaria and the GAVI Alliance for HSS, as an illustrative example of applying the proposed framework in practice, is also presented. PMID:20952397
Geophysical phenomena classification by artificial neural networks
NASA Technical Reports Server (NTRS)
Gough, M. P.; Bruckner, J. R.
1995-01-01
Space science information systems involve accessing vast databases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANNs) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANNs were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility of modularizing the network to allow the inter-relation of phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.
NASA's mobile satellite development program
NASA Technical Reports Server (NTRS)
Rafferty, William; Dessouky, Khaled; Sue, Miles
1988-01-01
A Mobile Satellite System (MSS) will provide data and voice communications over a vast geographical area to a large population of mobile users. A technical overview is given of the extensive research and development performed under NASA's mobile satellite program (MSAT-X) in support of the introduction of a U.S. MSS. The critical technologies necessary to enable such a system are emphasized: vehicle antennas, modulation and coding, speech coders, networking, and propagation characterization. Also proposed are first- and future-generation MSS architectures based upon realized ground-segment equipment and advanced space-segment studies.
Yu, Il Je; Kim, Dong Suk; Lim, Cheol Hong; Choi, Jung Yun; Lee, Je Bong; Chung, Ok-Sun; Kwon, Kyungok; Yum, Young Na; Kim, Jeongho; Kuk, Won-Kwen; Kim, Kyun
2007-12-01
To implement the globally harmonized system of classification and labelling of chemicals (GHS) in Korea, an inter-ministerial GHS committee, involving 8 ministries and an expert working group composed of 9 experts from relevant organizations and one private consultant, has made progress towards implementation by 2008. The first revision of the official Korean translated version of the GHS, in accordance with the GHS purple book revision 1 of 2005 and including annexes, began in August 2006 and was completed in December 2006. The Ministry of Labor also revised the Industrial Safety and Health Act (ISHA) relating to the GHS; the detailed notification was announced on Dec 12, 2006 and became effective immediately. The revised ISHA will allow continued use of the existing hazard communication system until Jun 30, 2008. Other revisions of chemical-related regulations will follow soon to facilitate the implementation of the GHS by 2008. In addition, inter-ministerial collaborative efforts on harmonizing regulations and disseminating the GHS in Korea will continue, to avoid confusion or duplication and to make effective use of resources.
Practical Qualitative Research Strategies: Training Interviewers and Coders.
Goodell, L Suzanne; Stage, Virginia C; Cooke, Natalie K
2016-09-01
The increased emphasis on incorporating qualitative methodologies into nutrition education development and evaluation underscores the importance of using rigorous protocols to enhance the trustworthiness of the findings. A 5-phase protocol for training qualitative research assistants (data collectors and coders) was developed as an approach to increase the consistency of the data produced. This training provides exposure to the core principles of qualitative research and then asks the research assistant to apply those principles through practice in a setting structured on critical reflection. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Brophy, Jere; And Others
This is the fourth in a series of four reports describing a study of 1,614 junior high school mathematics and English students and 69 of their teachers that was undertaken to discover the effects of different teaching behaviors on cognitive and affective student outcomes. This booklet is the working manual used for coder training and includes…
Automated quasi-3D spine curvature quantification and classification
NASA Astrophysics Data System (ADS)
Khilari, Rupal; Puchin, Juris; Okada, Kazunori
2018-02-01
Scoliosis is a highly prevalent spine deformity that has traditionally been diagnosed through measurement of the Cobb angle on radiographs. More recent technologies, such as the commercial EOS imaging system, although more accurate, still require manual intervention for selecting the extremes of the vertebrae forming the Cobb angle. This results in a high degree of inter- and intra-observer error in determining the extent of spine deformity. Our primary focus is to eliminate the need for manual intervention by robustly quantifying the curvature of the spine in three dimensions, making it consistent across multiple observers. Given the vertebrae centroids, the proposed Vertebrae Sequence Angle (VSA) estimation and segmentation algorithm finds the largest angle between consecutive pairs of centroids within multiple inflection points on the curve. To exploit existing clinical diagnostic standards, the algorithm uses a quasi-3-dimensional approach considering the curvature in the coronal and sagittal projection planes of the spine. Experiments were performed with manually annotated ground-truth classification of publicly available, centroid-annotated CT spine datasets. This was compared with the results obtained from manual Cobb and Centroid angle estimation methods. Using the VSA, we then automatically classify the occurrence and the severity of spine curvature based on Lenke's classification for idiopathic scoliosis. The results appear promising, with a scoliotic angle lying within +/- 9° of the Cobb and Centroid angles, and vertebrae positions differing by at most one position. Our system also achieved perfect classification of scoliotic versus healthy spines on our dataset of six cases.
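The core geometric step described above, measuring angles between consecutive centroid-to-centroid segments in a projection plane, can be illustrated in simplified 2-D form. This is a sketch of the general idea only; the paper's actual VSA algorithm, its inflection-point handling, and its datasets are not reproduced, and the centroids below are invented:

```python
import math

def segment_angles(centroids):
    """Angles (degrees) between consecutive centroid-to-centroid segments
    in one projection plane -- a rough 2-D analogue of the VSA idea."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(centroids, centroids[1:], centroids[2:]):
        v1 = (x1 - x0, y1 - y0)
        v2 = (x2 - x1, y2 - y1)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
        # Clamp for floating-point safety before acos
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))))
    return angles

# A mildly curved hypothetical 'spine' in the coronal plane;
# the largest bend serves as a crude scoliotic-angle proxy
spine = [(0, 0), (1, 2), (3, 4), (4, 6), (4, 8)]
print(round(max(segment_angles(spine)), 1))  # → 26.6
```

A perfectly straight column of centroids yields 0° throughout, so the maximum over the sequence grows with the severity of the curve.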
Martínez-Granados, Luis; Serrano, María; González-Utor, Antonio; Ortíz, Nereyda; Badajoz, Vicente; Olaya, Enrique; Prados, Nicolás; Boada, Montse; Castilla, Jose A
2017-01-01
The aim of this study is to determine inter-laboratory variability on embryo assessment using time-lapse platform and conventional morphological assessment. This study compares the data obtained from a pilot study of external quality control (EQC) of time lapse, performed in 2014, with the classical EQC of the Spanish Society for the Study of Reproductive Biology (ASEBIR) performed in 2013 and 2014. In total, 24 laboratories (8 using EmbryoScope™, 15 using Primo Vision™ and one with both platforms) took part in the pilot study. The clinics that used EmbryoScope™ analysed 31 embryos and those using Primo Vision™ analysed 35. The classical EQC was implemented by 39 clinics, based on an analysis of 25 embryos per year. Both groups were required to evaluate various qualitative morphological variables (cell fragmentation, the presence of vacuoles, blastomere asymmetry and multinucleation), to classify the embryos in accordance with ASEBIR criteria and to stipulate the clinical decision taken. In the EQC time-lapse pilot study, the groups were asked to determine, as well as the above characteristics, the embryo development times, the number, opposition and size of pronuclei, the direct division of 1 into 3 cells and/or of 3 into 5 cells and false divisions. The degree of agreement was determined by calculating the intra-class correlation coefficients and the coefficient of variation for the quantitative variables and the Gwet index for the qualitative variables. For both EmbryoScope™ and Primo Vision™, two periods of greater inter-laboratory variability were observed in the times of embryo development events. One peak of variability was recorded among the laboratories addressing the first embryo events (extrusion of the second polar body and the appearance of pronuclei); the second peak took place between the times corresponding to the 8-cell and morula stages. 
In most of the qualitative variables analysed regarding embryo development, there was almost-perfect inter-laboratory agreement among conventional morphological assessment (CMA), EmbryoScope™ and Primo Vision™, except for false divisions, vacuoles and asymmetry (users of all methods) and multinucleation (users of Primo Vision™), where the degree of agreement was lower. The inter-laboratory agreement on embryo classification according to the ASEBIR criteria was moderate-substantial (Gwet 0.41-0.80) for the laboratories using CMA and EmbryoScope™, and fair-moderate (Gwet 0.21-0.60) for those using Primo Vision™. The inter-laboratory agreement for clinical decision was moderate (Gwet 0.41-0.60) on day 5 for CMA users and almost perfect (Gwet 0.81-1) for time-lapse users. In conclusion, time-lapse technology does not improve inter-laboratory agreement on embryo classification or the analysis of each morphological variable. Moreover, depending on the time-lapse platform used, inter-laboratory agreement may be lower than that obtained by CMA. However, inter-laboratory agreement on clinical decisions is improved with the use of time lapse, regardless of the platform used.
Serrano, María; González-Utor, Antonio; Ortíz, Nereyda; Badajoz, Vicente; Olaya, Enrique; Prados, Nicolás; Boada, Montse; Castilla, Jose A.
2017-01-01
The aim of this study is to determine inter-laboratory variability on embryo assessment using time-lapse platform and conventional morphological assessment. This study compares the data obtained from a pilot study of external quality control (EQC) of time lapse, performed in 2014, with the classical EQC of the Spanish Society for the Study of Reproductive Biology (ASEBIR) performed in 2013 and 2014. In total, 24 laboratories (8 using EmbryoScope™, 15 using Primo Vision™ and one with both platforms) took part in the pilot study. The clinics that used EmbryoScope™ analysed 31 embryos and those using Primo Vision™ analysed 35. The classical EQC was implemented by 39 clinics, based on an analysis of 25 embryos per year. Both groups were required to evaluate various qualitative morphological variables (cell fragmentation, the presence of vacuoles, blastomere asymmetry and multinucleation), to classify the embryos in accordance with ASEBIR criteria and to stipulate the clinical decision taken. In the EQC time-lapse pilot study, the groups were asked to determine, as well as the above characteristics, the embryo development times, the number, opposition and size of pronuclei, the direct division of 1 into 3 cells and/or of 3 into 5 cells and false divisions. The degree of agreement was determined by calculating the intra-class correlation coefficients and the coefficient of variation for the quantitative variables and the Gwet index for the qualitative variables. For both EmbryoScope™ and Primo Vision™, two periods of greater inter-laboratory variability were observed in the times of embryo development events. One peak of variability was recorded among the laboratories addressing the first embryo events (extrusion of the second polar body and the appearance of pronuclei); the second peak took place between the times corresponding to the 8-cell and morula stages. 
In most of the qualitative variables analysed regarding embryo development, there was almost-perfect inter-laboratory agreement among conventional morphological assessment (CMA), EmbryoScope™ and Primo Vision™, except for false divisions, vacuoles and asymmetry (users of all methods) and multinucleation (users of Primo Vision™), where the degree of agreement was lower. The inter-laboratory agreement on embryo classification according to the ASEBIR criteria was moderate-substantial (Gwet 0.41–0.80) for the laboratories using CMA and EmbryoScope™, and fair-moderate (Gwet 0.21–0.60) for those using Primo Vision™. The inter-laboratory agreement for clinical decision was moderate (Gwet 0.41–0.60) on day 5 for CMA users and almost perfect (Gwet 0.81–1) for time-lapse users. In conclusion, time-lapse technology does not improve inter-laboratory agreement on embryo classification or the analysis of each morphological variable. Moreover, depending on the time-lapse platform used, inter-laboratory agreement may be lower than that obtained by CMA. However, inter-laboratory agreement on clinical decisions is improved with the use of time lapse, regardless of the platform used. PMID:28841654
Kopka, Michaela; Fourman, Mitchell; Soni, Ashish; Cordle, Andrew C; Lin, Albert
2017-09-01
The Walch classification is the most recognized means of assessing glenoid wear in preoperative planning for shoulder arthroplasty. This classification relies on advanced imaging, which is more expensive and less practical than plain radiographs. The purpose of this study was to determine whether the Walch classification could be accurately applied to x-ray images compared with magnetic resonance imaging (MRI) as the gold standard. We hypothesized that x-ray images cannot adequately replace advanced imaging in the evaluation of glenoid wear. Preoperative axillary x-ray images and MRI scans of 50 patients assessed for shoulder arthroplasty were independently reviewed by 5 raters. Glenoid wear was individually classified according to the Walch classification using each imaging modality. The raters then collectively reviewed the MRI scans and assigned a consensus classification to serve as the gold standard. The κ coefficient was used to determine interobserver agreement for x-ray images and independent MRI reads, as well as the agreement between x-ray images and consensus MRI. The inter-rater agreement for x-ray images and MRIs was "moderate" (κ = 0.42 and κ = 0.47, respectively) for the 5-category Walch classification (A1, A2, B1, B2, C) and "moderate" (κ = 0.54 and κ = 0.59, respectively) for the 3-category Walch classification (A, B, C). The agreement between x-ray images and consensus MRI was much lower: "fair-to-moderate" (κ = 0.21-0.51) for the 5-category and "moderate" (κ = 0.36-0.60) for the 3-category Walch classification. The inter-rater agreement between x-ray images and consensus MRI is "fair-to-moderate." This is lower than the previously reported reliability of the Walch classification using computed tomography scans. Accordingly, x-ray images are inferior to advanced imaging when assessing glenoid wear. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Pillai, Anilkumar; Medford, Andrew R L
2013-01-01
Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is intensity variation across an image, and different methods are used to overcome it. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.
Classification of customer lifetime value models using Markov chain
NASA Astrophysics Data System (ADS)
Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi
2017-10-01
A firm’s potential future reward from a customer can be quantified by customer lifetime value (CLV). Several mathematical methods exist to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, with transitions between the states following the Markov property. Given the states a customer can occupy and the transition relationships between them, Markov models can be built to describe the customer's behaviour. In these models, CLV is defined as a vector whose entries give the CLV of a customer in each state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with progressively more states, each new model addressing weaknesses in the previous one. The final models can be expected to describe the real behaviour of a firm's customers.
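A minimal two-state version of such a model can be sketched as follows; the transition probabilities, margin, and discount factor are illustrative assumptions, not values from the paper:

```python
def clv_markov(P, margins, discount, horizon):
    """Expected discounted reward per starting state over a finite horizon,
    iterating V_t = m + d * P @ V_{t+1} backwards from V_horizon = 0."""
    n = len(P)
    v = [0.0] * n
    for _ in range(horizon):
        v = [margins[i] + discount * sum(P[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v

# Two customer states: 0 = active, 1 = churned (absorbing, zero margin).
# An active customer stays active with probability 0.8 each period.
P = [[0.8, 0.2],
     [0.0, 1.0]]
clv = clv_markov(P, margins=[100.0, 0.0], discount=0.9, horizon=50)
print(round(clv[0], 2))  # ≈ 100 / (1 - 0.9 * 0.8) = 357.14
```

Adding states (e.g. distinguishing frequent from occasional buyers) only enlarges `P` and `margins`; the same recursion applies.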
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
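The pattern-matching operation that the VLSI chip accelerates is, in essence, a full-search nearest-codeword computation. A software sketch of that operation, with a toy codebook rather than the actual Vector PCM parameters:

```python
def nearest_codeword(codebook, vector):
    """Full-search codebook matching: return the index of the codeword
    with minimum squared Euclidean distortion to the input vector."""
    best_i, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = sum((c - v) ** 2 for c, v in zip(cw, vector))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# Hypothetical 2-dimensional codebook with three codewords
codebook = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
print(nearest_codeword(codebook, (0.9, 1.2)))  # → 1
```

Only the winning index is transmitted, which is why accelerating this inner search dominates the cost of a real-time vector quantization coder.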
FPGA in-the-loop simulations of cardiac excitation model under voltage clamp conditions
NASA Astrophysics Data System (ADS)
Othman, Norliza; Adon, Nur Atiqah; Mahmud, Farhanahani
2017-01-01
The voltage clamp technique allows the detection of single-channel currents in biological membranes, helping to identify a variety of electrophysiological problems at the cellular level. In this paper, a simulation study of the voltage clamp technique is presented to analyse the current-voltage (I-V) characteristics of ion currents based on the Luo-Rudy Phase-I (LR-I) cardiac model by using a Field Programmable Gate Array (FPGA). Cardiac models are becoming increasingly complex, and simulations can take a vast amount of time to run. A real-time hardware implementation using an FPGA could thus be one of the best solutions for high-performance real-time systems, as it provides high configurability and performance and can execute operations in parallel. To shorten development time while retaining high-confidence results, FPGA-based rapid prototyping through HDL Coder from MATLAB was used to construct the algorithm for the simulation system. HDL Coder converts the designed MATLAB Simulink blocks into a hardware description language (HDL) for FPGA implementation. As a result, the voltage-clamp fixed-point design of the LR-I model was successfully implemented in MATLAB Simulink, and the simulation of the I-V characteristics of the ionic currents was verified on a Xilinx FPGA Virtex-6 XC6VLX240T development board through an FPGA-in-the-loop (FIL) simulation.
Tzallas, A T; Karvelis, P S; Katsis, C D; Fotiadis, D I; Giannopoulos, S; Konitsiotis, S
2006-01-01
The aim of the paper is to analyze transient events in inter-ictal EEG recordings and classify epileptic activity into focal or generalized epilepsy using an automated method. A two-stage approach is proposed. In the first stage the observed transient events of a single channel are classified into four categories: epileptic spike (ES), muscle activity (EMG), eye blinking activity (EOG), and sharp alpha activity (SAA). The process is based on an artificial neural network. Different artificial neural network architectures were tried, and the network with the lowest error was selected using the hold-out approach. In the second stage a knowledge-based system is used to produce a diagnosis of focal or generalized epileptic activity. The classification of transient events achieved high overall accuracy (84.48%), while the knowledge-based system for epilepsy diagnosis correctly classified nine out of ten cases. The proposed method is advantageous since it effectively detects and classifies the undesirable activity into appropriate categories and produces a final outcome related to the existence of epilepsy.
Proposition of a Classification of Adult Patients with Hemiparesis in Chronic Phase.
Chantraine, Frédéric; Filipetti, Paul; Schreiber, Céline; Remacle, Angélique; Kolanowski, Elisabeth; Moissenet, Florent
2016-01-01
Patients who have developed hemiparesis as a result of a central nervous system lesion, often experience reduced walking capacity and worse gait quality. Although clinically, similar gait patterns have been observed, presently, no clinically driven classification has been validated to group these patients' gait abnormalities at the level of the hip, knee and ankle joints. This study has thus intended to put forward a new gait classification for adult patients with hemiparesis in chronic phase, and to validate its discriminatory capacity. Twenty-six patients with hemiparesis were included in this observational study. Following a clinical examination, a clinical gait analysis, complemented by a video analysis, was performed whereby participants were requested to walk spontaneously on a 10m walkway. A patient's classification was established from clinical examination data and video analysis. This classification was made up of three groups, including two sub-groups, defined with key abnormalities observed whilst walking. Statistical analysis was achieved on the basis of 25 parameters resulting from the clinical gait analysis in order to assess the discriminatory characteristic of the classification as displayed by the walking speed and kinematic parameters. Results revealed that the parameters related to the discriminant criteria of the proposed classification were all significantly different between groups and subgroups. More generally, nearly two thirds of the 25 parameters showed significant differences (p<0.05) between the groups and sub-groups. However, prior to being fully validated, this classification must still be tested on a larger number of patients, and the repeatability of inter-operator measures must be assessed. This classification enables patients to be grouped on the basis of key abnormalities observed whilst walking and has the advantage of being able to be used in clinical routines without necessitating complex apparatus. 
In the midterm, this classification may allow a decision-tree of therapies to be developed on the basis of the group in which the patient has been categorised.
Proposition of a Classification of Adult Patients with Hemiparesis in Chronic Phase
Filipetti, Paul; Remacle, Angélique; Kolanowski, Elisabeth
2016-01-01
Background Patients who have developed hemiparesis as a result of a central nervous system lesion, often experience reduced walking capacity and worse gait quality. Although clinically, similar gait patterns have been observed, presently, no clinically driven classification has been validated to group these patients’ gait abnormalities at the level of the hip, knee and ankle joints. This study has thus intended to put forward a new gait classification for adult patients with hemiparesis in chronic phase, and to validate its discriminatory capacity. Methods and Findings Twenty-six patients with hemiparesis were included in this observational study. Following a clinical examination, a clinical gait analysis, complemented by a video analysis, was performed whereby participants were requested to walk spontaneously on a 10m walkway. A patient’s classification was established from clinical examination data and video analysis. This classification was made up of three groups, including two sub-groups, defined with key abnormalities observed whilst walking. Statistical analysis was achieved on the basis of 25 parameters resulting from the clinical gait analysis in order to assess the discriminatory characteristic of the classification as displayed by the walking speed and kinematic parameters. Results revealed that the parameters related to the discriminant criteria of the proposed classification were all significantly different between groups and subgroups. More generally, nearly two thirds of the 25 parameters showed significant differences (p<0.05) between the groups and sub-groups. However, prior to being fully validated, this classification must still be tested on a larger number of patients, and the repeatability of inter-operator measures must be assessed. 
Conclusions This classification enables patients to be grouped on the basis of key abnormalities observed whilst walking and has the advantage of being able to be used in clinical routines without necessitating complex apparatus. In the midterm, this classification may allow a decision-tree of therapies to be developed on the basis of the group in which the patient has been categorised. PMID:27271533
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical work on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding is more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
Developing and utilising a new funding model for home-care services in New Zealand.
Parsons, Matthew; Rouse, Paul; Sajtos, Laszlo; Harrison, Julie; Parsons, John; Gestro, Lisa
2018-05-01
Worldwide increases in the numbers of older people, alongside an accompanying international policy incentive to support ageing-in-place, have highlighted the importance of home-care services as an alternative to institutionalisation. Despite this, funding models that facilitate a responsive, flexible approach are lacking. Casemix provides one solution, but the transition from the well-established hospital system to the community has been problematic. This research seeks to develop a Casemix funding solution for home-care services through meaningful client profile groups and supporting pathways. Unique assessments from 3,135 older people were collected from two health board regions in 2012. Of these, 1,009 arose from older people with non-complex needs using the interRAI-Contact Assessment (CA) and 2,126 from the interRAI-Home-Care (HC) from older people with complex needs. Home-care service hours were collected for 3 months following each assessment and the mean weekly hours were calculated. Data were analysed using a decision tree analysis, whereby mean weekly hours of home-care was the dependent variable and responses from the assessment tools were the independent variables. A total of three main groups were developed from the interRAI-CA, each one further classified into "stable" or "flexible." The classification explained 16% of the variability in formal home-care service hours. Analysis of the interRAI-HC generated 33 clusters, organised through eight disability "sub" groups and five "lead" groups. The groupings explained 24% of the variance in formal home-care service hours. Adopting a Casemix system within home-care services can facilitate a more appropriate response to the changing needs of older people. © 2017 John Wiley & Sons Ltd.
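The decision-tree analysis described above repeatedly splits on assessment responses to explain variance in mean weekly care hours. A minimal CART-style single-split sketch, using invented data rather than the interRAI assessments:

```python
def best_split(x, y):
    """One CART-style regression split: the threshold on predictor x that
    minimizes the within-group sum of squared errors of outcome y."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = sse(left) + sse(right)
        if best is None or score < best[1]:
            best = (t, score)
    return best[0]

# Hypothetical assessment score vs mean weekly home-care hours:
# low scorers need a few hours, high scorers need many
score = [1, 2, 2, 3, 7, 8, 9]
hours = [2.0, 2.5, 3.0, 2.5, 10.0, 11.0, 12.5]
print(best_split(score, hours))  # → 3
```

A full tree would apply this split recursively within each branch, producing client profile groups like the 33 clusters reported for the interRAI-HC.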
Braunschmidt, Brigitte; Müller, Gerhard; Jukic-Puntigam, Margareta; Steininger, Alfred
2013-01-01
Incontinence-associated dermatitis (IAD) is the clinical manifestation of moisture-related skin damage (Beeckman, Woodward, & Gray, 2011). Valid assessment instruments are needed for risk assessment and classification of IAD. The aim of this quantitative-descriptive cross-sectional study was to determine the inter-rater reliability of the item scores of the German Incontinence Associated Dermatitis Intervention Tool (IADIT-D) between two independent assessors of nursing home residents (n = 381) in long-term care facilities. The 19 pairs of assessors consisted of registered nurses. Data analysis began with calculation of the total percentage of agreement. Because this value is not chance-corrected, kappa coefficients and the AC1 statistic were calculated as well. The total percentage of inter-rater agreement was 84% (n = 319). In a second step of analysis, the calculation across all items showed high (kappa = .70) and very high (AC1 = .83) agreement levels, respectively. For the risk assessment (kappa = .82; AC1 = .94), the values amounted to very high agreement levels, and for the classification (kappa(w) = .70; AC1 = .76) to high agreement levels. The high to very high agreement values of the IADIT-D demonstrate that its items can be regarded as stable with regard to inter-rater reliability for use in long-term care facilities. Nevertheless, further validation studies are needed.
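The AC1 statistic reported above is Gwet's first-order agreement coefficient, which corrects percentage agreement for chance differently than kappa. A generic two-rater sketch with hypothetical ratings, not the study's data:

```python
from collections import Counter

def gwet_ac1(a, b):
    """Gwet's AC1 agreement coefficient for two raters on one set of items."""
    n = len(a)
    # Observed proportion of agreement
    pa = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: based on the pooled prevalence of each category
    cats = sorted(set(a) | set(b))
    ca, cb = Counter(a), Counter(b)
    pi = [(ca[c] + cb[c]) / (2 * n) for c in cats]
    pe = sum(p * (1 - p) for p in pi) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# Two hypothetical nurses rating 10 residents' skin as IAD present/absent
r1 = ["yes", "no", "no", "no", "yes", "no", "no", "no", "no", "no"]
r2 = ["yes", "no", "no", "yes", "yes", "no", "no", "no", "no", "no"]
print(round(gwet_ac1(r1, r2), 2))  # → 0.84
```

Unlike kappa, AC1 stays stable when one category is rare, which is one reason studies like this one report both statistics side by side.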
Fesharaki, Nooshin Jafari; Pourghassem, Hossein
2013-07-01
Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these images for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, classes that are similar in shape content are grouped into general overlapped classes based on merging measures and shape features. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or texture features only. This procedure continues through the last levels until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm according to the Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (IMAGECLEF 2005 database), and an accuracy rate of 93.6% is obtained in the last level of the hierarchical structure for the 18-class classification problem.
The ITE Land classification: Providing an environmental stratification of Great Britain.
Bunce, R G; Barr, C J; Gillespie, M K; Howard, D C
1996-01-01
The surface of Great Britain (GB) varies continuously in land cover from one area to another. The objective of any environmentally based land classification is to produce classes that match the patterns that are present by helping to define clear boundaries. The more appropriate the analysis and data used, the better the classes will fit the natural patterns. The observation of inter-correlations between ecological factors is the basis for interpreting ecological patterns in the field, and the Institute of Terrestrial Ecology (ITE) Land Classification formalises such subjective ideas. The data inevitably comprise a large number of factors in order to describe the environment adequately. Single factors, such as altitude, would only be useful on a national basis if they were the only dominant causative agent of ecological variation. The ITE Land Classification has defined 32 environmental categories called 'land classes', initially based on a sample of 1-km squares in Great Britain but subsequently extended to all 240 000 1-km squares. The original classification was produced using multivariate analysis of 75 environmental variables. The extension to all squares in GB was performed using a combination of logistic discrimination and discriminant functions. The classes have provided a stratification for successive ecological surveys, the results of which have characterised the classes in terms of botanical, zoological and landscape features. The classification has also been applied to integrate diverse datasets including satellite imagery, soils and socio-economic information. A variety of models have used the structure of the classification, for example to show potential land use change under different economic conditions. The principal data sets relevant for planning purposes have been incorporated into a user-friendly computer package, called the 'Countryside Information System'.
An adaptive DPCM encoder for NTSC composite video signals
NASA Astrophysics Data System (ADS)
Cox, N. R.
An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. Preliminary subjective and objective tests performed on an experimental encoder/simulator indicate that high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. This requires the use of a 4/8 bit dual-word-length coder and buffer memory. Such a system might be useful in certain short hop applications if both large-signal and small-signal responses can be preserved.
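The predict/quantize/reconstruct loop at the heart of any DPCM coder can be sketched as follows. This is a deliberately minimal one-dimensional illustration with a previous-sample predictor and a fixed uniform quantizer, not the paper's adaptive contour-predicting design; all names are ours.

```python
def dpcm_encode(samples, step=4):
    """Quantize each prediction error against the previous *reconstructed*
    sample, so that encoder and decoder stay in lockstep."""
    codes, recon = [], 0
    for s in samples:
        e = s - recon          # prediction error
        q = round(e / step)    # quantized error index (what gets transmitted)
        codes.append(q)
        recon += q * step      # decoder-side reconstruction, tracked locally
    return codes

def dpcm_decode(codes, step=4):
    out, recon = [], 0
    for q in codes:
        recon += q * step
        out.append(recon)
    return out

sig = [0, 3, 9, 20, 24, 25, 22, 10]
dec = dpcm_decode(dpcm_encode(sig))
assert all(abs(a - b) <= 2 for a, b in zip(sig, dec))  # error bounded by step/2
```

Adaptive schemes such as the one in the paper vary the predictor (and, with a dual word length, the code size) according to local picture activity, but the closed encoder/decoder loop shown here is the common skeleton.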
Ramirez, Elena; Laosa, Olga; Guerra, Pedro; Duque, Blanca; Mosquera, Beatriz; Borobia, Alberto M; Lei, Suhua H; Carcas, Antonio J; Frias, Jesus
2010-01-01
AIM The aim of this study was to evaluate the acceptability of 124 bioequivalence (BE) studies with 80 active substances categorized according to the Biopharmaceutics Classification System (BCS), in order to establish whether there were different probabilities of proving BE between the different BCS classes. METHODS We evaluated the differences between pharmaceutical products with active substances from different BCS classes in terms of acceptability, number of subjects in the study (n), point estimates, and intra- and inter-subject coefficients of variation from BE studies with generic products. RESULTS Of the 124 BE studies, 89 (71.77%) were performed with pharmaceutical products containing active substances classified by the BCS. In all BCS classes there were non-bioequivalent pharmaceutical products: 4 out of 26 (15.38%) in class 1, 14 out of 28 (50%) in class 2, 3 out of 22 (13.63%) in class 3 and 1 out of 13 (7.69%) in class 4. When we removed those pharmaceutical products in which intra-subject variability was higher than predicted (2 in class 1 active substances, 9 in class 2 and 2 in class 3), there were still non-BE pharmaceutical products in classes 1, 2 and 3. CONCLUSIONS Comparisons between pharmaceutical products with active substances from the four BCS classes have not allowed us to define differential characteristics of each class in terms of n, or inter- and intra-subject variability, for Cmax or AUC. Despite the dissolution test methodology usually employed as quality control, pharmaceutical products with active substances from all four BCS classes yielded non-BE studies. PMID:21039763
Anderer, Peter; Gruber, Georg; Parapatics, Silvia; Woertz, Michael; Miazhynskaia, Tatiana; Klosch, Gerhard; Saletu, Bernd; Zeitlhofer, Josef; Barbanoj, Manuel J; Danker-Hopfe, Heidi; Himanen, Sari-Leena; Kemp, Bob; Penzel, Thomas; Grozinger, Michael; Kunz, Dieter; Rappelsberger, Peter; Schlogl, Alois; Dorffner, Georg
2005-01-01
To date, the only standard for the classification of sleep-EEG recordings that has found worldwide acceptance is the set of rules published in 1968 by Rechtschaffen and Kales. Even though several attempts have been made to automate the classification process, so far no method has been published that has proven its validity in a study including a sufficiently large number of controls and patients of all adult age ranges. The present paper describes the development and optimization of an automatic classification system that is based on one central EEG channel, two EOG channels and one chin EMG channel. It adheres to the decision rules for visual scoring as closely as possible and includes a structured quality control procedure by a human expert. The final system (Somnolyzer 24 x 7) consists of a raw data quality check, a feature extraction algorithm (density and intensity of sleep/wake-related patterns such as sleep spindles, delta waves, SEMs and REMs), a feature matrix plausibility check, a classifier designed as an expert system, a rule-based smoothing procedure for the start and end of stage REM, and finally a statistical comparison to age- and sex-matched normal healthy controls (Siesta Spot Report). The expert system considers different prior probabilities of stage changes depending on the preceding sleep stage, the occurrence of a movement arousal and the position of the epoch within the NREM/REM sleep cycles. Moreover, results obtained with and without using the chin EMG signal are combined. The Siesta polysomnographic database (590 recordings in both normal healthy subjects aged 20-95 years and patients suffering from organic or nonorganic sleep disorders) was split into two halves, which were randomly assigned to a training and a validation set, respectively.
The final validation revealed an overall epoch-by-epoch agreement of 80% (Cohen's kappa: 0.72) between the Somnolyzer 24 x 7 and the human expert scoring, as compared with an inter-rater reliability of 77% (Cohen's kappa: 0.68) between two human experts scoring the same dataset. Two Somnolyzer 24 x 7 analyses (including a structured quality control by two human experts) revealed an inter-rater reliability close to 1 (Cohen's kappa: 0.991), which confirmed that the variability induced by the quality control procedure, whereby approximately 1% of the epochs (in 9.5% of the recordings) are changed, can definitely be neglected. Thus, the validation study proved the high reliability and validity of the Somnolyzer 24 x 7 and demonstrated its applicability in clinical routine and sleep studies.
Neighborhood graph and learning discriminative distance functions for clinical decision support.
Tsymbal, Alexey; Zhou, Shaohua Kevin; Huber, Martin
2009-01-01
There are two essential reasons for the slow progress in the acceptance of clinical case retrieval and similarity search-based decision support systems: the special complexity of clinical data, which makes it difficult to define a meaningful and effective distance function on them, and the lack of transparency and explanation ability in many existing clinical case retrieval decision support systems. In this paper, we try to address these two problems by introducing a novel technique for visualizing inter-patient similarity based on a node-link representation with neighborhood graphs, and by considering two techniques for learning discriminative distance functions that help to combine the power of strong "black box" learners with the transparency of case retrieval and nearest neighbor classification.
Qcorp: an annotated classification corpus of Chinese health questions.
Guo, Haihong; Na, Xu; Li, Jiao
2018-03-22
Health question-answering (QA) systems have become a typical application scenario of Artificial Intelligence (AI). An annotated question corpus is a prerequisite for training machines to understand the health information needs of users. Thus, we aimed to develop an annotated classification corpus of Chinese health questions (Qcorp) and make it openly accessible. We developed a two-layered classification schema and corresponding annotation rules on the basis of our previous work. Using the schema, we annotated 5000 questions that were randomly selected from 5 Chinese health websites within 6 broad sections. Eight annotators participated in the annotation task, and inter-annotator agreement was evaluated to ensure the corpus quality. Furthermore, the distribution and relationships of the annotated tags were measured by descriptive statistics and a social network map. The questions were annotated using 7101 tags that cover the 29 topic categories in the two-layered schema. In our released corpus, the distribution of questions across the top-layer categories was: treatment, 64.22%; diagnosis, 37.14%; epidemiology, 14.96%; healthy lifestyle, 10.38%; and health provider choice, 4.54%. Both the annotated health questions and the annotation schema are openly accessible on the Qcorp website. Users can download the annotated Chinese questions in CSV, XML, and HTML formats. We developed a Chinese health question corpus including 5000 manually annotated questions. It is openly accessible and will contribute to the development of intelligent health QA systems.
Annotation and prediction of stress and workload from physiological and inertial signals.
Ghosh, Arindam; Danieli, Morena; Riccardi, Giuseppe
2015-08-01
Continuous daily stress and high workload can have negative effects on individuals' physical and mental well-being. It has been shown that physiological signals may support the prediction of stress and workload. However, previous research has been limited by the low diversity of signals used for such predictive tasks and by controlled experimental designs. In this paper we present 1) a pipeline for continuous and real-life acquisition of physiological and inertial signals, 2) a mobile agent application for on-the-go event annotation, and 3) an end-to-end signal processing and classification system for stress and workload from diverse signal streams. We study physiological signals such as Galvanic Skin Response (GSR), Skin Temperature (ST), Inter-Beat Interval (IBI) and Blood Volume Pulse (BVP) collected using a non-invasive wearable device, and inertial signals collected from accelerometer and gyroscope sensors. We combine them with subjects' inputs (e.g. event tagging) acquired using the agent application, and their emotion regulation scores. In our experiments we explore signal combination and selection techniques for predicting stress and workload in subjects whose signals have been recorded continuously during their daily life. The end-to-end classification system is described, covering feature extraction, signal artifact removal, and classification. We show that a combination of physiological, inertial and user event signals provides accurate prediction of stress for real-life users and signals.
Della Mea, Vincenzo; Vuattolo, Omar; Frattura, Lucilla; Munari, Flavia; Verdini, Eleonora; Zanier, Loris; Arcangeli, Laura; Carle, Flavia
2015-01-01
In Italy, ICD-9-CM is currently used for coding health conditions at hospital discharge, but ICD-10 is being introduced thanks to the IT-DRG Project. Within this project, one necessary component is a set of transcoding rules and associated tools for easing coders' work during the transition. The present paper illustrates the design and development of those transcoding rules, and their preliminary testing on a subset of Italian hospital discharge data.
Hierarchical image coding with diamond-shaped sub-bands
NASA Technical Reports Server (NTRS)
Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken
1992-01-01
We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.
Reliability of a New Radiographic Classification for Developmental Dysplasia of the Hip.
Narayanan, Unni; Mulpuri, Kishore; Sankar, Wudbhav N; Clarke, Nicholas M P; Hosalkar, Harish; Price, Charles T
2015-01-01
Existing radiographic classification schemes (eg, Tönnis criteria) for DDH quantify the severity of disease based on the position of the ossific nucleus relative to Hilgenreiner's and Perkin's lines. By definition, this method requires the presence of an ossification centre, which can be delayed in appearance and eccentric in location within the femoral head. A new radiographic classification system has been developed by the International Hip Dysplasia Institute (IHDI), which uses the mid-point of the proximal femoral metaphysis as a reference landmark, and can therefore be applied to children of all ages. The purpose of this study was to compare the reliability of this new method with that of Tönnis, as the first step in establishing its validity and clinical utility. Twenty standardized anteroposterior pelvic radiographs of children with untreated DDH were selected purposefully to capture the spectrum of age (range, 3 to 32 mo) at presentation and disease severity. Each of the hips was classified separately by the IHDI and Tönnis methods by 6 experienced pediatric orthopaedists from the United States, Canada, Mexico, United Kingdom, and by 2 orthopaedic senior residents. The inter-rater reliability was tested using the Intra Class Correlation coefficient (ICC) to measure concordance between raters. All 40 hips were classifiable by the IHDI method by all raters. Ten of the 40 hips could not be classified by the Tönnis method because of the absence of the ossific nucleus on one or both sides. The ICC (95% confidence interval) for the IHDI method for all raters was 0.90 (0.83-0.95) and 0.95 (0.91-0.98) for the right and left hips, respectively. The corresponding ICCs for the Tönnis method were 0.63 (0.46-0.80) and 0.60 (0.43-0.78), respectively. There was no significant difference between the ICCs of the 6 experts and 2 trainees. 
The IHDI method of classification has excellent inter-rater reliability, both among experts and novices, and is more widely applicable than the Tönnis method as it can be applied even when the ossification centre is absent. Level II (diagnostic).
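The inter-rater statistic used above, the intra-class correlation, can be computed from a hips-by-raters matrix of ordinal grades. Below is a rough sketch (not the study's analysis) of the two-way random-effects form ICC(2,1), with invented IHDI-style grades:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1), two-way random effects, single rater, absolute agreement,
    for an n-targets-by-k-raters matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    row_m, col_m = x.mean(axis=1), x.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)   # between-target mean square
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)   # between-rater mean square
    sse = ((x - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# toy grades: rows = hips, columns = raters (invented, not the study's data)
grades = np.array([[1, 1, 1], [2, 2, 3], [3, 3, 3], [4, 4, 4], [2, 2, 2]])
print(round(icc2_1(grades), 2))  # → 0.95
```

A single disagreement among fifteen ratings still leaves the coefficient near 1 here; the study's lower Tönnis ICCs (about 0.6) reflect much more frequent disagreement, on top of the hips that could not be graded at all.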
Nakajima, Erica C; Frankland, Michael P; Johnson, Tucker F; Antic, Sanja L; Chen, Heidi; Chen, Sheau-Chiann; Karwoski, Ronald A; Walker, Ronald; Landman, Bennett A; Clay, Ryan D; Bartholmai, Brian J; Rajagopalan, Srinivasan; Peikert, Tobias; Massion, Pierre P; Maldonado, Fabien
2018-01-01
Lung adenocarcinoma (ADC), the most common lung cancer type, is recognized increasingly as a disease spectrum. To guide individualized patient care, a non-invasive means of distinguishing indolent from aggressive ADC subtypes is needed urgently. Computer-Aided Nodule Assessment and Risk Yield (CANARY) is a novel computed tomography (CT) tool that characterizes early ADCs by detecting nine distinct CT voxel classes, representing a spectrum of lepidic to invasive growth, within an ADC. CANARY characterization has been shown to correlate with ADC histology and patient outcomes. This study evaluated the inter-observer variability of CANARY analysis. Three novice observers segmented and analyzed independently 95 biopsy-confirmed lung ADCs from Vanderbilt University Medical Center/Nashville Veterans Administration Tennessee Valley Healthcare system (VUMC/TVHS) and the Mayo Clinic (Mayo). Inter-observer variability was measured using intra-class correlation coefficient (ICC). The average ICC for all CANARY classes was 0.828 (95% CI 0.76, 0.895) for the VUMC/TVHS cohort, and 0.852 (95% CI 0.804, 0.901) for the Mayo cohort. The most invasive voxel classes had the highest ICC values. To determine whether nodule size influenced inter-observer variability, an additional cohort of 49 sub-centimeter nodules from Mayo were also segmented by three observers, with similar ICC results. Our study demonstrates that CANARY ADC classification between novice CANARY users has an acceptably low degree of variability, and supports the further development of CANARY for clinical application.
Algorithms for a very high speed universal noiseless coding module
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Yeh, Pen-Shu
1991-01-01
The algorithmic definitions and performance characterizations are presented for a high performance adaptive coding module. Operation of at least one of these (single chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions. A companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a dynamic range of 1.5 to 12-14 bits/sample (depending on the implementation).
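Standard Form sources of the kind described above (uncorrelated non-negative integers, with small values most probable) are exactly what split-sample Golomb-Rice codes handle well, and Rice's adaptive module selects among such options. The sketch below shows the basic split-sample codeword, with our own function names and parameter choice (the module's actual option set is more elaborate):

```python
def rice_encode(n, k):
    """Golomb-Rice codeword for non-negative integer n with split parameter k:
    unary-coded quotient, a terminating 0, then the k low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                      # count of leading 1s = quotient
    r = int(bits[q + 1:q + 1 + k] or "0", 2)  # k remainder bits (none if k == 0)
    return (q << k) | r

# round-trip check over a range of values and split parameters
for n in range(40):
    for k in range(4):
        assert rice_decode(rice_encode(n, k), k) == n
```

Small integers get short codewords (for k = 2, the value 1 costs 3 bits while 11 costs 5), which is why performance tracks the source entropy when k is adapted to the local data statistics.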
Calcium, Vitamin D, Iron, and Folate Messages in Three Canadian Magazines.
Cooper, Marcia; Zalot, Lindsay; Wadsworth, Laurie A
2014-12-01
Data from the Canadian Community Health Survey showed that calcium, vitamin D, iron, and folate are nutrients of concern for females 19-50 years of age. The study objectives were to assess the quantity, format, and accuracy of messages related to these nutrients in selected Canadian magazines and to examine their congruency with Canadian nutrition policies. Using content analysis methodology, messages were coded using a stratified sample of a constructed year for Canadian Living, Chatelaine, and Homemakers magazines (n = 33) from 2003-2008. Pilot research was conducted to assess inter-coder agreement and to develop the study coding sheet and codebook. The messages identified (n = 595) averaged 18 per magazine issue. Messages were most numerous for calcium, followed by folate, iron, and vitamin D, and were found primarily in articles (46%) and advertisements (37%). Overall, most messages were coded as accurate (82%) and congruent with Canadian nutrition policies (90%). This research demonstrated that the majority of messages in 3 Canadian magazines between 2003 and 2008 were accurate and reflected Canadian nutrition policies. Because Canadian women continue to receive much nutrition information via print media, this research provides important insights for dietitians into media messaging.
Bagot, Kathleen L; Cadilhac, Dominique A; Bladin, Christopher F; Watkins, Caroline L; Vu, Michelle; Donnan, Geoffrey A; Dewey, Helen M; Emsley, Hedley C A; Davies, D Paul; Day, Elaine; Ford, Gary A; Price, Christopher I; May, Carl R; McLoughlin, Alison S R; Gibson, Josephine M E; Lightbody, Catherine E
2017-11-21
Stroke telemedicine can reduce healthcare inequities by increasing access to specialists. Successful telemedicine networks require specialists adapting clinical practice to provide remote consultations. Variation in experiences of specialists between different countries is unknown. To support future implementation, we compared perceptions of Australian and United Kingdom specialists providing remote acute stroke consultations. Specialist participants were identified using purposive sampling from two new services: Australia's Victorian Stroke Telemedicine Program (n = 6; 2010-13) and the United Kingdom's Cumbria and Lancashire telestroke network (n = 5; 2010-2012). Semi-structured interviews were conducted pre- and post-implementation, recorded and transcribed verbatim. Deductive thematic and content analysis (NVivo) was undertaken by two independent coders using Normalisation Process Theory to explore integration of telemedicine into practice. Agreement between coders was M = 91%, SD = 9 and weighted average κ = 0.70. Cross-cultural similarities and differences were found. In both countries, specialists described old and new consulting practices, the purpose and value of telemedicine systems, and concerns regarding confidence in the assessment and diagnostic skills of unknown colleagues requesting telemedicine support. Australian specialists discussed how remote consultations impacted on usual roles and suggested future improvements, while United Kingdom specialists discussed system governance, policy and procedures. Australian and United Kingdom specialists reported telemedicine required changes in work practice and development of new skills. Both groups described potential for improvements in stroke telemedicine systems with Australian specialists more focused on role change and the United Kingdom on system governance issues. Future research should examine if cross-cultural variation reflects different models of care and extends to other networks.
Ridder, Hans-Gerd; Doege, Vanessa; Martini, Susanne
2007-12-01
This article aims to examine the implementation process of diagnosis-related groups (DRGs) in the clinical departments of a German hospital group and to explain why some gain competitive advantage while others do not. To investigate this research question, we conducted a qualitative study based on primary data obtained in six clinical departments in a German hospital group between 2003 and 2005. We chose the case study method in order to gain deep insights into the process dynamics of the implementation of DRGs in the six clinical departments. The dynamic capability approach is used as a theoretical foundation. Employing theory-driven categories we focused on idiosyncratic and common patterns of "successful coders" and "unsuccessful coders." To observe the implementation process of DRGs, we conducted 43 semistructured interviews with key persons, carried out direct observations of the monthly meetings of the DRG project group, and sampled written materials. "Successful coders" invest into change resources, demonstrate a high level of acceptance of innovations, and organize effective processes of coordination and learning. All clinical departments only put an emphasis on the coding aspects of the DRGs. There is a lack of vision regarding the optimization of patient treatment processes and specialization. Physicians are the most important key actors, rather than the main barriers.
Ethical and educational considerations in coding hand surgeries.
Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E
2014-07-01
To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. In addition, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of the 134 people to whom it was sent (response rate = 53%). We observed marked disparity in the codes chosen, both among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, plays a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Morawietz, L; Gehrke, Th; Classen, R-A; Barden, B; Otto, M; Hansen, T; Aigner, Th; Stiehl, P; Neidel, J; Schröder, J H; Frommelt, L; Schubert, Th; Meyer-Scholten, C; König, A; Ströbel, Ph; Rader, Ch P; Kirschner, S; Lintner, F; Rüther, W; Skwara, A; Bos, I; Kriegsmann, J; Krenn, V
2004-09-01
After 10 years, loosening of total joint endoprostheses occurs in about 3 to 10 percent of all patients, requiring elaborate revision surgery. A periprosthetic membrane is routinely found between bone and the loosened prosthesis. Further histomorphological examination allows determination of the etiology of the loosening process. The aim of this study is to introduce clearly defined histopathological criteria for a standardized evaluation of the periprosthetic membrane. Based on histomorphological criteria and polarized light microscopy, four types of the periprosthetic membrane were defined: periprosthetic membrane of wear particle type (type I), periprosthetic membrane of infectious type (type II), periprosthetic membrane of combined type (type III), periprosthetic membrane of indifferent type (type IV). Periprosthetic membranes of 268 patients were analyzed according to the defined criteria. The correlation between histopathological and microbiological diagnosis was high (89%, p < 0.001), and the inter-observer reproducibility was sufficient (95%). This classification system enables a standardized diagnostic procedure and is therefore a basis for further studies concerning the etiology and pathogenesis of prosthesis loosening.
Implementation and impact of ICD-10 (Part II)
Rahmathulla, Gazanfar; Deen, H. Gordon; Dokken, Judith A.; Pirris, Stephen M.; Pichelmann, Mark A.; Nottmeier, Eric W.; Reimer, Ronald; Wharen, Robert E.
2014-01-01
Background: The transition from the International Classification of Disease-9th Clinical Modification to the new ICD-10 was all set to occur on 1 October 2015. The American Medical Association had previously been successful in delaying the transition by over 10 years and was able to further postpone its introduction to 2015. The new system will overcome many of the limitations present in the older version, thus paving the way to more accurate capture of clinical information. Methods: The benefits of the new ICD-10 system include improved quality of care, potential cost savings, reduction of unpaid claims, and improved tracking of healthcare data. The areas where challenges will be evident include planning and implementation, the cost of transition, a shortage of qualified coders, training and education of the healthcare workforce, and a loss of productivity when the change occurs. The impacts include substantial costs to the healthcare system, but the projected long-term savings and benefits will be significant. Improved fraud detection, accurate data entry, the ability to analyze cost-benefit with procedures, and enhanced quality outcome measures are the most significant beneficial factors of this change. Results: The present Current Procedural Terminology and Healthcare Common Procedure Coding System code sets will be used for reporting ambulatory procedures in the same manner as they have been. ICD-10-PCS will replace ICD-9 procedure codes for inpatient hospital services, and ICD-10-CM will replace the clinical code sets. Our article focuses on the challenges of executing an ICD change and on strategies to minimize risk while transitioning to the new system. Conclusion: With the implementation deadline approaching, spine surgery practices that include multidisciplinary health specialists have to anticipate and prepare for the ICD change in order to mitigate risk. Education and communication are the key to this process in spine practices. PMID:25184098
Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher
2017-03-01
Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as that in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.
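The latent class step can be illustrated with a toy expectation-maximisation fit. The sketch below is entirely ours, not the study's code: a two-class latent class model over three binary code indicators, nothing like the scale of the Nationwide Inpatient Sample analysis, but it shows how observations with distinct code patterns get assigned to separate latent classes.

```python
import math
import random

def lca_em(data, n_iter=200, seed=0):
    """Two-class latent class model for binary indicator vectors, fitted by EM.
    Returns (class weights, per-class item endorsement probabilities)."""
    rng = random.Random(seed)
    m = len(data[0])
    w = [0.5, 0.5]  # class mixing weights
    p = [[rng.uniform(0.25, 0.75) for _ in range(m)] for _ in range(2)]
    for _ in range(n_iter):
        # E-step: posterior class membership for each observation
        resp = []
        for x in data:
            lik = [w[c] * math.prod(p[c][j] if x[j] else 1 - p[c][j]
                                    for j in range(m)) for c in range(2)]
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: re-estimate mixing weights and item probabilities
        for c in range(2):
            rc = sum(r[c] for r in resp)
            w[c] = rc / len(data)
            p[c] = [sum(r[c] * x[j] for r, x in zip(resp, data)) / rc
                    for j in range(m)]
    return w, p

# two perfectly separated coding patterns: 20 "sepsis-like", 30 "residual"
data = [(1, 1, 0)] * 20 + [(0, 0, 1)] * 30
weights, profiles = lca_em(data)
```

On this toy data the fitted weights approach 0.4 and 0.6 and each class profile approaches one of the two patterns; the study's finding that sepsis-specific codes form a class of their own is the same phenomenon at scale.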
Job coding (PCS 2003): feedback from a study conducted in an Occupational Health Service
Henrotin, Jean-Bernard; Vaissière, Monique; Etaix, Maryline; Malard, Stéphane; Dziurla, Mathieu; Lafon, Dominique
2016-10-19
Aim: To examine the quality of manual job coding carried out by occupational health teams with access to a software application that provides assistance in job and business sector coding (CAPS). Methods: Data from a study conducted in an Occupational Health Service were used to examine the first-level coding of 1,495 jobs by occupational health teams according to the French job classification "PCS – Professions and Socio-professional Categories" (INSEE, 2003 version). A second level of coding was also performed by an experienced coder, and the first- and second-level codes were compared. Agreement between the two codings was studied using the kappa coefficient (κ), and frequencies were compared by chi-squared tests. Results: Missing data or incorrect codes were observed for 14.5% of social groups (1 digit) and 25.7% of job codes (4 digits). While agreement between the first two levels of PCS 2003 appeared satisfactory (κ=0.73 and κ=0.75), imbalances in reassignment flows were nonetheless noted. The divergent job code rate was 48.2%. Variation in the frequency of socio-occupational variables was as high as 8.6% after correcting for missing data and divergent codes. Conclusions: Compared with other studies, the CAPS tool appeared to provide effective coding assistance. However, our results indicate that job coding based on PCS 2003 should be conducted using ancillary data by personnel trained in the use of this tool.
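The kappa coefficient used above to compare the two coding levels is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each coder's marginal category frequencies. A minimal self-contained sketch:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one category per unit.
    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement from the coders' marginals."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

For example, coders who agree on 3 of 4 units with chance agreement 0.5 get kappa = 0.5, which on the scale used in several of the studies collected here would count as only moderate agreement.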
Accuracy of the Interpretation of Chest Radiographs for the Diagnosis of Paediatric Pneumonia
Elemraid, Mohamed A.; Muller, Michelle; Spencer, David A.; Rushton, Stephen P.; Gorton, Russell; Thomas, Matthew F.; Eastham, Katherine M.; Hampton, Fiona; Gennery, Andrew R.; Clark, Julia E.
2014-01-01
Introduction: World Health Organization (WHO) radiological classification remains an important entry criterion in epidemiological studies of pneumonia in children. We report inter-observer variability in the interpretation of 169 chest radiographs in children suspected of having pneumonia. Methods: An 18-month prospective aetiological study of pneumonia was undertaken in Northern England. Chest radiographs were performed on eligible children aged ≤16 years with clinical features of pneumonia. The initial radiology report was compared with a subsequent assessment by a consultant cardiothoracic radiologist. Chest radiographic changes were categorised according to the WHO classification. Results: There was significant disagreement (22%) between the first and second reports (kappa = 0.70, P<0.001), notably in those aged <5 years (26%, kappa = 0.66, P<0.001). The most frequent sources of disagreement were the reporting of patchy and perihilar changes. Conclusion: This substantial inter-observer variability highlights the need for experts from different countries to create a consensus to review the radiological definition of pneumonia in children. PMID:25148361
NASA Astrophysics Data System (ADS)
Brooks, Kristine M.
The goal of science education is the preparation of scientifically literate students (Abd-El-Khalick & Lederman, 2000; American Association for the Advancement of Science (AAAS), 1990). In order to instruct students in the nature of science with its history, development, methods and applications, science teachers use textbooks as the primary organizer for the curriculum (Chippetta, Ganesh, Lee, & Phillips, 2006). Science textbooks are the dominant instructional tool and exert great influence on instructional content and its delivery (Wang, 1998). Science literacy requires acquiring knowledge about the natural world and understanding its application in society: in other words, understanding the nature of science. An understanding of the nature of science is an important part of science literacy (Abd-El-Khalick & Lederman, 2000; AAAS, 1990). The nature of science has four basic themes or dimensions: science as a body of knowledge, science as a way of thinking, science as a way of investigating, and science in its interaction with technology and society (Chippetta & Koballa, 2006). Textbooks must relay and incorporate these themes to promote science literacy. The results from this content analysis provide further insights into science textbooks and their content with regard to the inclusion of the nature of science and ethnic diversity. Science textbooks usually downplay human influences (Clough & Olson, 2004), whether as part of the nature of science with its historical development or in its interaction with societies of diverse cultures. Minority students are underperforming in science, and participation in science is divided along lines of ethnic, linguistic, and gender identity (Brown, 2005). Greater representation of diversity in curriculum materials enables minority students to identify with science (Nines, 2000). Textbooks, with their influence on curriculum and presentation, must include links between science and students of diverse cultures.
What is the balance of the four aspects of the nature of science, and what is the balance of ethnic diversity in the participants in science (students and scientists), in physical science textbooks? To answer these questions, this investigation used content analysis. For the balance of the four aspects of the nature of science, the analysis was conducted on random page samples of five physical science textbooks. A random sampling of the pages within the physical science textbooks should be sufficient to represent the content of the textbooks (Garcia, 1985). For the balance of ethnic diversity of the participants in science, the analysis was conducted on all pictures or drawings of students and scientists within the content of the five textbooks. One of these IPC books is currently in use in a large local school district, and the other four were published in the same or a similar year. Coding procedures for the sample used two sets of coders. One set of coders had previously analyzed middle school science textbooks for the nature of science (Phillips, 2006), and the coders for ethnic diversity were public school teachers who had worked with ethnically diverse students for over ten years. Both sets of coders were trained, and the reliability of their coding was checked before coding the five textbooks. To check inter-coder reliability, percent agreement, Cohen's kappa, and Krippendorff's alpha were calculated. The results from this study indicate that science as a body of knowledge and science as a way of investigating are the prevalent themes of the nature of science in the five physical science textbooks. This investigation also found an imbalance in the ethnic diversity of students and scientists portrayed within the chapters of the physical science textbooks studied. This imbalance reflects ratios that are neither balanced nor aligned with U.S. Census data.
Given that textbooks are the main sources of information in most classrooms, the imbalance of the nature of science could provide the students, and the teachers, with an incomplete perception and understanding of the nature of science. This imbalance could also provide the students with inadequate skills to develop and process science information and apply it to their world. The ethnic diversity portrayed in the physical science textbooks provides an inadequate link between the students' ethnic backgrounds and the ethnic diversity of the participants of science. Educators and publishers should provide science textbooks that incorporate all four aspects of the nature of science to a degree that science is perceived as more than just facts and information. Science must be recognized as a way of investigating, a way of thinking, and a way of applying knowledge to society. Further, in order to recognize all people who take part in science, students and scientists from a variety of ethnic groups should be portrayed in the physical science textbooks.
Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W
2000-11-01
The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. 
This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
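The geometric profile described above (mean triangle area, mean edge length, triangles per unit area) can be computed directly from a Delaunay mesh over the detected nucleus centres. A minimal sketch, assuming SciPy is available for the triangulation; the feature set is the one named in the abstract, not the authors' KS400 implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulation_features(points):
    """Architectural features from a Delaunay mesh over nucleus centres:
    mean triangle area, mean edge length, and triangles per unit area."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    a = pts[tri.simplices[:, 0]]
    b = pts[tri.simplices[:, 1]]
    c = pts[tri.simplices[:, 2]]
    # Triangle area via the 2D cross product of two edge vectors.
    v1, v2 = b - a, c - a
    areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
    edges = np.concatenate([np.linalg.norm(b - a, axis=1),
                            np.linalg.norm(c - b, axis=1),
                            np.linalg.norm(a - c, axis=1)])
    hull_area = areas.sum()  # the Delaunay triangles tile the convex hull
    return areas.mean(), edges.mean(), len(tri.simplices) / hull_area
```

Each epithelium image yields one such feature vector, which can then feed a discriminant analysis as in the study.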
Krishnaprasad, Krupa; Andrews, Jane M; Lawrance, Ian C; Florin, Timothy; Gearry, Richard B; Leong, Rupert W L; Mahy, Gillian; Bampton, Peter; Prosser, Ruth; Leach, Peta; Chitti, Laurie; Cock, Charles; Grafton, Rachel; Croft, Anthony R; Cooke, Sharon; Doecke, James D; Radford-Smith, Graham L
2012-04-01
Crohn's disease (CD) exhibits significant clinical heterogeneity. Classification systems attempt to describe this; however, their utility and reliability depends on inter-observer agreement (IOA). We therefore sought to evaluate IOA using the Montreal Classification (MC). De-identified clinical records of 35 CD patients from 6 Australian IBD centres were presented to 13 expert practitioners from 8 Australia and New Zealand Inflammatory Bowel Disease Consortium (ANZIBDC) centres. Practitioners classified the cases using MC and forwarded data for central blinded analysis. IOA on smoking and medications was also tested. Kappa statistics, with pre-specified outcomes of κ>0.8 excellent; 0.61-0.8 good; 0.41-0.6 moderate and ≤0.4 poor, were used. 97% of study cases had colonoscopy reports, however, only 31% had undergone a complete set of diagnostic investigations (colonoscopy, histology, SB imaging). At diagnosis, IOA was excellent for age, κ=0.84; good for disease location, κ=0.73; only moderate for upper GI disease (κ=0.57) and disease behaviour, κ=0.54; and good for the presence of perianal disease, κ=0.6. At last follow-up, IOA was good for location, κ=0.68; only moderate for upper GI disease (κ=0.43) and disease behaviour, κ=0.46; but excellent for the presence/absence of perianal disease, κ=0.88. IOA for immunosuppressant use ever and presence of stricture were both good (κ=0.79 and 0.64 respectively). IOA using MC is generally good; however some areas are less consistent than others. Omissions and inaccuracies reduce the value of clinical data when comparing cohorts across different centres, and may impair the ability to translate genetic discoveries into clinical practice. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
Pulley, Simon; Foster, Ian; Collins, Adrian L
2017-06-01
The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (2 and 3) was primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (1). 
Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
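The tracer variability ratio reported above (inter-/intra-source group variability) can be computed for any candidate grouping as the variance of the group means divided by the mean within-group variance. A minimal sketch for a single tracer property; the exact ratio definition used in the paper may differ in normalization:

```python
import numpy as np

def variability_ratio(values, labels):
    """Inter-/intra-source-group variability for one tracer property:
    variance of the group means divided by the mean within-group
    variance. Higher ratios indicate better source discrimination."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    groups = [values[labels == g] for g in np.unique(labels)]
    inter = np.var([g.mean() for g in groups])
    intra = np.mean([g.var() for g in groups])
    return inter / intra
```

Comparing this ratio across classification schemes (e.g., surface/subsurface versus cluster-derived groups) reproduces the kind of comparison the study uses to choose between them.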
Wu, Zhiyuan; Yuan, Hong; Zhang, Xinju; Liu, Weiwei; Xu, Jinhua; Zhang, Wei; Guan, Ming
2011-01-01
JAK2 V617F, a somatic point mutation that leads to constitutive JAK2 phosphorylation and kinase activation, has been incorporated into the WHO classification and diagnostic criteria of myeloid neoplasms. Although various approaches such as restriction fragment length polymorphism, amplification refractory mutation system and real-time PCR have been developed for its detection, a generic, rapid, closed-tube method that can be utilized on routine genetic testing instruments with stability and cost-efficiency has not been described. Asymmetric PCR for detection of JAK2 V617F with a 3'-blocked unlabeled probe, saturating dye and subsequent melting curve analysis was performed on a Rotor-Gene® Q real-time cycler to establish the methodology. We compared this method to the existing amplification refractory mutation systems and direct sequencing. Thereafter, the broad applicability of this unlabeled probe melting method was also validated on three diverse real-time systems (Roche LightCycler® 480, Applied Biosystems ABI® 7500 and Eppendorf Mastercycler® ep realplex) in two different laboratories. The unlabeled probe melting analysis could genotype the JAK2 V617F mutation explicitly, with a detection sensitivity of 3% mutation load. At a level of 5% mutation load, the intra- and inter-assay CVs of the probe-DNA heteroduplex (mutation/wild type) were 3.14%/3.55% and 1.72%/1.29%, respectively. The method could equally discriminate mutant from wild-type samples on the other three real-time instruments. With its high detection sensitivity, unlabeled probe melting curve analysis is better suited to detecting the JAK2 V617F mutation than conventional methodologies. Given the favorable inter- and intra-assay reproducibility, unlabeled probe melting analysis provides a generic mutation-detection alternative for real-time instruments.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits than lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
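One of the baseline coders named above, Golomb-Rice, is simple enough to sketch in full: each non-negative integer is split into a unary-coded quotient and k fixed remainder bits. This is a generic textbook Rice coder, not the Prism CT algorithm:

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers with parameter k:
    unary quotient (value >> k), a 0 terminator, then k remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q)          # unary part
        bits.append(0)                # terminator
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    """Decode `count` integers from a Rice-coded bit list."""
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:         # read unary quotient
            q += 1
            pos += 1
        pos += 1                      # skip terminator
        r = 0
        for _ in range(k):            # read k remainder bits
            r = (r << 1) | bits[pos]
            pos += 1
        values.append((q << k) | r)
    return values
```

Rice coding is effective when the residuals of a predictor are geometrically distributed, which is why it serves as a natural comparison point for raw-projection compression.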
Mjaaland, Trond A; Finset, Arnstein
2009-07-01
There is increasing focus on patient-centred communicative approaches in medical consultations, but few studies have shown the extent to which patients' positive coping strategies and psychological assets are addressed by general practitioners (GPs) on a regular day at the office. This study measures the frequency of GPs' use of questions and comments addressing their patients' coping strategies or resources. Twenty-four GPs were video-recorded in 145 consultations. The consultations were coded using a modified version of the Roter Interaction Analysis System. In this study, we also developed four additional coding categories based on cognitive therapy and solution-focused therapy: attribution, resources, coping, and solution-focused techniques. The reliability between coders was established, a factor analysis was applied to test the relationship between the communication categories, and a tentative validating exercise was performed by reversed coding. Cohen's kappa was 0.52 between coders. Only 2% of the utterances could be categorized as resource- or coping-oriented. Six GPs contributed 59% of these utterances. The factor analysis identified two factors, one task-oriented and one patient-oriented. The frequency of communication about coping and resources was very low. Communication skills training for GPs in this field is required. Further validating studies of this kind of measurement tool are warranted.
Hsu, Kean J.; Babeva, Kalina N.; Feng, Michelle C.; Hummer, Justin F.; Davison, Gerald C.
2014-01-01
Studies have examined the impact of distraction on basic task performance (e.g., working memory, motor responses), yet research is lacking regarding its impact in the domain of think-aloud cognitive assessment, where the threat to assessment validity is high. The Articulated Thoughts in Simulated Situations think-aloud cognitive assessment paradigm was employed to address this issue. Participants listened to scenarios under three conditions (i.e., while answering trivia questions, playing a visual puzzle game, or with no experimental distractor). Their articulated thoughts were then content-analyzed both by the Linguistic Inquiry and Word Count (LIWC) program and by content analysis of emotion and cognitive processes conducted by trained coders. Distraction did not impact indices of emotion but did affect cognitive processes. Specifically, with the LIWC system, the trivia questions distraction condition resulted in significantly higher proportions of insight and causal words, and higher frequencies of non-fluencies (e.g., “uh” or “umm”) and filler words (e.g., “like” or “you know”). Coder-rated content analysis found more disengagement and more misunderstanding particularly in the trivia questions distraction condition. A better understanding of how distraction disrupts the amount and type of cognitive engagement holds important implications for future studies employing cognitive assessment methods. PMID:24904488
Tastan, Sevinc; Linch, Graciele C. F.; Keenan, Gail M.; Stifter, Janet; McKinney, Dawn; Fahey, Linda; Dunn Lopez, Karen; Yao, Yingwei; Wilkie, Diana J.
2014-01-01
Objective: To determine the state of the science for the five standardized nursing terminology sets in terms of level of evidence and study focus. Design: Systematic review. Data sources: Keyword searches of the PubMed, CINAHL, and EMBASE databases from the 1960s to March 19, 2012 revealed 1,257 publications. Review methods: From abstract review we removed duplicate articles, those not in English or with no identifiable standardized nursing terminology, and those with a low level of evidence. From full-text review of the remaining 312 articles, eight trained raters used a coding system to record standardized nursing terminology names, publication year, country, and study focus. Inter-rater reliability confirmed the level of evidence. We analyzed the coded results. Results: On average there were 4 studies per year between 1985 and 1995. The yearly number increased to 14 for the decade 1996-2005, 21 between 2006 and 2010, and 25 in 2011. Investigators conducted the research in 27 countries. By evidence level, of the 312 studies, 72.4% were descriptive, 18.9% were observational, and 8.7% were intervention studies. Of the 312 reports, 72.1% focused on North American Nursing Diagnosis-International, Nursing Interventions Classification, Nursing Outcome Classification, or some combination of those three standardized nursing terminologies; 9.6% on the Omaha System; 7.1% on the International Classification for Nursing Practice; 1.6% on Clinical Care Classification/Home Health Care Classification; 1.6% on the Perioperative Nursing Data Set; and 8.0% on two or more standardized nursing terminology sets. 
There were studies in all 10 focus categories, including those focused on concept analysis/classification infrastructure (n = 43), the identification of the standardized nursing terminology concepts applicable to a health setting from registered nurses' documentation (n = 54), mapping one terminology to another (n = 58), implementation of standardized nursing terminologies into electronic health records (n = 12), and secondary use of electronic health record data (n = 19). Conclusions: Findings reveal that the number of standardized nursing terminology publications increased primarily since 2000, with most focusing on North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification. The majority of the studies were descriptive, qualitative, or correlational designs that provide a strong base for understanding the validity and reliability of the concepts underlying the standardized nursing terminologies. There is evidence supporting the successful integration and use in electronic health records for two standardized nursing terminology sets: (1) the North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification set; and (2) the Omaha System set. Researchers, however, should continue to strengthen standardized nursing terminology study designs to promote continuous improvement of the standardized nursing terminologies and use in clinical practice. PMID:24412062
Crowdsourcing the Measurement of Interstate Conflict
2016-01-01
Much of the data used to measure conflict is extracted from news reports. This is typically accomplished using either expert coders to quantify the relevant information or machine coders to automatically extract data from documents. Although expert coding is costly, it produces quality data. Machine coding is fast and inexpensive, but the data are noisy. To diminish the severity of this tradeoff, we introduce a method for analyzing news documents that uses crowdsourcing, supplemented with computational approaches. The new method is tested on documents about Militarized Interstate Disputes, and its accuracy ranges between about 68 and 76 percent. This is shown to be a considerable improvement over automated coding, and to cost less and be much faster than expert coding. PMID:27310427
Vector excitation speech or audio coder for transmission or storage
NASA Technical Reports Server (NTRS)
Davidson, Grant (Inventor); Gersho, Allen (Inventor)
1989-01-01
A vector excitation coder compresses vectors by using an optimum codebook designed off line, starting from an initial arbitrary codebook and a set of speech training vectors and exploiting codevector sparsity (i.e., by zeroing the lowest-amplitude samples so that only a selected number of samples remain in each of the N codebook vectors). A fast-search method selects a number N_c of good excitation vectors from the codebook, where N_c is much smaller than N. ORIGIN OF INVENTION: The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202), under which the inventors were granted a request to retain title.
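The sparsification step described above, zeroing all but a selected number of the largest-magnitude samples in each codevector, can be sketched as follows. This is an illustrative reading of the patent abstract, not the patented fast-search design itself:

```python
def sparsify_codevector(vector, keep):
    """Zero all but the `keep` largest-magnitude samples of a codevector.
    Sparse codevectors make the excitation search and synthesis cheaper,
    since only `keep` multiplies per codevector are needed."""
    ranked = sorted(range(len(vector)), key=lambda i: abs(vector[i]),
                    reverse=True)
    kept = set(ranked[:keep])
    return [x if i in kept else 0.0 for i, x in enumerate(vector)]
```

Applying this to every vector of an initially dense codebook yields the sparse codebook the coder then searches.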
Alignment of classification paradigms for communication abilities in children with cerebral palsy
Hustad, Katherine C.; Oakes, Ashley; McFadd, Emily; Allison, Kristen M.
2015-01-01
Aim: We examined three communication ability classification paradigms for children with cerebral palsy (CP): the Communication Function Classification System (CFCS), the Viking Speech Scale (VSS), and the Speech Language Profile Groups (SLPG). Questions addressed inter-judge reliability, whether the VSS and the CFCS captured impairments in speech and language, and whether there were differences in speech intelligibility among levels within each classification paradigm. Method: Eighty children (42 males) with a range of types and severity levels of CP participated (mean age 60 months; SD 4.8 months). Two speech-language pathologists classified each child via parent-child interaction samples and previous experience with the children for the CFCS and VSS, and using quantitative speech and language assessment data for the SLPG. Intelligibility scores were obtained using standard clinical intelligibility measurement. Results: Kappa values were .67 (95% CI [.55, .79]) for the CFCS, .82 (95% CI [.72, .92]) for the VSS, and .95 (95% CI [.72, .92]) for the SLPG. Descriptively, reliability within levels of each paradigm varied, with the lowest agreement occurring within the CFCS at levels II (42%), III (40%), and IV (61%). Neither the CFCS nor the VSS was sensitive to language impairments captured by the SLPG. Significant differences in speech intelligibility were found among levels for all classification paradigms. Interpretation: Multiple tools are necessary to understand speech, language, and communication profiles in children with CP. Characterization of abilities at all levels of the ICF will advance our understanding of the ways that speech, language, and communication abilities present in children with CP. PMID:26521844
A European classification of services for long-term care—the EU-project eDESDE-LTC
Weber, Germain; Brehmer, Barbara; Zeilinger, Elisabeth; Salvador-Carulla, Luis
2009-01-01
Purpose and theory: The eDESDE-LTC project aims to develop an operational system for coding, mapping and comparing services for long-term care (LTC) across the EU. The project's strategy is to improve EU listing of and access to relevant sources of healthcare information by developing semantic interoperability in eHealth (coding and listing of services for LTC); to increase access to relevant sources of information on LTC services and improve linkages between national and regional websites; and to foster cooperation with international organizations (OECD). Methods: This operational system will include a standard classification of the main types of care for persons with LTC needs and an instrument for mapping and standard description of services. These instruments are based on previous classification systems for mental health services (ESMS), disability services (DESDE) and ageing services (DESDAE). A Delphi panel of seven partners developed a DESDE-LTC beta version, which was translated into six languages. The feasibility of DESDE-LTC is being tested in six countries using national focal groups. The Delphi panel will then develop the final version, and a webpage, training materials and a training course will be produced. Results and conclusions: The eDESDE-LTC system will be piloted in two EU countries (Spain and Bulgaria). Evaluation will focus primarily on usability and impact analysis. Discussion: The added value of this project relates to EU citizens' right of "having access to high-quality healthcare when and where it is needed". Owing to semantic variability and service complexity, existing national listings of services do not provide an adequate framework for patient mobility.
Alwanni, Hisham; Baslan, Yara; Alnuman, Nasim; Daoud, Mohammad I.
2017-01-01
This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining to perform each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations. PMID:28832513
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
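As a rough illustration of the idea (a sketch, not the article's implementation), modeling each bit position of the fixed-length codewords independently gives a per-bit-plane probability, and the ideal rate of an arithmetic coder driven by that model is the sum of the bit-plane binary entropies. The function name and example symbols below are invented for illustration:

```python
import math

def bitplane_rate(symbols, width):
    """Estimate the ideal arithmetic-coding rate (bits/symbol) when each
    bit position of a fixed-length codeword is modeled independently."""
    n = len(symbols)
    rate = 0.0
    for b in range(width):
        ones = sum((s >> b) & 1 for s in symbols)
        p = ones / n
        # Binary entropy of this bit plane; contributes 0 if the plane is constant.
        if 0 < p < 1:
            rate += -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return rate

# Uniform 2-bit source: both bit planes have p = 0.5, so the rate is 2 bits/symbol.
print(bitplane_rate([0, 1, 2, 3], 2))  # → 2.0
```

A skewed source (e.g. `[0, 0, 0, 1]`) yields a rate below the codeword length, which is the gain the technique exploits.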
Frøen, J Frederik; Pinar, Halit; Flenady, Vicki; Bahrin, Safiah; Charles, Adrian; Chauke, Lawrence; Day, Katie; Duke, Charles W; Facchinetti, Fabio; Fretts, Ruth C; Gardener, Glenn; Gilshenan, Kristen; Gordijn, Sanne J; Gordon, Adrienne; Guyon, Grace; Harrison, Catherine; Koshy, Rachel; Pattinson, Robert C; Petersson, Karin; Russell, Laurie; Saastad, Eli; Smith, Gordon CS; Torabi, Rozbeh
2009-01-01
A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction of conditions relevant only to the neonatal period and the remaining conditions are arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death and two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions so that scenarios of combined conditions and events are captured.
The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions associated with them, and the most common clinical scenarios for future study and comparisons. PMID:19515228
Deplano, Ariane; Schuermans, Annette; Van Eldere, Johan; Witte, Wolfgang; Meugnier, Hèléne; Etienne, Jerome; Grundmann, Hajo; Jonas, Daniel; Noordhoek, Gerda T.; Dijkstra, Jolanda; van Belkum, Alex; van Leeuwen, Willem; Tassios, Panayotis T.; Legakis, Nicholas J.; van der Zee, Anneke; Bergmans, Anneke; Blanc, Dominique S.; Tenover, Fred C.; Cookson, Barry C.; O'Neil, Gael; Struelens, Marc J.
2000-01-01
Rapid and efficient epidemiologic typing systems would be useful to monitor transmission of methicillin-resistant Staphylococcus aureus (MRSA) at both local and interregional levels. To evaluate the intralaboratory performance and interlaboratory reproducibility of three recently developed repeat-element PCR (rep-PCR) methods for the typing of MRSA, 50 MRSA strains characterized by pulsed-field gel electrophoresis (PFGE) (SmaI) analysis and epidemiological data were blindly typed by inter-IS256, 16S-23S ribosomal DNA (rDNA), and MP3 PCR in 12 laboratories in eight countries using standard reagents and protocols. Performance of typing was defined by reproducibility (R), discriminatory power (D), and agreement with PFGE analysis. Interlaboratory reproducibility of pattern and type classification was assessed visually and using gel analysis software. Each typing method showed a different performance level in each center. In the center performing best with each method, inter-IS256 PCR typing achieved R = 100% and D = 100%; 16S-23S rDNA PCR, R = 100% and D = 82%; and MP3 PCR, R = 80% and D = 83%. Concordance between rep-PCR type and PFGE type ranged by center: 70 to 90% for inter-IS256 PCR, 44 to 57% for 16S-23S rDNA PCR, and 53 to 54% for MP3 PCR analysis. In conclusion, the performance of inter-IS256 PCR typing was similar to that of PFGE analysis in some but not all centers, whereas other rep-PCR protocols showed lower discrimination and intralaboratory reproducibility. None of these assays, however, was sufficiently reproducible for interlaboratory exchange of data. PMID:11015358
Toward automated assessment of health Web page quality using the DISCERN instrument.
Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael
2017-05-01
As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. 
Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
An Evolutionary Machine Learning Framework for Big Data Sequence Mining
ERIC Educational Resources Information Center
Kamath, Uday Krishna
2014-01-01
Sequence classification is an important problem in many real-world applications. Unlike other machine learning data, there are no "explicit" features or signals in sequence data that can help traditional machine learning algorithms learn and predict from the data. Sequence data exhibits inter-relationships in the elements that are…
Caffarel, Jennifer; Gibson, G John; Harrison, J Phil; Griffiths, Clive J; Drinnan, Michael J
2006-03-01
We have compared sleep staging by an automated neural network (ANN) system, BioSleep (Oxford BioSignals), and a human scorer using the Rechtschaffen and Kales scoring system. Sleep study recordings from 114 patients with suspected obstructive sleep apnoea syndrome (OSA) were analysed by the ANN and by a blinded human scorer. We also examined human scorer reliability by calculating the agreement between the index scorer and a second independent blinded scorer for 28 of the 114 studies. For each study, we built contingency tables on an epoch-by-epoch (30 s epochs) comparison basis. From these, we derived kappa coefficients for different combinations of sleep stages. The overall agreement of automatic and manual scoring for the 114 studies for the classification {wake / light-sleep / deep-sleep / REM} was poor (median kappa = 0.305) and only a little better (kappa = 0.449) for the crude {wake / sleep} distinction. For the subgroup of 28 randomly selected studies, the overall agreement of automatic and manual scoring was again relatively low (kappa = 0.331 for {wake / light-sleep / deep-sleep / REM} and kappa = 0.505 for {wake / sleep}), whereas inter-scorer reliability was higher (kappa = 0.641 for {wake / light-sleep / deep-sleep / REM} and kappa = 0.737 for {wake / sleep}). We conclude that such an ANN-based analysis system is not sufficiently accurate for sleep study analyses using the R&K classification system.
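For reference, the epoch-by-epoch agreement figures quoted above are Cohen's kappa coefficients. A minimal sketch of computing kappa from a scorer-by-scorer contingency table follows; the counts are hypothetical, not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table
    (rows: scorer 1's labels, columns: scorer 2's labels)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                # observed agreement
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2       # chance agreement
    return (po - pe) / (1 - pe)

# Two scorers labelling 50 epochs as wake/sleep (hypothetical counts):
print(cohens_kappa([[20, 5], [10, 15]]))  # ≈ 0.4
```

Kappa corrects raw percentage agreement for agreement expected by chance, which is why two scorers can agree on 70% of epochs yet achieve only kappa = 0.4.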
Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.
Kim, Eunwoo; Park, HyunWook
2017-02-01
The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
A pore-size classification for peat bogs derived from unsaturated hydraulic properties
NASA Astrophysics Data System (ADS)
Weber, Tobias Karl David; Iden, Sascha Christian; Durner, Wolfgang
2017-12-01
In ombrotrophic peatlands, the moisture content of the acrotelm (vadose zone) controls oxygen diffusion rates, redox state, and the turnover of organic matter. Thus, variably saturated flow processes determine whether peatlands act as sinks or sources of atmospheric carbon, and modelling these processes is crucial to assess effects of changed environmental conditions on the future development of these ecosystems. We show that the Richards equation can be used to accurately describe the moisture dynamics under evaporative conditions in variably saturated peat soil, encompassing the transition from the topmost living moss layer to the decomposed peat as part of the vadose zone. Soil hydraulic properties (SHP) were identified by inverse simulation of evaporation experiments on samples from the entire acrotelm. To obtain consistent descriptions of the observations, the traditional van Genuchten-Mualem model was extended to account for non-capillary water storage and flow. We found that the SHP of the uppermost moss layer reflect a pore-size distribution (PSD) that combines three distinct pore systems of the Sphagnum moss. For deeper samples, acrotelm pedogenesis changes the shape of the SHP due to the collapse of inter-plant pores and an infill with smaller particles. This leads to gradually more homogeneous and bi-modal PSDs with increasing depth, which in turn can serve as a proxy for increasing state of pedogenesis in peatlands. From this, we derive a nomenclature and size classification for the pore spaces of Sphagnum mosses and define inter-, intra-, and inner-plant pore spaces, with effective pore diameters of > 300, 300-30, and 30-10 µm, respectively.
Tissue discrimination in magnetic resonance imaging of the rotator cuff
NASA Astrophysics Data System (ADS)
Meschino, G. J.; Comas, D. S.; González, M. A.; Capiel, C.; Ballarin, V. L.
2016-04-01
Evaluation and diagnosis of diseases of the muscles within the rotator cuff can be done using different modalities, with magnetic resonance imaging being the most widely used. Criteria exist to evaluate the degree of fat infiltration and muscle atrophy, but these have low accuracy and show great inter- and intra-observer variability. In this paper, an analysis of the texture features of the rotator cuff muscles is performed to classify them and other tissues. A general supervised classification approach was used, combining forward-search as the feature selection method with kNN as the classification rule. Sections of magnetic resonance images of the tissues of interest were selected by specialist doctors and were considered as the gold standard. Accuracies obtained were 93% for T1-weighted images and 92% for T2-weighted images. As immediate future work, the combination of both image sequences will be considered, which is expected to improve the results, as will the use of other magnetic resonance sequences. This work represents a starting point for the classification and quantification of the degree of fat infiltration and muscle atrophy, from which an accurate and objective system is expected, with benefits for future research and for patients' health.
AO Distal Radius Fracture Classification: Global Perspective on Observer Agreement.
Jayakumar, Prakash; Teunis, Teun; Giménez, Beatriz Bravo; Verstreken, Frederik; Di Mascio, Livio; Jupiter, Jesse B
2017-02-01
Background The primary objective of this study was to test interobserver reliability when classifying fractures by consensus by AO types and groups among a large international group of surgeons. Secondarily, we assessed the difference in inter- and intraobserver agreement of the AO classification in relation to geographical location, level of training, and subspecialty. Methods A randomized set of radiographic and computed tomographic images from a consecutive series of 96 distal radius fractures (DRFs), treated between October 2010 and April 2013, was classified using an electronic web-based portal by an invited group of participants on two occasions. Results Interobserver reliability was substantial when classifying AO type A fractures but fair and moderate for type B and C fractures, respectively. No difference was observed by location, except for an apparent difference between participants from India and Australia classifying type B fractures. No statistically significant associations were observed comparing interobserver agreement by level of training and no differences were shown comparing subspecialties. Intra-rater reproducibility was "substantial" for fracture types and "fair" for fracture groups with no difference accounting for location, training level, or specialty. Conclusion Improved definition of reliability and reproducibility of this classification may be achieved using large international groups of raters, empowering decision making on which system to utilize. Level of Evidence Level III.
A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity relative to trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
Comparison Of The Performance Of Hybrid Coders Under Different Configurations
NASA Astrophysics Data System (ADS)
Gunasekaran, S.; Raina J., P.
1983-10-01
Picture bandwidth reduction employing DPCM and Orthogonal Transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive by retaining the special advantages of both. In recent times interest has shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders at different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine and Haar transforms have been considered in the present work.
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
Identification and classification of hubs in brain networks.
Sporns, Olaf; Honey, Christopher J; Kötter, Rolf
2007-10-17
Brain regions in the mammalian cerebral cortex are linked by a complex network of fiber bundles. These inter-regional networks have previously been analyzed in terms of their node degree, structural motif, path length and clustering coefficient distributions. In this paper we focus on the identification and classification of hub regions, which are thought to play pivotal roles in the coordination of information flow. We identify hubs and characterize their network contributions by examining motif fingerprints and centrality indices for all regions within the cerebral cortices of both the cat and the macaque. Motif fingerprints capture the statistics of local connection patterns, while measures of centrality identify regions that lie on many of the shortest paths between parts of the network. Within both cat and macaque networks, we find that a combination of degree, motif participation, betweenness centrality and closeness centrality allows for reliable identification of hub regions, many of which have previously been functionally classified as polysensory or multimodal. We then classify hubs as either provincial (intra-cluster) hubs or connector (inter-cluster) hubs, and proceed to show that lesioning hubs of each type from the network produces opposite effects on the small-world index. Our study presents an approach to the identification and classification of putative hub regions in brain networks on the basis of multiple network attributes and charts potential links between the structural embedding of such regions and their functional roles.
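The centrality indices used to flag hub regions can be illustrated with a minimal sketch. The toy adjacency dictionary below is invented, and only closeness centrality (one of the several measures the paper combines) is computed, via breadth-first search:

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality of `node` in an unweighted, undirected graph
    given as an adjacency dict: (n - 1) / sum of shortest-path distances."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(graph) - 1) / total if total else 0.0

# Toy network: 'hub' bridges two otherwise separate pairs of regions,
# so it lies closest, on average, to every other node.
g = {'hub': ['a', 'b', 'c', 'd'],
     'a': ['hub', 'b'], 'b': ['hub', 'a'],
     'c': ['hub', 'd'], 'd': ['hub', 'c']}
ranked = sorted(g, key=lambda n: closeness(g, n), reverse=True)
print(ranked[0])  # → hub
```

In practice hub identification combines several such indices (degree, betweenness, motif participation), since no single measure is decisive on its own.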
American Academy of Home Care Medicine
O'Connor, S; McCaffrey, N; Whyte, E; Moran, K
2016-07-01
To adapt the trunk stability test to facilitate further sub-classification of higher levels of core stability in athletes for use as a screening tool, and to establish the inter-tester and intra-tester reliability of this adapted core stability test. Reliability study. Collegiate athletic therapy facilities. Fifteen physically active male subjects (19.46 ± 0.63 years) free from any orthopaedic or neurological disorders were recruited from a convenience sample of collegiate students. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were computed to establish inter-tester and intra-tester reliability. Excellent ICC values were observed in the adapted core stability test for inter-tester reliability (0.97), and good to excellent intra-tester reliability (0.73-0.90). While the 95% CIs were narrow for inter-tester reliability, those for Testers A and C were widely distributed compared with Tester B. The adapted core stability test developed in this study is a quick and simple field-based test to administer that can further subdivide athletes with high levels of core stability. The test demonstrated high inter-tester and intra-tester reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
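The ICC values above can be illustrated with a minimal sketch of the one-way form, ICC(1,1); the formula choice and the toy ratings below are illustrative assumptions, not the study's data or necessarily its exact ICC model:

```python
def icc_oneway(ratings):
    """One-way ICC(1,1): `ratings` is a list of subjects, each a list of k scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2 for i, r in enumerate(ratings) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three subjects scored twice; scores track subjects closely, so the ICC is high.
print(round(icc_oneway([[1, 2], [3, 4], [5, 6]]), 3))  # → 0.882
```

Unlike kappa, the ICC operates on continuous scores and attributes variance to subjects versus measurement error, which is why it suits reliability studies of graded field tests.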
Chang, Shang-Jen; Yang, Stephen S D
2008-12-01
To evaluate the inter-observer and intra-observer agreement on the interpretation of uroflowmetry curves of children. Healthy kindergarten children were enrolled for evaluation of uroflowmetry. Uroflowmetry curves were classified as bell-shaped, tower, plateau, staccato and interrupted. Only the bell-shaped curves were regarded as normal. Two urodynamists evaluated the curves independently after reviewing the definitions of the different types of uroflowmetry curve. The senior urodynamist evaluated the curves twice 3 months apart. The final conclusion was made when consensus was reached. Agreement among observers was analyzed using kappa statistics. Of 190 uroflowmetry curves eligible for analysis, the intra-observer agreement in interpreting each type of curve and interpreting normalcy vs abnormality was good (kappa=0.71 and 0.68, respectively). Very good inter-observer agreement (kappa=0.81) on normalcy and good inter-observer agreement (kappa=0.73) on types of uroflowmetry were observed. Poor inter-observer agreement existed on the classification of specific types of abnormal uroflowmetry curves (kappa=0.07). Uroflowmetry is a good screening tool for normalcy of kindergarten children, while not a good tool to define the specific types of abnormal uroflowmetry.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
Noiseless coding for the magnetometer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Lee, Jun-Ji
1987-01-01
Future unmanned space missions will continue to seek a full understanding of magnetic fields throughout the solar system. Severely constrained data rates during certain portions of these missions could limit the possible science return. This publication investigates the application of universal noiseless coding techniques to more efficiently represent magnetometer data without any loss in data integrity. Performance results indicated that compression factors of 2:1 to 6:1 can be expected. Feasibility for general deep space application was demonstrated by implementing a microprocessor breadboard coder/decoder using the Intel 8086 processor. The Comet Rendezvous Asteroid Flyby mission will incorporate these techniques in a buffer feedback, rate-controlled configuration. The characteristics of this system are discussed.
A statistical characterization of the Galileo-to-GPS inter-system bias
NASA Astrophysics Data System (ADS)
Gioia, Ciro; Borio, Daniele
2016-11-01
Global navigation satellite systems operate using independent time scales, and thus inter-system time offsets have to be determined to enable multi-constellation navigation solutions. The GPS/Galileo inter-system bias and drift are evaluated here using different types of receivers: two mass-market and two professional receivers. Moreover, three different approaches are considered for inter-system bias determination: in the first, the broadcast Galileo-to-GPS time offset is used to align the GPS and Galileo time scales. In the second, the inter-system bias is included in the multi-constellation navigation solution and is estimated using the available measurements. Finally, an enhanced algorithm using constraints on the inter-system bias time evolution is proposed. The inter-system bias estimates obtained with the different approaches are analysed and their stability is experimentally evaluated using the Allan deviation. The impact of the inter-system bias on the position, velocity and time solution is also considered, and the performance of the approaches analysed is evaluated in terms of standard deviation and mean errors for both horizontal and vertical components. From the experiments, it emerges that the inter-system bias is very stable and that the use of constraints modelling the GPS/Galileo inter-system bias behaviour significantly improves the performance of multi-constellation navigation.
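The Allan deviation used above to characterize bias stability can be sketched in a few lines; the non-overlapping single-interval estimator and the toy bias series below are illustrative assumptions, not the paper's processing chain:

```python
import math

def allan_deviation(y):
    """Non-overlapping Allan deviation of a series of offset samples y,
    at the basic averaging interval (cluster size m = 1)."""
    diffs = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# A purely linear drift gives a constant first difference, so the
# Allan deviation equals |slope| / sqrt(2).
print(allan_deviation([0.0, 1.0, 2.0, 3.0, 4.0]))  # → 0.7071...
```

Evaluating the estimator over increasing cluster sizes produces the familiar Allan-deviation curve, whose slope reveals the dominant noise process in the bias.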
Korczowski, L; Congedo, M; Jutten, C
2015-08-01
The classification of electroencephalographic (EEG) data recorded from multiple users simultaneously is an important challenge in the field of Brain-Computer Interface (BCI). In this paper we compare different approaches for the classification of single-trial event-related potentials (ERPs) from two subjects playing a collaborative BCI game. The minimum distance to mean (MDM) classifier in a Riemannian framework is extended to exploit the diversity of the inter-subject spatio-temporal statistics (MDM-hyper) or to merge multiple classifiers (MDM-multi). We show that both these classifiers significantly outperform the mean performance of the two users and analogous classifiers based on step-wise linear discriminant analysis. More importantly, the MDM-multi outperforms the best player within the pair.
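The MDM decision rule assigns each trial to the class whose mean is nearest. As an illustration of that rule only, here is a minimal Euclidean stand-in in Python; the paper's classifier instead uses Riemannian distances between trial covariance matrices and geometric (Fréchet) class means, which require an iterative matrix computation not shown here:

```python
def mdm_fit(X, y):
    """Compute per-class means of feature vectors (Euclidean stand-in
    for the Riemannian geometric means used in the actual MDM)."""
    means = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def mdm_predict(means, x):
    """Assign x to the class whose mean is nearest (minimum distance to mean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda label: dist2(means[label], x))
```

Extending this to the multi-user setting amounts to choosing what the means are computed over: pooled inter-subject statistics (MDM-hyper) or separate per-subject classifiers whose outputs are merged (MDM-multi).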
Hybridization and classification of the white pines (Pinus section strobus)
William B. Critchfield
1986-01-01
Many North American and Eurasian white pines retain their ability to hybridize even after long isolation, and about half of all white pine hybrids from controlled pollinations are inter-hemisphere crosses. Within the morphologically homogeneous and otherwise highly crossable core group of white pines, an exception in crossing behavior is Pinus lambertiana...
Comparison of English Language Rhythm and Kalhori Kurdish Language Rhythm
ERIC Educational Resources Information Center
Taghva, Nafiseh; Zadeh, Vahideh Abolhasani
2016-01-01
The interval-based method is a method of studying the rhythmic quantitative features of languages. This method uses the Pairwise Variability Index (PVI) to consider the variability of vocalic and inter-vocalic durations in sentences, which leads to a classification of language rhythm into stress-timed and syllable-timed languages. This study…
Li, Hai-juan; Zhao, Xin; Jia, Qing-fei; Li, Tian-lai; Ning, Wei
2012-08-01
The achene morphological and micro-morphological characteristics of six species of the genus Taraxacum from northeastern China, together with SRAP cluster analysis, were examined as evidence for classification. Achenes were observed by light microscopy and EPMA, and cluster analysis was performed on the basis of achene size, shape, cone proportion, color and surface sculpture. Inter-species differences in achene morphology within Taraxacum are obvious, particularly in spinule distribution and size, achene color and achene size. On the basis of the achene-morphology clustering, T. antungense Kitag. and T. urbanum Kitag. should be combined into a single species. The groupings produced by the achene-morphology cluster analysis and by the SRAP molecular-marker cluster analysis are consistent with the classification given in the Flora of China. Because achene morphological characteristics in Taraxacum are stable and conservative, combinations of differences in these characteristics can be used for inter-species delimitation and for analysis of relationships; both the achene-morphology cluster analysis and the SRAP molecular evidence support the Taraxacum classification of the Flora of China.
Chemical-induced disease relation extraction with various linguistic features.
Gu, Jinghang; Qian, Longhua; Zhou, Guodong
2016-01-01
Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up to date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH) controlled vocabulary were also employed during both the training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both the intra- and inter-sentence levels were first constructed as relation instances for training and testing; two classification models at the two levels were then trained from the training examples and applied to the testing examples. Finally, we merged the classification results from the mention level to the document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. © The Author(s) 2016. Published by Oxford University Press.
Kondoh, Shun; Chiba, Hirofumi; Nishikiori, Hirotaka; Umeda, Yasuaki; Kuronuma, Koji; Otsuka, Mitsuo; Yamada, Gen; Ohnishi, Hirofumi; Mori, Mitsuru; Kondoh, Yasuhiro; Taniguchi, Hiroyuki; Homma, Sakae; Takahashi, Hiroki
2016-09-01
The clinical course of idiopathic pulmonary fibrosis (IPF) shows great inter-individual differences. It is important to standardize the severity classification to accurately evaluate each patient's prognosis. In Japan, an original severity classification (the Japanese disease severity classification, JSC) is used. In the United States, a new multidimensional index and staging system (the GAP model) has been proposed. The objective of this study was to evaluate the performance of the JSC and GAP models for the prediction of mortality risk using a large cohort of Japanese patients with IPF. This is a retrospective cohort study including 326 patients with IPF in the Hokkaido prefecture from 2003 to 2007. We obtained the survival curves of each stage of the GAP and JSC models to perform a comparison. For the GAP model, the prognostic value for mortality risk in Japanese patients was also evaluated. In the JSC, patient prognoses were roughly divided into two groups, mild cases (Stages I and II) and severe cases (Stages III and IV). In the GAP model, there was no significant difference in survival between Stages II and III, and the mortality rates of the patients classified into GAP Stages I and II were underestimated. It is difficult to predict an accurate prognosis of IPF using the JSC and GAP models. A re-examination of the variables from the two models is required, as well as an evaluation of their prognostic value, to revise the severity classification for Japanese patients with IPF. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia
2007-01-01
We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of these is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI data collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.
Löwing, Kristina; Arredondo, Ynes C; Tedroff, Marika; Tedroff, Kristina
2015-09-04
A current worldwide common goal is to optimize the health and well-being of children with cerebral palsy (CP). In order to reach that goal, for this heterogeneous group, a common language and classification systems are required to predict development and offer evidence based interventions. In most countries in Africa, South America, Asia and Eastern Europe the classification systems for CP are unfamiliar and rarely used. Education and implementation are required. The specific aims of this study were to examine a model in order to introduce the Gross Motor Function Classification System (GMFCS-E&R) in Venezuela, and to examine the validity and the reliability. Children with CP, registered at a National child rehabilitation centre in Venezuela, were invited to participate. The Spanish version of GMFCS-E&R was used. The Wilson mobility scale was translated and used to examine the concurrent validity. A structured questionnaire, comprising aspects of mobility and gross motor function, was constructed. In addition, each child was filmed. A paediatrician in Venezuela received supervised self-education in GMFCS-E&R and the Wilson mobility scale. A Swedish student was educated in GMFCS-E&R and the Wilson mobility scale prior to visiting Venezuela. In Venezuela, all children were classified and scored by the paediatrician and student independently. An experienced paediatric physiotherapist (PT) in Sweden made independent GMFCS-E&R classifications and Wilson mobility scale scorings, accomplished through merging data from the structured questionnaire with observations of the films. Descriptive statistics were used and reliability was presented with weighted Kappa (Kw). Spearman's correlation coefficient was calculated to explore the concurrent validity between GMFCS-E&R and Wilson mobility scale. Eighty-eight children (56 boys), mean age 10 years (3-18), with CP participated. 
The inter-rater reliability of GMFCS-E&R between; the paediatrician and the PT was Kw = 0.85 (95% CI: 0.75-0.88), the PT and student was Kw = 0.91 (95% CI: 0.86-0.95) and the paediatrician and student was Kw = 0.85 (95 % CI: 0.79-0.90). The correlations between GMFCS-E&R and Wilson mobility scale were high rs =0.94-0.95 (p < 0.001). In a setting with no previous knowledge of GMFCS-E&R, the model with education, supervised self-education and practice was efficient and resulted in very good reliability and validity.
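The weighted kappa (Kw) values reported above penalize disagreements between raters by their distance on the ordinal GMFCS-E&R scale. A minimal sketch of weighted Cohen's kappa follows; the weighting scheme (linear vs. quadratic) is an assumption here, as the abstract does not state which was used:

```python
def weighted_kappa(r1, r2, categories, weight="linear"):
    """Weighted Cohen's kappa for two raters over ordered categories."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Joint distribution of the two raters' classifications.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[index[a]][index[b]] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    def w(i, j):  # disagreement weight grows with ordinal distance
        d = abs(i - j) / (k - 1)
        return d if weight == "linear" else d * d
    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp
```

Perfect agreement gives 1.0; agreement no better than chance gives 0.0, so the Kw ≈ 0.85-0.91 values above indicate very good reliability.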
Understanding the Attitudes of Latino Parents Towards Confidential Health Services for Teens
Tebb, Kathleen; Hernandez, Liz Karime; Shafer, Mary-Ann; Chang, Fay; Otero-Sabogal, Regina
2015-01-01
Objectives: To explore the knowledge and attitudes that Latino parents have about confidential health services for their teens and identify factors that may influence those attitudes. Methods: Latino parents of teens (12-17 years old) were randomly selected from a large health maintenance organization and a community-based hospital to participate in one-hour focus groups. We conducted eight focus groups in the parents' preferred language. Spanish and English transcripts were translated and coded with inter-coder reliability > 80%. Results: There were 52 participants (30 mothers, 22 fathers). There is a wide range of parental knowledge and attitudes about confidential health services for teens. Parents felt they had the right to know about their teens' health but were uncomfortable discussing sexual topics and thought confidential teen-clinician discussions would be helpful. Factors that influence parental acceptability of confidential health services include: parental trust in the clinician, the clinician's interpersonal skills, clinical competencies, ability to partner with parents and teens, and clinician-teen gender concordance. Most parents preferred teens' access to confidential services over having their teens forego needed care. Conclusions: This study identifies several underlying issues that may influence Latino youths' access to confidential health services. Implications for clinical application and future research are discussed. PMID:22626483
The Quality of Written Feedback by Attendings of Internal Medicine Residents.
Jackson, Jeffrey L; Kay, Cynthia; Jackson, Wilkins C; Frank, Michael
2015-07-01
Attending evaluations are commonly used to evaluate residents. We evaluated the quality of written feedback on internal medicine residents in a retrospective study of internal medicine residents and faculty at the Medical College of Wisconsin from 2004 to 2012. From monthly evaluations of residents by attendings, a randomly selected sample of 500 written comments by attendings were qualitatively coded and rated as high-, moderate-, or low-quality feedback by two independent coders with good inter-rater reliability (kappa: 0.94). Small group exercises with residents and attendings also coded the utterances as high, moderate, or low quality and developed criteria for this categorization. In-service examination scores were correlated with written feedback. There were 228 internal medicine residents who had 6,603 evaluations by 334 attendings. Among the 500 randomly selected written comments, there were 2,056 unique utterances: 29% were coded as nonspecific statements, 20% were comments about resident personality, 16% about patient care, 14% interpersonal communication, 7% medical knowledge, 6% professionalism, and 4% each on practice-based learning and systems-based practice. Based on the criteria developed in the group exercises, the majority of written comments were rated as moderate quality (65%); 22% were rated as high quality and 13% as low quality. Attendings who provided high-quality feedback rated residents significantly lower in all six of the Accreditation Council for Graduate Medical Education (ACGME) competencies (p < 0.0005 for all), and had a greater range of scores. Negative comments on medical knowledge were associated with lower in-service examination scores. Most attending written evaluation was of moderate or low quality. Attendings who provided high-quality feedback appeared to be more discriminating, providing significantly lower ratings of residents in all six ACGME core competencies, and across a greater range.
Attendings' negative written comments on medical knowledge correlated with lower in-service training scores.
Zanlungo, Francesco; Yücel, Zeynep; Brščić, Dražen; Kanda, Takayuki; Hagita, Norihiro
2017-01-01
Being determined by human social behaviour, pedestrian group dynamics may depend on "intrinsic properties" such as the purpose of the pedestrians, their personal relation, gender, age, and body size. In this work we investigate the dynamical properties of pedestrian dyads (distance, spatial formation and velocity) by analysing a large data set of automatically tracked pedestrian trajectories in an unconstrained "ecological" setting (a shopping mall), whose apparent physical and social group properties have been analysed by three different human coders. We observed that females walk slower and closer than males, that workers walk faster, at a larger distance and more abreast than leisure oriented people, and that inter-group relation has a strong effect on group structure, with couples walking very close and abreast, colleagues walking at a larger distance, and friends walking more abreast than family members. Pedestrian height (obtained automatically through our tracking system) influences velocity and abreast distance, both growing functions of the average group height. Results regarding pedestrian age show that elderly people walk slowly, while active age adults walk at the maximum velocity. Groups with children have a strong tendency to walk in a non-abreast formation, with a large distance (despite a low abreast distance). A cross-analysis of the interplay between these intrinsic features, taking in account also the effect of an "extrinsic property" such as crowd density, confirms these major results but reveals also a richer structure. An interesting and unexpected result, for example, is that the velocity of groups with children increases with density, at least in the low-medium density range found under normal conditions in shopping malls. Children also appear to behave differently according to the gender of the parent.
Telescope for x ray and gamma ray studies in astrophysics
NASA Technical Reports Server (NTRS)
Weaver, W. D.; Desai, Upendra D.
1993-01-01
Imaging of x-rays has been achieved by various methods in astrophysics, nuclear physics, medicine, and material science. A new method for imaging x-ray and gamma-ray sources avoids the limitations of previously used imaging devices. Images are formed in optical wavelengths by using mirrors or lenses to reflect and refract the incoming photons. High energy x-ray and gamma-ray photons cannot be reflected except at grazing angles and pass through lenses without being refracted. Therefore, different methods must be used to image x-ray and gamma-ray sources. Techniques using total absorption, or shadow casting, can provide images in x-rays and gamma-rays. This new method uses a coder made of a pair of Fresnel zone plates and a detector consisting of a matrix of CsI scintillators and photodiodes. The Fresnel zone plates produce Moire patterns when illuminated by an off-axis source. These Moire patterns are deconvolved using a stepped sine wave fitting or an inverse Fourier transform. This type of coder provides the capability of an instantaneous image with sub-arcminute resolution while using a detector with only a coarse position-sensitivity. A matrix of the CsI/photodiode detector elements provides the necessary coarse position-sensitivity. The CsI/photodiode detector also allows good energy resolution. This imaging system provides advantages over previously used imaging devices in both performance and efficiency.
Harmonising Nursing Terminologies Using a Conceptual Framework.
Jansen, Kay; Kim, Tae Youn; Coenen, Amy; Saba, Virginia; Hardiker, Nicholas
2016-01-01
The International Classification for Nursing Practice (ICNP®) and the Clinical Care Classification (CCC) System are standardised nursing terminologies that identify discrete elements of nursing practice, including nursing diagnoses, interventions, and outcomes. While CCC uses a conceptual framework or model with 21 Care Components to classify these elements, ICNP, built on a formal Web Ontology Language (OWL) description logic foundation, uses a logical hierarchical framework that is useful for computing and maintenance of ICNP. Since the logical framework of ICNP may not always align with the needs of nursing practice, an informal framework may be a more useful organisational tool to represent nursing content. The purpose of this study was to classify ICNP nursing diagnoses using the 21 Care Components of the CCC as a conceptual framework to facilitate usability and inter-operability of nursing diagnoses in electronic health records. Findings resulted in all 521 ICNP diagnoses being assigned to one of the 21 CCC Care Components. Further research is needed to validate the resulting product of this study with practitioners and develop recommendations for improvement of both terminologies.
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; Ronnett, Brigitte M.; Vang, Russell; Seidman, Jeffrey D.
2015-03-01
Studies have shown that different cell types of ovarian carcinoma have different molecular profiles, exhibit different behavior, and that patients could benefit from type-specific treatment. Different cell types display different histopathology features, and different criteria are used for each cell type classification. Inter-observer variability for the task of classifying ovarian cancer cell types is an under-examined area of research. This study served as a pilot study to quantify observer variability related to the classification of ovarian cancer cell types and to extract valuable data for designing a validation study of digital pathology (DP) for this task. Three observers with expertise in gynecologic pathology reviewed 114 cases of ovarian cancer with optical microscopy, with specific guidelines for classification into distinct cell types. For 93 cases all 3 pathologists agreed on the same cell type, for 18 cases 2 out of 3 agreed, and for 3 cases there was no agreement. Across cell types with a minimum sample size of 10 cases, agreement between all three observers was {91.1%, 80.0%, 90.0%, 78.6%, 100.0%, 61.5%} for the high grade serous carcinoma, low grade serous carcinoma, endometrioid, mucinous, clear cell, and carcinosarcoma cell types respectively. These results indicate that unanimous agreement varied over a fairly wide range. However, additional research is needed to determine the importance of these differences in comparison studies. These results will be used to aid in the design and sizing of such a study comparing optical and digital pathology. In addition, the results will help in understanding the potential role computer-aided diagnosis has in helping to improve the agreement of pathologists for this task.
A Quantitative Analysis of Pulsed Signals Emitted by Wild Bottlenose Dolphins.
Luís, Ana Rita; Couchinho, Miguel N; Dos Santos, Manuel E
2016-01-01
Common bottlenose dolphins (Tursiops truncatus) produce a wide variety of vocal emissions for communication and echolocation, of which the pulsed repertoire has been the most difficult to categorize. Packets of high-repetition, broadband pulses are still largely reported under the general designation of burst-pulses, and traditional attempts to classify these emissions rely mainly on their aural characteristics and on graphical aspects of spectrograms. Here, we present a quantitative analysis of pulsed signals emitted by wild bottlenose dolphins in the Sado estuary, Portugal (2011-2014), and test the reliability of a traditional classification approach. Acoustic parameters (minimum frequency, maximum frequency, peak frequency, duration, repetition rate and inter-click interval) were extracted from 930 pulsed signals, previously categorized using a traditional approach. Discriminant function analysis revealed a high reliability of the traditional classification approach (93.5% of pulsed signals were consistently assigned to their aurally based categories). According to the discriminant function analysis (Wilks' Λ = 0.11, F(3, 2.41) = 282.75, P < 0.001), repetition rate is the feature that best enables the discrimination of different pulsed signals (structure coefficient = 0.98). Classification using hierarchical cluster analysis led to a similar categorization pattern: two main signal types with distinct magnitudes of repetition rate were clustered into five groups. The pulsed signals described here present significant differences in their time-frequency features, especially repetition rate (P < 0.001), inter-click interval (P < 0.001) and duration (P < 0.001). We document the occurrence of a distinct signal type, short burst-pulses, and highlight the existence of a diverse repertoire of pulsed vocalizations emitted in graded sequences.
The use of quantitative analysis of pulsed signals is essential to improve classifications and to better assess the contexts of emission, geographic variation and the functional significance of pulsed signals.
Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF
Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto
2015-01-01
One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance for pedestrian classification on the far-infrared (FIR) spectrum than on the visible one, even in daytime conditions. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions of interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as light regions. ROI are detected with a high recall rate using a hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification using an SVM; this module enhances precision at low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost-cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on the FIR images. PMID:25871724
Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.
Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita
2018-03-01
Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of the neural network results in a higher average classification accuracy of 92%. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
NASA Astrophysics Data System (ADS)
Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay
2004-10-01
A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference blocks in the frame buffer, is proposed and analyzed for error resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
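The MHMCP prediction step itself is a pixel-wise weighted superposition of reference blocks. A minimal sketch follows, with hypothesis coefficients assumed to sum to one (an unbiased predictor); how the coefficients are chosen is exactly the design question the paper analyzes:

```python
def mhmcp_predict(ref_blocks, coeffs):
    """Multi-hypothesis prediction: pixel-wise weighted superposition
    of several reference blocks from the frame buffer."""
    assert abs(sum(coeffs) - 1.0) < 1e-9, "coefficients should sum to 1"
    rows, cols = len(ref_blocks[0]), len(ref_blocks[0][0])
    pred = [[0.0] * cols for _ in range(rows)]
    for block, c in zip(ref_blocks, coeffs):
        for i in range(rows):
            for j in range(cols):
                pred[i][j] += c * block[i][j]
    return pred
```

The error-resilience mechanism is visible in this form: a transmission error corrupting one reference block is attenuated by its coefficient in the superposition rather than propagating at full strength, which is why MHMCP suppresses short-term error propagation more effectively than intra refreshing alone.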
Haliasos, N; Rezajooi, K; O'Neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar
2010-04-01
Clinical coding is the translation of documented clinical activities during an admission into a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases the initial HRG changed (10.4%). Financially, this translated to £111 of lost revenue per patient episode, projecting to £171,452 of annual loss to the department. 85% of all coding errors were due to an accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous and, with coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
Lucyk, Kelsey; Tang, Karen; Quan, Hude
2017-11-22
Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and possibility for longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality, and that chart information is accurately coded to meet this end. Our objective is to explore the potential barriers that exist to high-quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high-quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high-quality coding in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.
Polytopic vector analysis in igneous petrology: Application to lunar petrogenesis
NASA Technical Reports Server (NTRS)
Shervais, John W.; Ehrlich, R.
1993-01-01
Lunar samples represent a heterogeneous assemblage of rocks with complex inter-relationships that are difficult to decipher using standard petrogenetic approaches. These inter-relationships reflect several distinct petrogenetic trends as well as thermomechanical mixing of distinct components. Additional complications arise from the unequal quality of chemical analyses and from the fact that many samples (e.g., breccia clasts) are too small to be representative of the system from which they derived. Polytopic vector analysis (PVA) is a multivariate procedure used as a tool for exploratory data analysis. PVA allows the analyst to classify samples and clarifies relationships among heterogeneous samples with complex petrogenetic histories. It differs from orthogonal factor analysis in that it uses non-orthogonal multivariate sample vectors to extract sample endmember compositions. The output from a Q-mode (sample-based) factor analysis is the initial step in PVA. The Q-mode analysis, using criteria established by Miesch and Klovan and Miesch, is used to determine the number of endmembers in the data system. The second step involves determination of endmembers and mixing proportions, with all output expressed in the same geochemical variables as the input. The composition of endmembers is derived by analysis of the variability of the data set. Endmembers need not be present in the data set, nor is it necessary for their composition to be known a priori. Any set of endmembers defines a 'polytope' or classification figure (a triangle for a three-component system, a tetrahedron for a four-component system, a 'five-tope' in four dimensions for a five-component system, et cetera).
Bayesian decision support for coding occupational injury data.
Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R
2016-06-01
Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them with the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes in SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data are useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
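The core of such a decision-support loop can be sketched with a single-word Naïve Bayes model that returns the top-k code candidates for an injury narrative. This is a hedged sketch of the general technique, not the authors' SOII system: the training narratives, the injury-code labels, and the Laplace smoothing scheme below are invented for illustration.

```python
# Single-word Naive Bayes with top-k output: strong top-1 predictions could be
# autocoded, weak ones flagged for manual review with the top-k choices shown.
import math
from collections import Counter, defaultdict

def train(narratives, labels):
    prior = Counter(labels)                 # class frequencies
    word_counts = defaultdict(Counter)      # per-class word frequencies
    vocab = set()
    for text, y in zip(narratives, labels):
        for w in text.lower().split():
            word_counts[y][w] += 1
            vocab.add(w)
    return prior, word_counts, vocab

def top_k(model, text, k=2):
    prior, word_counts, vocab = model
    total = sum(prior.values())
    scores = {}
    for y in prior:
        s = math.log(prior[y] / total)
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in text.lower().split():
            s += math.log((word_counts[y][w] + 1) / denom)  # Laplace smoothing
        scores[y] = s
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

model = train(["fell from ladder", "cut by knife", "fell on stairs"],
              ["fall", "cut", "fall"])
print(top_k(model, "worker fell from stairs", k=2))  # ['fall', 'cut']
```

A deployed system would add a prediction-strength threshold on the score gap between the top two candidates to decide which cases to autocode and which to route to a human coder.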
Iglesias-Parra, Maria Rosa; García-Guerrero, Alfonso; García-Mayor, Silvia; Kaknani-Uttumchandani, Shakira; León-Campos, Álvaro; Morales-Asencio, José Miguel
2015-07-01
To develop an evaluation system of clinical competencies for the practicum of nursing students based on the Nursing Interventions Classification (NIC). Psychometric validation study: the first two phases addressed definition and content validation, and the third phase consisted of a cross-sectional study analyzing reliability. The study population comprised undergraduate nursing students and clinical tutors. Through the Delphi technique, 26 competencies and 91 interventions were isolated. Cronbach's α was 0.96. Factor analysis yielded 18 factors that explained 68.82% of the variance. Overall inter-item correlation was 0.26, and item-total correlation ranged between 0.66 and 0.19. A competency system for the nursing practicum, structured on the NIC, is a reliable method for assessing and evaluating clinical competencies. Further evaluations in other contexts are needed. The availability of standardized language systems in the nursing discipline provides an ideal framework for developing nursing curricula. © 2015 Sigma Theta Tau International.
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center.
We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p = 0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group’s individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2017-09-01
Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center.
We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p=0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p=0.29), as was the difference between the Combination_XA and Exploitation methods (p=0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.
Modernism in Belgrade: Classification of Modernist Housing Buildings 1919-1980
NASA Astrophysics Data System (ADS)
Dragutinovic, Anica; Pottgiesser, Uta; De Vos, Els; Melenhorst, Michel
2017-10-01
Yugoslavian Modernist Architecture, although part of a larger cultural phenomenon, has received hardly any international attention, since there are only a few internationally published studies about it. Nevertheless, the Modernist Architecture of Inter-war Yugoslavia (Kingdom of Yugoslavia), and especially the Modernist Architecture of Post-war Yugoslavia (Socialist Federal Republic of Yugoslavia under the “reign” of Tito), represents the most important architectural heritage of the 20th century in the former Yugoslavian countries. Belgrade, as the capital city of both newly founded Yugoslavia(s), experienced an immediate economic, political and cultural expansion after both wars, as well as a large population increase. The construction of sufficient and appropriate new housing was a major undertaking in both periods (1919-1940 and 1948-1980), however conceived and realized with deeply diverging views. The transition from villas and modest apartment buildings, the main housing typologies of the Inter-war period, to the mass housing of the Post-war period was not only a result of the different socio-political contexts of the two Yugoslavia(s), but also of the country’s industrialization, modernization and technological development. Through the classification of Modernist housing buildings in Belgrade, this paper investigates the relations between the transformations of the main housing typologies executed under different socio-political contexts on the one side, and the development of building technologies, construction systems and materials applied to those buildings on the other. The paper sheds light on Yugoslavian Modernist Architecture in order to increase international awareness of its architectural and heritage values. The aim is an integrated re-evaluation of the buildings, a presentation of their current condition, and their potential for future (re)use, with a specific focus on building envelopes and construction.
ARG-based genome-wide analysis of cacao cultivars.
Utro, Filippo; Cornejo, Omar Eduardo; Livingstone, Donald; Motamayor, Juan Carlos; Parida, Laxmi
2012-01-01
The ancestral recombination graph (ARG) is a topological structure that captures the relationship between extant genomic sequences in terms of genetic events, including recombinations. IRiS is a system that estimates the ARG on sequences of individuals, at genomic scales, capturing the relationship between these individuals of the species. Recently, this system was used to estimate the ARG of the recombining X chromosome of a collection of human populations using relatively dense, bi-allelic SNP data. While the ARG is a natural model for capturing the inter-relationship between a single chromosome of the individuals of a species, it is not immediately apparent how the model can utilize whole-genome (across chromosomes) diploid data. Also, the sheer complexity of an ARG structure presents a challenge to graph visualization techniques. In this paper we examine ARG reconstruction for (1) genome-wide or multiple chromosomes, (2) multi-allelic and (3) extremely sparse data. To aid in the visualization of the results of the reconstructed ARG, we additionally construct a much simplified topology, a classification tree, suggested by the ARG. As the test case, we study the problem of extracting the relationship between populations of Theobroma cacao. The chocolate tree is an outcrossing species in the wild, due to self-incompatibility mechanisms at play. Thus a principled approach to understanding the inter-relationships between the different populations must take the shuffling of genomic segments into account. The polymorphisms in the test data are short tandem repeats (STRs) and are multi-allelic (sometimes with as many as 30 distinct possible values at a locus). Each is at a genomic location that is bilaterally transmitted, hence the ARG is a natural model for these data.
Another characteristic of this plant data set is that while it is genome-wide, across 10 linkage groups or chromosomes, it is very sparse, i.e., only 96 loci from a genome of approximately 400 megabases. The results are visualized both as MDS plots and as classification trees. To evaluate the accuracy of the ARG approach, we compare the results with those available in the literature. We have extended the ARG model to incorporate genome-wide (ensemble of multiple chromosomes) data in a natural way. We present a simple scheme to implement this in practice. Finally, this is the first time that a plant population data set has been studied by estimating its underlying ARG. We demonstrate an overall precision of 0.92 and an overall recall of 0.93 for the ARG-based classification, with respect to the gold standard. While we have corroborated the classification of the samples with that in the literature, this opens the door to other potential studies that can be made on the ARG.
ARG-based genome-wide analysis of cacao cultivars
2012-01-01
Background The ancestral recombination graph (ARG) is a topological structure that captures the relationship between extant genomic sequences in terms of genetic events, including recombinations. IRiS is a system that estimates the ARG on sequences of individuals, at genomic scales, capturing the relationship between these individuals of the species. Recently, this system was used to estimate the ARG of the recombining X chromosome of a collection of human populations using relatively dense, bi-allelic SNP data. Results While the ARG is a natural model for capturing the inter-relationship between a single chromosome of the individuals of a species, it is not immediately apparent how the model can utilize whole-genome (across chromosomes) diploid data. Also, the sheer complexity of an ARG structure presents a challenge to graph visualization techniques. In this paper we examine ARG reconstruction for (1) genome-wide or multiple chromosomes, (2) multi-allelic and (3) extremely sparse data. To aid in the visualization of the results of the reconstructed ARG, we additionally construct a much simplified topology, a classification tree, suggested by the ARG. As the test case, we study the problem of extracting the relationship between populations of Theobroma cacao. The chocolate tree is an outcrossing species in the wild, due to self-incompatibility mechanisms at play. Thus a principled approach to understanding the inter-relationships between the different populations must take the shuffling of genomic segments into account. The polymorphisms in the test data are short tandem repeats (STRs) and are multi-allelic (sometimes with as many as 30 distinct possible values at a locus). Each is at a genomic location that is bilaterally transmitted, hence the ARG is a natural model for these data.
Another characteristic of this plant data set is that while it is genome-wide, across 10 linkage groups or chromosomes, it is very sparse, i.e., only 96 loci from a genome of approximately 400 megabases. The results are visualized both as MDS plots and as classification trees. To evaluate the accuracy of the ARG approach, we compare the results with those available in the literature. Conclusions We have extended the ARG model to incorporate genome-wide (ensemble of multiple chromosomes) data in a natural way. We present a simple scheme to implement this in practice. Finally, this is the first time that a plant population data set has been studied by estimating its underlying ARG. We demonstrate an overall precision of 0.92 and an overall recall of 0.93 for the ARG-based classification, with respect to the gold standard. While we have corroborated the classification of the samples with that in the literature, this opens the door to other potential studies that can be made on the ARG. PMID:23281769
During the last decade, a number of initiatives have been undertaken to create systematic national and global data sets of processed satellite imagery. An important application of these data is the derivation of large area (i.e. multi-scene) land cover products. Such products, ho...
Epileptic seizure detection in EEG signal with GModPCA and support vector machine.
Jaiswal, Abeg Kumar; Banka, Haider
2017-01-01
Epilepsy is one of the most common neurological disorders, caused by recurrent seizures. Electroencephalograms (EEGs) record neural activity and can detect epilepsy. Visual inspection of an EEG signal for epileptic seizure detection is a time-consuming process and may lead to human error; therefore, a number of automated seizure detection frameworks have recently been proposed to replace these traditional methods. Feature extraction and classification are two important steps in these procedures. Feature extraction focuses on finding the informative features that could be used for classification and correct decision-making. Therefore, proposing effective feature extraction techniques for seizure detection is of great significance. Principal Component Analysis (PCA) is a dimensionality reduction technique used in different fields of pattern recognition, including EEG signal classification. Global modular PCA (GModPCA) is a variation of PCA. In this paper, an effective framework with GModPCA and a Support Vector Machine (SVM) is presented for epileptic seizure detection in EEG signals. The feature extraction is performed with GModPCA, whereas an SVM trained with a radial basis function kernel performs the classification between seizure and nonseizure EEG signals. Seven different experimental cases were conducted on the benchmark epilepsy EEG dataset. The system performance was evaluated using 10-fold cross-validation. In addition, we prove analytically that GModPCA has lower time and space complexity than PCA. The experimental results show that EEG signals have strong inter-sub-pattern correlations. GModPCA and SVM were able to achieve 100% accuracy for the classification between normal and epileptic signals. Along with this, seven different experimental cases were tested. The classification results of the proposed approach were better than those of some existing methods proposed in the literature.
It is also found that the time and space complexities of GModPCA are lower than those of PCA. This study suggests that GModPCA and SVM could be used for automated epileptic seizure detection in EEG signals.
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years, the scientific community has been concerned with increasing the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises further challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
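The map/reduce pattern underlying such tools can be illustrated without a Hadoop cluster: the map step classifies each data split independently, and the reduce step merges the partial results. The toy threshold "classifier", the class names, and the data below are invented; the real ICP package distributes WEKA classifiers over Hadoop MapReduce.

```python
# Minimal illustration of the MapReduce pattern for supervised classification:
# map labels each split independently, reduce concatenates the partial results.
from functools import reduce

def classify_split(split):
    # map step: label every sample in one data split (toy one-feature rule)
    return [("vegetation" if x > 0.5 else "soil") for x in split]

def merge(a, b):
    # reduce step: combine partial label lists from two splits
    return a + b

splits = [[0.9, 0.2], [0.7], [0.1, 0.6]]
labels = reduce(merge, map(classify_split, splits))
print(labels)  # ['vegetation', 'soil', 'vegetation', 'soil', 'vegetation']
```

Because each split is classified independently, the map step parallelizes trivially across cluster nodes, which is the property Hadoop exploits for big-data workloads.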
Classification of multiple sclerosis lesions using adaptive dictionary learning.
Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian
2015-12-01
This paper presents a sparse representation and adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volumes of MR data. This, in addition to the high intra- and inter-observer variability, necessitates automated MS lesion classification methods. Among the many image representation models and classification methods that can be used for this purpose, we investigate the use of sparse modeling. In recent years, sparse representation has evolved as a tool for modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks, including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and to individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionary learned for each class plays a major role in data representation, but it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
NASA Technical Reports Server (NTRS)
Birch, J. N.; French, R. H.
1972-01-01
An investigation was made to define experiments for collection of RFI and multipath data for application to a synchronous relay satellite/low orbiting satellite configuration. A survey of analytical models of the multipath signal was conducted. Data has been gathered concerning the existing RFI and other noise sources in various bands at VHF and UHF. Additionally, designs are presented for equipment to combat the effects of RFI and multipath: an adaptive delta mod voice system, a forward error control coder/decoder, a PN transmission system, and a wideband FM system. The performance of these systems was then evaluated. Techniques are discussed for measuring multipath and RFI. Finally, recommended data collection experiments are presented. An extensive tabulation is included of theoretical predictions of the amount of signal reflected from a rough, spherical earth.
Czodrowski, Paul
2014-11-01
In the 1960s, the kappa statistic was introduced to estimate chance agreement in inter- and intra-rater reliability studies. Kappa was widely adopted in the medical field, where it was successfully applied to analyze diagnoses of identical patient groups. Kappa is well suited for classification tasks where ranking is not considered. Its main advantages are its simplicity and its general applicability to multi-class problems, a major difference from the area under the receiver operating characteristic curve. In this manuscript, I outline the use of kappa for classification tasks, and I evaluate its role and uses specifically in machine learning and cheminformatics.
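For reference, Cohen's kappa for two raters follows directly from the observed agreement and the chance agreement implied by each rater's marginal label frequencies. The labels below are arbitrary illustrative codes, not data from the manuscript:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters assigning nominal labels to the same units."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                         # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)

coder1 = [0, 1, 1, 0, 2, 2, 1, 0]
coder2 = [0, 1, 0, 0, 2, 1, 1, 0]
print(round(cohens_kappa(coder1, coder2), 3))  # -> 0.61
```

Here the raters agree on 6 of 8 units (po = 0.75) but chance alone would yield pe = 0.359, so kappa credits only the agreement beyond chance.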
21 CFR 866.5890 - Inter-alpha trypsin inhibitor immunological test system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES IMMUNOLOGY AND MICROBIOLOGY DEVICES Immunological Test Systems § 866.5890 Inter-alpha trypsin inhibitor immunological test system. (a) Identification. An inter...
Vemulakonda, V M; Wilcox, D T; Torok, M R; Hou, A; Campbell, J B; Kempe, A
2015-09-01
The most common measurements of hydronephrosis are the anterior-posterior (AP) diameter and the Society for Fetal Urology (SFU) grading system. To date, the inter-rater reliability (IRR) of these measures has not been compared in the postnatal period. The objectives of this study were to compare the IRR of the AP diameter and the SFU grading system in infants and to determine whether ultrasound findings other than pelvicalyceal dilation are associated with higher SFU grades. Initial postnatal ultrasounds of infants seen from February 1, 2011, to January 31, 2012, with a primary diagnosis of congenital hydronephrosis were included for review. Ultrasound images were de-identified and reviewed by four pediatric urologists. IRR was calculated using the intraclass correlation (ICC) measure. A paired t test was used to compare ICCs. Associations between SFU grade and other ultrasound findings were tested using Chi-square or Fisher's exact tests. A total of 112 kidneys in 56 patients were reviewed. IRR of the SFU grading system was high (right kidney ICC = 0.83, left kidney ICC = 0.85); however, IRR of AP diameter measurement was higher (right kidney ICC = 0.97, left kidney ICC = 0.98; p < 0.001). Renal asymmetry (p < 0.001), echogenicity (p < 0.001), and parenchymal thinning (p < 0.001) were significantly associated with SFU grade 4 hydronephrosis on bivariable and multivariable analysis. The SFU grading system is associated with excellent IRR, although the AP diameter appears to have higher IRR. Physicians may consider ultrasound findings that are not explicitly included in the SFU system when assigning hydronephrosis grade, which may lead to variability in use of this classification system.
Personal Network Recovery Enablers and Relapse Risks for Women With Substance Dependence
Brown, Suzanne; Tracy, Elizabeth M.; Jun, MinKyoung; Park, Hyunyong; Min, Meeyoung O.
2015-01-01
We examined the experiences of women in treatment for substance dependence and their treatment providers about personal networks and recovery. We conducted six focus groups at three women’s intensive substance abuse treatment programs. Four coders used thematic analysis to guide the data coding and an iterative process to identify major themes. Coders identified social network characteristics that enabled and impeded recovery and a reciprocal relationship between internal states, relationship management, and recovery. Although women described adding individuals to their networks, they also described managing existing relationships through distancing from or isolating some members to diminish their negative impact on recovery. Treatment providers identified similar themes but focused more on contextual barriers than the women. The focus of interventions with this population should be on both internal barriers to personal network change such as mistrust and fear, and helping women develop skills for managing enduring network relationships. PMID:25231945
Evaluation of a "CMOS" Imager for Shadow Mask Hard X-ray Telescope
NASA Technical Reports Server (NTRS)
Desai, Upendra D.; Orwig, Larry E.; Oergerle, William R. (Technical Monitor)
2002-01-01
We have developed a hard x-ray coder that provides high angular resolution imaging capability using a coarse position-sensitive image plane detector. The coder consists of two Fresnel zone plates (FZPs). The two FZPs generate Moire fringe patterns whose frequency and orientation define the arrival direction of a beam with respect to the telescope axis. The image plane detector needs to resolve the Moire fringe pattern; pixilated detectors can be used for this purpose. The recently available "CMOS" imager could provide a very low power, large area image plane detector for hard x-rays. We have looked into a unit made by Rad-Icon Imaging Corp. The Shadow-Box 1024 x-ray camera is a high resolution 1024x1024 pixel detector with a 50x50 mm area. It is a very low power, stand-alone camera. We present some preliminary results of our evaluation of this camera.
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
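The idea of fitting the scan order to a prediction mode's residual statistics can be illustrated with a toy sketch (this is not the paper's MD-template design): given sample residual blocks for a hypothetical mode whose energy sits in the first column, a scan order is derived by sorting positions by average coefficient magnitude, so likely-nonzero coefficients are emitted first and the trailing run of zeros grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical 4x4 residual blocks for one intra mode: energy concentrated in
# the first column (e.g. residue left over after a horizontal-style prediction)
blocks = np.zeros((500, 4, 4))
blocks[:, :, 0] = rng.standard_normal((500, 4))
blocks += 0.01 * rng.standard_normal(blocks.shape)

# derive a scan order for this mode: visit positions in decreasing average
# coefficient magnitude, so likely-nonzero coefficients come first
mean_mag = np.abs(blocks).mean(axis=0).ravel()
scan_order = np.argsort(-mean_mag)

def scan(block, order):
    # serialize a block's coefficients in the learned order
    return block.ravel()[order]

new_block = np.zeros((4, 4))
new_block[:, 0] = [3.0, -2.0, 1.0, 4.0]
seq = scan(new_block, scan_order)  # first-column coefficients lead the sequence
```

Concentrating nonzeros at the head of the scanned sequence is what lets a downstream entropy coder exploit long zero runs.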
Predicting phonetic transcription agreement: Insights from research in infant vocalizations
RAMSDELL, HEATHER L.; OLLER, D. KIMBROUGH; ETHINGTON, CORINNA A.
2010-01-01
The purpose of this study is to provide new perspectives on correlates of phonetic transcription agreement. Our research focuses on phonetic transcription and coding of infant vocalizations. The findings are presumed to be broadly applicable to other difficult cases of transcription, such as found in severe disorders of speech, which similarly result in low reliability for a variety of reasons. We evaluated the predictiveness of two factors not previously documented in the literature as influencing transcription agreement: canonicity and coder confidence. Transcribers coded samples of infant vocalizations, judging both canonicity and confidence. Correlation results showed that canonicity and confidence were strongly related to agreement levels, and regression results showed that canonicity and confidence both contributed significantly to explanation of variance. Specifically, the results suggest that canonicity plays a major role in transcription agreement when utterances involve supraglottal articulation, with coder confidence offering additional power in predicting transcription agreement. PMID:17882695
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Hasan, Mehedi; Kotov, Alexander; Carcone, April; Dong, Ming; Naar, Sylvie; Hartlieb, Kathryn Brogan
2016-08-01
This study examines the effectiveness of state-of-the-art supervised machine learning methods, in conjunction with different feature types, for the task of automatic annotation of fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, which were manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN), in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to that of SVM. When only lexical features were used, interview transcripts were automatically annotated by SVM with the highest classification accuracy among all classifiers: 70.8%, 61% and 53.7% based on the codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones improved the accuracy of SVM for annotation of utterances in motivational interview transcripts with a codebook consisting of 17 classes to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of using machine learning methods in conjunction with lexical, semantic and contextual features for automatic annotation of clinical interview transcripts with near-human accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
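A minimal sketch of combining lexical (bag-of-words) and contextual (previous-utterance label) features follows. The utterances and codes are hypothetical, not the study's codebook, and a nearest-centroid classifier stands in for the SVM to keep the example dependency-free:

```python
import numpy as np

# toy annotated utterances: (text, previous_label, label)
data = [
    ("i want to eat better", "NONE", "CHANGE_TALK"),
    ("i can cut out soda", "CHANGE_TALK", "CHANGE_TALK"),
    ("i don't think i can do it", "CHANGE_TALK", "SUSTAIN_TALK"),
    ("it's too hard to exercise", "SUSTAIN_TALK", "SUSTAIN_TALK"),
    ("i really want this to work", "SUSTAIN_TALK", "CHANGE_TALK"),
    ("nothing ever changes for me", "NONE", "SUSTAIN_TALK"),
]

vocab = sorted({w for text, _, _ in data for w in text.split()})
labels = sorted({lab for *_, lab in data})

def featurize(text, prev):
    # lexical: bag of words; contextual: one-hot of the previous utterance's label
    lex = np.array([text.split().count(w) for w in vocab], float)
    ctx = np.array([prev == l for l in ["NONE"] + labels], float)
    return np.concatenate([lex, ctx])

X = np.stack([featurize(t, p) for t, p, _ in data])
y = np.array([labels.index(lab) for *_, lab in data])

# nearest-centroid classifier as a stand-in for the study's SVM
centroids = np.stack([X[y == k].mean(axis=0) for k in range(len(labels))])

def predict(text, prev):
    f = featurize(text, prev)
    return labels[int(np.argmin(np.linalg.norm(centroids - f, axis=1)))]
```

The contextual feature simply widens the feature vector, which is how the study's "label of the previous utterance" feature can be fed to any vector-space classifier.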
2012-01-01
Background Automated classification of histopathology involves identification of multiple classes, including benign, cancerous, and confounder categories. The confounder tissue classes can often mimic and share attributes with both the diseased and normal tissue classes, and can be particularly difficult to identify, both manually and by automated classifiers. In the case of prostate cancer, there may be several confounding tissue types present in a biopsy sample, posing as major sources of diagnostic error for pathologists. Two common multi-class approaches are one-shot classification (OSC), where all classes are identified simultaneously, and one-versus-all (OVA), where a “target” class is distinguished from all “non-target” classes. OSC is typically unable to handle discrimination of classes of varying similarity (e.g. with images of prostate atrophy and high grade cancer), while OVA forces several heterogeneous classes into a single “non-target” class. In this work, we present a cascaded (CAS) approach to classifying prostate biopsy tissue samples, where images from different classes are grouped to maximize intra-group homogeneity while maximizing inter-group heterogeneity. Results We apply the CAS approach to categorize 2000 tissue samples taken from 214 patient studies into seven classes: epithelium, stroma, atrophy, prostatic intraepithelial neoplasia (PIN), and prostate cancer Gleason grades 3, 4, and 5. A series of increasingly granular binary classifiers are used to split the different tissue classes until the images have been categorized into a single unique class. Our automatically-extracted image feature set includes architectural features based on location of the nuclei within the tissue sample as well as texture features extracted on a per-pixel level. The CAS strategy yields a positive predictive value (PPV) of 0.86 in classifying the 2000 tissue images into one of 7 classes, compared with the OVA (0.77 PPV) and OSC approaches (0.76 PPV).
Conclusions Use of the CAS strategy increases the PPV for a multi-category classification system over two common alternative strategies. In classification problems such as histopathology, where multiple class groups exist with varying degrees of heterogeneity, the CAS system can intelligently assign class labels to objects by performing multiple binary classifications according to domain knowledge. PMID:23110677
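The cascade idea (peeling off one homogeneous group per binary split before discriminating the rest) can be sketched with a toy one-dimensional feature. The class names, thresholds, and "texture score" below are illustrative assumptions, not the paper's actual features or decision boundaries:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical 1-D "texture score" per tissue class, increasingly cancer-like
means = {"stroma": 0.0, "atrophy": 2.0, "grade3": 4.0, "grade5": 6.0}
X = {c: mu + 0.3 * rng.standard_normal(200) for c, mu in means.items()}

# cascade: each stage is a binary split that peels off one homogeneous group,
# mirroring the CAS strategy of grouping classes before discriminating them
cascade = [
    ("stroma", 1.0),   # benign vs. everything else
    ("atrophy", 3.0),  # confounder vs. cancer
    ("grade3", 5.0),   # low grade vs. high grade
]

def classify(score):
    for label, threshold in cascade:
        if score < threshold:
            return label
    return "grade5"

accuracy = float(np.mean(
    [classify(s) == c for c, scores in X.items() for s in scores]))
```

Each binary stage only has to separate two relatively homogeneous groups, which is what lets the cascade outperform a single one-shot multi-class decision in the paper's setting.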
ERIC Educational Resources Information Center
Bakian, Amanda V.; Bilder, Deborah A.; Carbone, Paul S.; Hunt, Tyler D.; Petersen, Brent; Rice, Catherine E.
2015-01-01
An independent validation was conducted of the Utah Autism and Developmental Disabilities Monitoring Network's (UT-ADDM) classification of children with autism spectrum disorder (ASD). UT-ADDM final case status (n = 90) was compared with final case status as determined by independent external expert reviewers (EERs). Inter-rater reliability…
Gangaputra, Sapna; Pak, Jeong Won; Peng, Qian; Hubbard, Larry D.; Thayer, Dennis; Krason, Zbigniew; Joyce, Jeff; Danis, Ronald P.
2014-01-01
Purpose To describe the transition to digital imaging and assess any impact on ocular disease classification. Methods Film and digital images, acquired by certified photographers, were evaluated independently according to standard procedures for the following: image quality, presence of cytomegalovirus (CMV) retinitis lesions, their extent, and proximity to the disc and macula. Inter-grader agreement within the digital medium was also assessed. Results Among the fifteen eyes with CMV retinitis, the mean difference between film and digital images was 0.02 disc diameters (DD) for the linear distance from lesion edge to disc, −0.04 DD for the distance to the center of the macula, and 0.95 disc area (DA) for the area covered by CMV retinitis. There was no statistically significant difference in distance and area measurements between media. Inter-grader agreement in measurements of digital images was excellent for the estimated distances and areas. Conclusion Our results suggest that digital grading of CMV retinitis in LSOCA is comparable to that from film with respect to disease classification, measurements, and reproducibility. These findings provide support for continuity of grading data, despite the necessary transition in imaging media. PMID:21857393
McKenzie, Kirsten; Mitchell, Rebecca; Scott, Deborah Anne; Harrison, James Edward; McClure, Roderick John
2009-08-01
To examine the reliability of work-related activity coding for injury-related hospitalisations in Australia. A random sample of 4,373 injury-related hospital separations from 1 July 2002 to 30 June 2004 were obtained from a stratified random sample of 50 hospitals across four states in Australia. From this sample, cases were identified as work-related if they contained an ICD-10-AM work-related activity code (U73) allocated by either: (i) the original coder; (ii) an independent auditor, blinded to the original code; or (iii) a research assistant, blinded to both the original and auditor codes, who reviewed narrative text extracted from the medical record. The concordance of activity coding and number of cases identified as work-related using each method were compared. Of the 4,373 cases sampled, 318 cases were identified as being work-related using any of the three methods for identification. The original coder identified 217 and the auditor identified 266 work-related cases (68.2% and 83.6% of the total cases identified, respectively). Around 10% of cases were only identified through the text description review. The original coder and auditor agreed on the assignment of work-relatedness for 68.9% of cases. The best estimates of the frequency of hospital admissions for occupational injury underestimate the burden by around 32%. This is a substantial underestimate that has major implications for public policy, and highlights the need for further work on improving the quality and completeness of routine, administrative data sources for a more complete identification of work-related injuries.
Murphy, S F; Lenihan, L; Orefuwa, F; Colohan, G; Hynes, I; Collins, C G
2017-05-01
The discharge letter is a key component of the communication pathway between the hospital and primary care. Accuracy and timeliness of delivery are crucial to ensure continuity of patient care. Electronic discharge summaries (EDS) and prescriptions have been shown to improve the quality of discharge information for general practitioners (GPs). The aim of this study was to evaluate the effect of a new EDS on GP satisfaction levels and the accuracy of discharge diagnosis. A GP survey was carried out whereby semi-structured interviews were conducted with 13 GPs from three primary care centres who receive a high volume of discharge letters from the hospital. A chart review was carried out on 90 charts to compare the accuracy of ICD-10 coding by Non-Consultant Hospital Doctors (NCHDs) with that of trained Hospital In-Patient Enquiry (HIPE) coders. GP satisfaction levels were over 90 % with most aspects of the EDS, including amount of information (97 %), accuracy (95 %), GP information and follow-up (97 %) and medications (91 %). 70 % of GPs received the EDS within 2 weeks. ICD-10 coding of discharge diagnosis by NCHDs had an accuracy of 33 %, compared with 95.6 % when done by trained coders (p < 0.00001). The introduction of the EDS and prescription has led to improved quality and timeliness of communication with primary care, and to a very high satisfaction rating from GPs. ICD-10 coding was found to be grossly inaccurate when carried out by NCHDs, and it is more appropriate for this task to be carried out by trained coders.
The Challenges of Identifying and Classifying Child Sexual Abuse Material.
Kloess, Juliane A; Woodhams, Jessica; Whittle, Helen; Grant, Tim; Hamilton-Giachritsis, Catherine E
2018-02-01
The aim of the present study was to (a) assess the reliability with which indecent images of children (IIOC) are classified as being of an indecent versus nonindecent nature, and (b) examine in detail the decision-making process engaged in by law enforcement personnel who undertake the difficult task of identifying and classifying IIOC as per the current legislative offense categories. One experienced researcher and four employees from a police force in the United Kingdom coded an extensive amount of IIOC (n = 1,212-2,233) to determine if they (a) were deemed to be of an indecent nature, and (b) depicted a child. Interrater reliability analyses revealed both considerable agreement and disagreement across coders, which were followed up with two focus groups involving the four employees. The first entailed a general discussion of the aspects that made such material more or less difficult to identify; the second focused around images where there had been either agreement (n = 20) or disagreement (n = 36) across coders that the images were of an indecent nature. Using thematic analysis, a number of factors apparent within IIOC were revealed to make the determination of youthfulness and indecency significantly more challenging for coders, with most relating to the developmental stage of the victim and the ambiguity of the context of an image. Findings are discussed in light of their implications for the identification of victims of ongoing sexual exploitation/abuse, the assessment and treatment of individuals in possession of IIOC, as well as the practice of policing and sentencing this type of offending behavior.
Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja
2012-10-10
The feasibility of 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. Conducting numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. Main influence factors are shown to be the sweat composition, temperature, humidity, wind, UV-radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. Such influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. Performing three different experiments for the classification of fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa=0.46) is achieved for a general case, which is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is manually shown and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that such a method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme.
Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
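The interplay between selection (scoring brain regions) and extraction (projecting onto the selected regions) can be sketched with a simplified stand-in. The real algorithm alternates constrained subspace learning with SVM-based selection and bootstrapped parameter tuning; this toy replaces both with a univariate between-class score on synthetic "regional" features:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "regional" features: only the first 5 of 50 regions carry signal
n, d, informative = 200, 50, 5
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d))
X[:, :informative] += 1.5 * y[:, None]

# selection step (stand-in): score each region by the between-class mean gap
scores = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))

# extraction step (stand-in): project onto the top-k scoring regions, analogous
# to focusing subspace learning on the regions most likely affected by disease
k = 5
selected = np.argsort(-scores)[:k]
Z = X[:, selected]

# fraction of truly informative regions recovered by the selection step
hit_rate = float(np.mean([i < informative for i in selected]))
```

Iterating these two steps, with the classifier's own feature weights replacing the univariate score, is the integration the paper argues for.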
Aktaruzzaman, M; Migliorini, M; Tenhunen, M; Himanen, S L; Bianchi, A M; Sassi, R
2015-05-01
The work considers automatic sleep stage classification, based on heart rate variability (HRV) analysis, with a focus on the distinction of wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99 % of the data variance in each classification problem, and both of them contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4 %, 2 epochs; 83.8 %, 10 epochs) was higher than when distinguishing WAKE versus SLEEP (67.6 %, 2 epochs; 71.3 %, 10 epochs). Also, the reliability parameter (Cohen's kappa) was higher (0.68 and 0.45, respectively). Sleep staging classification based on HRV was still less precise than other staging methods, employing a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.
Information quality measurement of medical encoding support based on usability.
Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel
2013-12-01
Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of code lists as a quality indicator. Such measurements do not capture the value of code lists for practical medical encoding, and cannot be used to globally compare the quality of multiple code lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring medical code list quality. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates codes' Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated code lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a code list is fully represented by a single point within a suitably constrained feature space. Using one scheme, our approach can reliably measure and compare the information quality of hundreds of code lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Characteristics of Successful and Unsuccessful Mental Health Referrals of Refugees
Shannon, Patricia J.; Vinson, Gregory A.; Cook, Tonya; Lennon, Evelyn
2018-01-01
In this community-based participatory research study, we explored key characteristics of mental health referrals of refugees using stories of providers collected through an on-line survey. Ten coders sorted 60 stories of successful referrals and 34 stories of unsuccessful referrals into domains using the critical incident technique. Principal Components Analysis yielded categories of successful referrals that included: active care coordination, proactive resolution of barriers, establishment of trust, and culturally responsive care. Unsuccessful referrals were characterized by cultural barriers, lack of care coordination, language barriers, system barriers, and providers being unwilling to see refugees. Recommendations for training and policy are discussed. PMID:25735618
Qualities of dental chart recording and coding.
Chantravekin, Yosananda; Tasananutree, Munchulika; Santaphongse, Supitcha; Aittiwarapoj, Anchisa
2013-01-01
Chart recording and coding are important processes in the healthcare informatics system, but there have been only a few reports in the dentistry field. The objectives of this study were to assess the quality of dental chart recording and coding, as well as the effectiveness of a lecture/workshop on this topic. The study was performed by auditing patients' charts at the TU Dental Student Clinic from July 2011-August 2012. The chart recording mean scores ranged from 51.0-55.7%, whereas errors in the coding process occurred more often on the coder side than on the doctor side. The lecture/workshop improved the scores only in some topics.
NASA Technical Reports Server (NTRS)
Mcaulay, Robert J.; Quatieri, Thomas F.
1988-01-01
It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
Ivanov, Iliya V; Leitritz, Martin A; Norrenberg, Lars A; Völker, Michael; Dynowski, Marek; Ueffing, Marius; Dietter, Johannes
2016-02-01
Abnormalities of blood vessel anatomy, morphology, and ratio can serve as important diagnostic markers for retinal diseases such as AMD or diabetic retinopathy. Large cohort studies demand automated and quantitative image analysis of vascular abnormalities. Therefore, we developed an analytical software tool to enable automated standardized classification of blood vessels supporting clinical reading. A dataset of 61 images was collected from a total of 33 women and 8 men with a median age of 38 years. The pupils were not dilated, and images were taken after dark adaptation. In contrast to current methods in which classification is based on vessel profile intensity averages, and similar to human vision, local color contrast was chosen as a discriminator to allow artery-vein discrimination and arterial-venous ratio (AVR) calculation without vessel tracking. With 83% ± 1 standard error of the mean for our dataset, we achieved the best classification for weighted lightness information from a combination of the red, green, and blue channels. Tested on an independent dataset, our method reached 89% correct classification, which, when benchmarked against conventional ophthalmologic classification, shows significantly improved classification scores. Our study demonstrates that vessel classification based on local color contrast can cope with inter- or intra-image lightness variability and allows consistent AVR calculation. We offer an open-source implementation of this method upon request, which can be integrated into existing tool sets and applied to general diagnostic exams.
Maclean, Donald; Younes, Hakim Ben; Forrest, Margaret; Towers, Hazel K
2012-03-01
Accurate and timely clinical data are required for clinical and organisational purposes and are especially important for patient management, audit of surgical performance and the electronic health record. The recent introduction of computerised theatre management systems has enabled real-time (point-of-care) operative procedure coding by clinical staff. However, the accuracy of these data is unknown. The aim of this Scottish study was to compare the accuracy of theatre nurses' real-time coding on the local theatre management system with the central Scottish Morbidity Record (SMR01). Paired procedural codes were recorded, qualitatively graded for precision and compared (n = 1038). In this study, real-time, point-of-care coding by theatre nurses resulted in significant coding errors compared with the central SMR01 database. Improved collaboration between full-time coders and clinical staff using computerised decision support systems is suggested.
Cloud, Aerosol, and Volcanic Ash Retrievals Using ATSR and SLSTR with ORAC
NASA Astrophysics Data System (ADS)
McGarragh, Gregory; Poulsen, Caroline; Povey, Adam; Thomas, Gareth; Christensen, Matt; Sus, Oliver; Schlundt, Cornelia; Stapelberg, Stefan; Stengel, Martin; Grainger, Don
2015-12-01
The Optimal Retrieval of Aerosol and Cloud (ORAC) is a generalized optimal estimation system that retrieves cloud, aerosol and volcanic ash parameters using satellite imager measurements in the visible to infrared. Use of the same algorithm for different sensors and parameters leads to consistency that facilitates inter-comparison and interaction studies. ORAC currently supports ATSR, AVHRR, MODIS and SEVIRI. In this proceeding we discuss the ORAC retrieval algorithm applied to ATSR data including the retrieval methodology, the forward model, uncertainty characterization and discrimination/classification techniques. Application of ORAC to SLSTR data is discussed including the additional features that SLSTR provides relative to the ATSR heritage. The ORAC level 2 and level 3 results are discussed and an application of level 3 results to the study of cloud/aerosol interactions is presented.
Reiner, Bruce I
2018-02-01
One method for addressing existing peer review limitations is the assignment of peer review cases on a completely blinded basis, in which the peer reviewer would create an independent report which can then be cross-referenced with the primary reader report of record. By leveraging existing computerized data mining techniques, one could in theory automate and objectify the process of report data extraction, classification, and analysis, while reducing time and resource requirements intrinsic to manual peer review report analysis. Once inter-report analysis has been performed, resulting inter-report discrepancies can be presented to the radiologist of record for review, along with the option to directly communicate with the peer reviewer through an electronic data reconciliation tool aimed at collaboratively resolving inter-report discrepancies and improving report accuracy. All associated report and reconciled data could in turn be recorded in a referenceable peer review database, which provides opportunity for context and user-specific education and decision support.
NASA Astrophysics Data System (ADS)
Jeong, Jeong-Won; Kim, Tae-Seong; Shin, Dae-Chul; Do, Synho; Marmarelis, Vasilis Z.
2004-04-01
Recently it was shown that soft tissue can be differentiated with spectral unmixing and detection methods that utilize multi-band information obtained from a High-Resolution Ultrasonic Transmission Tomography (HUTT) system. In this study, we focus on tissue differentiation using the spectral target detection method based on Constrained Energy Minimization (CEM). We have developed a new tissue differentiation method called "CEM filter bank". Statistical inference on the output of each CEM filter of a filter bank is used to make a decision based on the maximum statistical significance rather than the magnitude of each CEM filter output. We validate this method through 3-D inter/intra-phantom soft tissue classification where target profiles obtained from an arbitrary single slice are used for differentiation in multiple tomographic slices. Also spectral coherence between target and object profiles of an identical tissue at different slices and phantoms is evaluated by conventional cross-correlation analysis. The performance of the proposed classifier is assessed using Receiver Operating Characteristic (ROC) analysis. Finally we apply our method to classify tiny structures inside a beef kidney such as Styrofoam balls (~1mm), chicken tissue (~5mm), and vessel-duct structures.
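The CEM filter at the heart of this approach has a closed form: given a target spectral signature d and the sample autocorrelation matrix R of the observed multi-band profiles, the weights are w = R⁻¹d / (dᵀR⁻¹d), which minimizes output energy subject to the constraint wᵀd = 1. A rough stdlib-only sketch (function and variable names are illustrative, not taken from the HUTT system, and the statistical-inference stage of the filter bank is omitted):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cem_filter(samples, target):
    """CEM weights w = R^-1 d / (d^T R^-1 d): R is the sample autocorrelation
    matrix of the observed spectra, d the target spectral signature."""
    n = len(target)
    R = [[sum(s[i] * s[j] for s in samples) / len(samples) for j in range(n)]
         for i in range(n)]
    Rinv_d = solve(R, target)
    denom = sum(target[i] * Rinv_d[i] for i in range(n))
    return [w / denom for w in Rinv_d]
```

By construction the filter responds with unit gain to the target signature while suppressing the average background energy.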
PANDORA: keyword-based analysis of protein sets by integration of annotation sources.
Kaplan, Noam; Vaaknin, Avishay; Linial, Michal
2003-10-01
Recent advances in high-throughput methods and the application of computational tools for automatic classification of proteins have made it possible to carry out large-scale proteomic analyses. Biological analysis and interpretation of sets of proteins is a time-consuming undertaking carried out manually by experts. We have developed PANDORA (Protein ANnotation Diagram ORiented Analysis), a web-based tool that provides an automatic representation of the biological knowledge associated with any set of proteins. PANDORA uses a unique approach of keyword-based graphical analysis that focuses on detecting subsets of proteins that share unique biological properties and the intersections of such sets. PANDORA currently supports SwissProt keywords, NCBI Taxonomy, InterPro entries and the hierarchical classification terms from ENZYME, SCOP and GO databases. The integrated study of several annotation sources simultaneously allows a representation of biological relations of structure, function, cellular location, taxonomy, domains and motifs. PANDORA is also integrated into the ProtoNet system, thus allowing testing thousands of automatically generated clusters. We illustrate how PANDORA enhances the biological understanding of large, non-uniform sets of proteins originating from experimental and computational sources, without the need for prior biological knowledge on individual proteins.
Clayton, Margaret F; Latimer, Seth; Dunn, Todd W; Haas, Leonard
2011-09-01
This study evaluated variables thought to influence patients' perceptions of patient-centeredness. We also compared results from two coding schemes that purport to evaluate patient-centeredness, the Measure of Patient-Centered Communication (MPCC) and the 4 Habits Coding Scheme (4HCS). 174 videotaped family practice office visits and patient self-report measures were analyzed. Patient factors contributing to positive perceptions of patient-centeredness were successful negotiation of decision-making roles and lower post-visit uncertainty. MPCC coding found visits were on average 59% patient-centered (range 12-85%). 4HCS coding showed an average of 83 points (maximum possible 115). However, patients felt their visits were highly patient-centered (mean 3.7, range 1.9-4; maximum possible 4). There was a weak correlation between the coding schemes, but no association between coding results and patient variables (number of pre-visit concerns, attainment of desired decision-making role, post-visit uncertainty, patients' perception of patient-centeredness). Coder inter-rater reliability was lower than expected; convergent and divergent validity were not supported. The 4HCS and MPCC operationalize patient-centeredness differently, illustrating a lack of conceptual clarity. The patient's perspective is important. Family practice providers can facilitate a more positive patient perception of patient-centeredness by addressing patient concerns to help reduce patient uncertainty, and by negotiating decision-making roles. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Automated Classification of Thermal Infrared Spectra Using Self-organizing Maps
NASA Technical Reports Server (NTRS)
Roush, Ted L.; Hogan, Robert
2006-01-01
Existing and planned space missions to a variety of planetary and satellite surfaces produce an ever increasing volume of spectral data. Understanding the scientific informational content in this large data volume is a daunting task. Fortunately, various statistical approaches are available to assess such data sets. Here we discuss an automated classification scheme we have developed based on Kohonen Self-Organizing Maps (SOM). The SOM process produces an output layer where spectra having similar properties lie in close proximity to each other. One major effort is partitioning this output layer into appropriate regions. This is performed by defining closed regions based upon the strength of the boundaries between adjacent cells in the SOM output layer. We use the Davies-Bouldin index as a measure of the inter-class similarities and intra-class dissimilarities that determines the optimum partition of the output layer, and hence the number of SOM clusters. This allows us to identify the natural number of clusters formed from the spectral data. Mineral spectral libraries prepared at Arizona State University (ASU) and Johns Hopkins University (JHU) are used to test and evaluate the classification scheme. We label the library sample spectra in a hierarchical scheme with class, subclass, and mineral group names. We use a portion of the spectra to train the SOM, i.e., produce the output layer, while the remaining spectra are used to test the SOM. The test spectra are presented to the SOM output layer and assigned membership to the appropriate cluster. We then evaluate these assignments to assess the scientific meaning and accuracy of the derived SOM classes as they relate to the labels. We demonstrate that unsupervised classification by SOMs can be a useful component in autonomous systems designed to identify mineral species from reflectance and emissivity spectra in the thermal IR.
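The Davies-Bouldin criterion used here is generic: for each cluster it takes the worst ratio of summed within-cluster scatter to between-centroid distance, and averages that over clusters (lower is better). A hedged pure-Python sketch, using Euclidean distance and centroid-based scatter as one common convention (the paper's exact distance measure on SOM cells may differ):

```python
import math

def davies_bouldin(clusters):
    """Davies-Bouldin index for a partition given as a list of point lists.
    DB = (1/k) * sum_i max_{j != i} (s_i + s_j) / d(c_i, c_j)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Centroid of each cluster, then mean distance of members to it (scatter).
    centroids = [tuple(sum(p[d] for p in c) / len(c) for d in range(len(c[0])))
                 for c in clusters]
    scatter = [sum(dist(p, centroids[i]) for p in c) / len(c)
               for i, c in enumerate(clusters)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / dist(centroids[i], centroids[j])
                     for j in range(k) if j != i)
    return total / k
```

Choosing the cluster count that minimizes this index is one standard way to pick the "natural" number of clusters, as the abstract describes.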
NASA Astrophysics Data System (ADS)
McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.
2016-12-01
Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques are needed that operate on radiometrically-calibrated data and cluster based on biophysical similarity rather than simply spectral similarity. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectrum of each pixel with a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
Tomizawa, Yutaka; Iyer, Prasad G; Wongkeesong, Louis M; Buttar, Navtej S; Lutzke, Lori S; Wu, Tsung-Teh; Wang, Kenneth K
2013-01-01
AIM: To investigate a classification of endocytoscopy (ECS) images in Barrett's esophagus (BE) and evaluate its diagnostic performance and interobserver variability. METHODS: ECS was applied to surveillance endoscopic mucosal resection (EMR) specimens of BE ex-vivo. The mucosal surface of the specimen was stained with 1% methylene blue and surveyed with a catheter-type endocytoscope. We selected still images that were most representative of the endoscopically suspect lesion and matched with the final histopathological diagnosis to accomplish accurate correlation. The diagnostic performance and inter-observer variability of the new classification scheme were assessed in a blinded fashion by physicians with expertise in both BE and ECS and by inexperienced physicians with no prior exposure to ECS. RESULTS: Three staff physicians and 22 gastroenterology fellows classified eight randomly assigned unknown still ECS pictures (two images per classification) into one of four histopathologic categories as follows: (1) BEC1-squamous epithelium; (2) BEC2-BE without dysplasia; (3) BEC3-BE with dysplasia; and (4) BEC4-esophageal adenocarcinoma (EAC) in BE. Accuracy of diagnosis in staff physicians and clinical fellows was, respectively, 100% and 99.4% for BEC1, 95.8% and 83.0% for BEC2, 91.7% and 83.0% for BEC3, and 95.8% and 98.3% for BEC4. Interobserver agreement of the faculty physicians and fellows in classifying each category was 0.932 and 0.897, respectively. CONCLUSION: This is the first study to investigate a classification system of ECS in BE. This ex-vivo pilot study demonstrated acceptable diagnostic accuracy and excellent interobserver agreement. PMID:24379583
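Interobserver agreement figures such as the 0.932 and 0.897 reported here are typically chance-corrected statistics. As a generic illustration of the two-rater case (the abstract does not state which coefficient was used), Cohen's kappa compares observed agreement with the agreement expected by chance from each rater's marginal label frequencies:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' category labels:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)
```

Multi-rater settings like the 25 readers here usually call for a generalization such as Fleiss' kappa, which follows the same observed-versus-chance pattern.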
Exposure of US Adolescents to Extremely Violent Movies
Worth, Keilah A.; Chambers, Jennifer Gibson; Nassau, Daniel H.; Rakhra, Balvinder K.; Sargent, James D.
2009-01-01
Objective Despite concerns about exposure to violent media, there are few data on youth exposure to violent movies. In this study we examined such exposure among young US adolescents. Methods We used a random-digit-dial survey of 6522 US adolescents aged 10 to 14 years fielded in 2003. Using previously validated methods, we determined the percentage and number of US adolescents who had seen each of 534 recently released movies. We report results for the 40 that were rated R for violence by the Motion Picture Association of America, UK 18 by the British Board of Film Classification and coded for extreme violence by trained content coders. Results The 40 violent movies were seen by a median of 12.5% of an estimated 22 million US adolescents aged 10 to 14 years. The most popular violent movie, Scary Movie, was seen by >10 million (48.1%) children, 1 million of whom were 10 years of age. Watching extremely violent movies was associated with being male, older, nonwhite, having less-educated parents, and doing poorly in school. Black male adolescents were at particularly high risk for seeing these movies; for example Blade, Training Day, and Scary Movie were seen, respectively, by 37.4%, 27.3%, and 48.1% of the sample overall versus 82.0%, 81.0%, and 80.8% of black male adolescents. Violent movie exposure was also associated with measures of media parenting, with high-exposure adolescents being significantly more likely to have a television in their bedroom and to report that their parents allowed them to watch R-rated movies. Conclusions This study documents widespread exposure of young US adolescents to movies with extreme graphic violence from movies rated R for violence and raises important questions about the effectiveness of the current movie-rating system. PMID:18676548
Exposure of US adolescents to extremely violent movies.
Worth, Keilah A; Gibson Chambers, Jennifer; Nassau, Daniel H; Rakhra, Balvinder K; Sargent, James D
2008-08-01
Despite concerns about exposure to violent media, there are few data on youth exposure to violent movies. In this study we examined such exposure among young US adolescents. We used a random-digit-dial survey of 6522 US adolescents aged 10 to 14 years fielded in 2003. Using previously validated methods, we determined the percentage and number of US adolescents who had seen each of 534 recently released movies. We report results for the 40 that were rated R for violence by the Motion Picture Association of America, UK 18 by the British Board of Film Classification and coded for extreme violence by trained content coders. The 40 violent movies were seen by a median of 12.5% of an estimated 22 million US adolescents aged 10 to 14 years. The most popular violent movie, Scary Movie, was seen by >10 million (48.1%) children, 1 million of whom were 10 years of age. Watching extremely violent movies was associated with being male, older, nonwhite, having less-educated parents, and doing poorly in school. Black male adolescents were at particularly high risk for seeing these movies; for example Blade, Training Day, and Scary Movie were seen, respectively, by 37.4%, 27.3%, and 48.1% of the sample overall versus 82.0%, 81.0%, and 80.8% of black male adolescents. Violent movie exposure was also associated with measures of media parenting, with high-exposure adolescents being significantly more likely to have a television in their bedroom and to report that their parents allowed them to watch R-rated movies. This study documents widespread exposure of young US adolescents to movies with extreme graphic violence from movies rated R for violence and raises important questions about the effectiveness of the current movie-rating system.
Stephens, C. R.; Juliano, S. A.
2012-01-01
Estimating a mosquito's vector competence, or likelihood of transmitting disease if it takes an infectious blood meal, is an important aspect of predicting when and where outbreaks of infectious diseases will occur. Vector competence can be affected by rearing temperature and by the inter- and intraspecific competition experienced by the individual mosquito during its larval development. This research investigates whether a new morphological indicator of larval rearing conditions, wing shape, can be used to reliably distinguish the temperature and competitive conditions experienced during the larval stages. Aedes albopictus and Aedes aegypti larvae were reared in low intra-specific, high intra-specific, or high inter-specific competition treatments at either 22°C or 32°C. The right wing of each dried female was removed and photographed. Nineteen landmarks and twenty semilandmarks were digitized on each wing. Shape variables were calculated using geometric morphometric software. Canonical variate analysis, randomization multivariate analysis of variance, and visualization of landmark movement using deformation grids provided evidence that although semilandmark position was significantly affected by larval competition and temperature for both species, the differences in position did not translate into differences in wing shape, as shown in deformation grids. Two classification procedures yielded success rates of 26-49%. Accounting for wing size produced no increase in classification success. There appeared to be a significant relationship between shape and size. These results, particularly the low success rate of classification based on wing shape, show that shape is unlikely to be a reliable indicator of larval rearing competition and temperature conditions for Aedes albopictus and Aedes aegypti. PMID:22897054
Using reconstructed IVUS images for coronary plaque classification.
Caballero, Karla L; Barajas, Joel; Pujol, Oriol; Rodriguez, Oriol; Radeva, Petia
2007-01-01
Coronary plaque rupture is one of the principal causes of sudden death in western societies. Reliable diagnosis of the different plaque types is of great interest to the medical community for predicting their evolution and applying an effective treatment. To achieve this, tissue classification must be performed. Intravascular Ultrasound (IVUS) is a technique to explore the vessel walls and to observe their histological properties. In this paper, a method to reconstruct IVUS images from the raw Radio Frequency (RF) data coming from the ultrasound catheter is proposed. This framework offers a normalization scheme to compare different patient studies accurately. The automatic tissue classification is based on texture analysis and the Adaptive Boosting (AdaBoost) learning technique combined with Error Correcting Output Codes (ECOC). In this study, 9 in-vivo cases are reconstructed with 7 different parameter sets. This method improves the classification rate based on images, yielding 91% well-detected tissue using the best parameter set. It also reduces the inter-patient variability compared with the analysis of DICOM images, which are obtained from the commercial equipment.
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig
2015-01-01
Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
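The simplest form of the aggregation step described above is a plurality vote over all volunteers' answers for an image, with the winning share serving as a crude agreement measure. The published Snapshot Serengeti pipeline is more elaborate (it handles multi-species images and answer evenness); this hypothetical sketch shows only the core idea:

```python
from collections import Counter

def consensus(classifications):
    """Plurality vote over volunteers' answers for one image, plus the
    fraction of answers agreeing with the winner as an agreement proxy."""
    counts = Counter(classifications)
    species, votes = counts.most_common(1)[0]
    return species, votes / len(classifications)

# Four volunteers classified the same camera-trap image:
label, agreement = consensus(["zebra", "zebra", "wildebeest", "zebra"])
```

Images with low agreement scores can then be flagged for expert review instead of being accepted automatically.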
Analysis of space telescope data collection systems
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1984-01-01
The Multiple Access (MA) communication link of the Space Telescope (ST) is described. An expected performance bit error rate is presented. The historical perspective and rationale behind the ESTL space shuttle end-to-end tests are given. The concatenated coding scheme using a convolutional encoder for the outer coder is developed. The ESTL end-to-end tests on the space shuttle communication link are described. Most important is how a concatenated coding system will perform; this is a go/no-go system with respect to received signal-to-noise ratio. A discussion of the verification requirements and specification document is presented, and those sections that apply to the Space Telescope data and communications system are discussed. The Space Telescope System consists of the Space Telescope Orbiting Observatory (ST), the Space Telescope Science Institute, and the Space Telescope Operations Control Center. The MA system consists of the ST, the return link from the ST via the Tracking and Data Relay Satellite System to White Sands, and from White Sands via the Domestic Communications Satellite to the STOCC.
McRoy, Susan; Rastegar-Mojarad, Majid; Wang, Yanshan; Ruddy, Kathryn J; Haddad, Tufia C; Liu, Hongfang
2018-05-15
Patient education materials given to breast cancer survivors may not be a good fit for their information needs. Needs may change over time, be forgotten, or be misreported, for a variety of reasons. An automated content analysis of survivors' postings to online health forums can identify expressed information needs over a span of time and be repeated regularly at low cost. Identifying these unmet needs can guide improvements to existing education materials and the creation of new resources. The primary goals of this project are to assess the unmet information needs of breast cancer survivors from their own perspectives and to identify gaps between information needs and current education materials. This approach employs computational methods for content modeling and supervised text classification to data from online health forums to identify explicit and implicit requests for health-related information. Potential gaps between needs and education materials are identified using techniques from information retrieval. We provide a new taxonomy for the classification of sentences in online health forum data. 260 postings from two online health forums were selected, yielding 4179 sentences for coding. After annotation of data and training alternative one-versus-others classifiers, a random forest-based approach achieved F1 scores from 66% (Other, dataset2) to 90% (Medical, dataset1) on the primary information types. 136 expressions of need were used to generate queries to indexed education materials. Upon examination of the best two pages retrieved for each query, 12% (17/136) of queries were found to have relevant content by all coders, and 33% (45/136) were judged to have relevant content by at least one. Text from online health forums can be analyzed effectively using automated methods. 
Our analysis confirms that breast cancer survivors have many information needs that are not covered by the written documents they typically receive, as our results suggest that at most a third of breast cancer survivors' questions would be addressed by the materials currently provided to them. ©Susan McRoy, Majid Rastegar-Mojarad, Yanshan Wang, Kathryn J. Ruddy, Tufia C. Haddad, Hongfang Liu. Originally published in JMIR Cancer (http://cancer.jmir.org), 15.05.2018.
Rastegar-Mojarad, Majid; Wang, Yanshan; Ruddy, Kathryn J; Haddad, Tufia C; Liu, Hongfang
2018-01-01
Background Patient education materials given to breast cancer survivors may not be a good fit for their information needs. Needs may change over time, be forgotten, or be misreported, for a variety of reasons. An automated content analysis of survivors' postings to online health forums can identify expressed information needs over a span of time and be repeated regularly at low cost. Identifying these unmet needs can guide improvements to existing education materials and the creation of new resources. Objective The primary goals of this project are to assess the unmet information needs of breast cancer survivors from their own perspectives and to identify gaps between information needs and current education materials. Methods This approach employs computational methods for content modeling and supervised text classification to data from online health forums to identify explicit and implicit requests for health-related information. Potential gaps between needs and education materials are identified using techniques from information retrieval. Results We provide a new taxonomy for the classification of sentences in online health forum data. 260 postings from two online health forums were selected, yielding 4179 sentences for coding. After annotation of data and training alternative one-versus-others classifiers, a random forest-based approach achieved F1 scores from 66% (Other, dataset2) to 90% (Medical, dataset1) on the primary information types. 136 expressions of need were used to generate queries to indexed education materials. Upon examination of the best two pages retrieved for each query, 12% (17/136) of queries were found to have relevant content by all coders, and 33% (45/136) were judged to have relevant content by at least one. Conclusions Text from online health forums can be analyzed effectively using automated methods. 
Our analysis confirms that breast cancer survivors have many information needs that are not covered by the written documents they typically receive, as our results suggest that at most a third of breast cancer survivors’ questions would be addressed by the materials currently provided to them. PMID:29764801
Heerkens, Yvonne F; de Weerd, Marjolein; Huber, Machteld; de Brouwer, Carin P M; van der Veen, Sabina; Perenboom, Rom J M; van Gool, Coen H; Ten Napel, Huib; van Bon-Martens, Marja; Stallinga, Hillegonda A; van Meeteren, Nico L U
2018-03-01
The ICF (International Classification of Functioning, Disability and Health) framework (used worldwide to describe 'functioning' and 'disability'), including the ICF scheme (a visualization of functioning as the result of interaction with health condition and contextual factors), needs reconsideration. The purpose of this article is to discuss alternative ICF schemes. The ICF was reconsidered via a literature review and discussions with 23 Dutch ICF experts. Twenty-six experts were invited to rank the three resulting alternative schemes. The literature review provided five themes: 1) societal developments; 2) health and research influences; 3) conceptualization of health; 4) models/frameworks of health and disability; and 5) ICF criticism (e.g. the position of 'health condition' at the top and the role of 'contextual factors'). Experts concluded that the ICF scheme gives the impression that the medical perspective is dominant instead of the biopsychosocial perspective. Three alternative ICF schemes were ranked by 16 (62%) experts, resulting in one preferred scheme. There is a need for a new ICF scheme, better reflecting the ICF framework, for further (inter)national consideration. These Dutch schemes should be reviewed on a global scale, to develop a scheme that is more consistent with current and foreseen developments and changing ideas on health. Implications for Rehabilitation: We propose that policy makers at the community, regional, and (inter)national levels consider the use of the alternative schemes of the International Classification of Functioning, Disability and Health within their plans to promote the functioning and health of their citizens, and that researchers and teachers incorporate the alternative schemes into their research and education to emphasize the biopsychosocial paradigm.
We propose to set up an international Delphi procedure, involving citizens (including patients) and experts in healthcare, occupational care, research, education, policy, and planning, to reach consensus on an alternative scheme of the International Classification of Functioning, Disability and Health. We recommend discussing the alternatives to the present scheme of the International Classification of Functioning, Disability and Health in the present update and revision process within the World Health Organization, as part of the discussion on the future of the International Classification of Functioning, Disability and Health framework (including its ontology, title, and relation with the International Classification of Diseases). We recommend revising the definition of personal factors and drafting a list of personal factors that can be used in policy making, clinical practice, research, and education, and putting effort into the revision of the present list of environmental factors to make it more useful in, e.g., occupational health care.
ERIC Educational Resources Information Center
Day, A. C.
1975-01-01
ALLC members are divided here into pure linguists, pure programmers, and linguist programmers. Five computer languages and the use of packages and coders are discussed briefly. It is suggested that the pure programmers are best able to help the pure linguists with their programming problems. (RM)
Texture analysis based on the Hermite transform for image classification and segmentation
NASA Astrophysics Data System (ADS)
Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus
2012-06-01
Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increases the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve the inter-class separability, reduce the dimensionality of the feature vectors, and lower the computational cost during the classification stage. We exhaustively evaluated the correct classification rate of real, randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
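Since the Hermite transform's analysis filters are built from Gaussian derivatives, the basic building blocks are easy to write down. A sketch of sampled 1-D kernels up to second order, truncated at 3 sigma (a simplification of the transform's actual filter design, which also involves Hermite polynomial weighting and 2-D separable filtering):

```python
import math

def gaussian_derivative_kernel(sigma, order, radius=None):
    """Sampled 1-D Gaussian (order 0) or its first/second derivative,
    the kind of filter the Hermite transform builds on."""
    if radius is None:
        radius = int(3 * sigma)
    g = []
    for x in range(-radius, radius + 1):
        base = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        if order == 0:
            g.append(base)                                   # smoothing kernel
        elif order == 1:
            g.append(-x / (sigma * sigma) * base)            # edge detector
        elif order == 2:
            g.append((x * x / sigma ** 4 - 1 / sigma ** 2) * base)  # ridge/bar detector
    return g
```

Convolving an image row with these kernels (and their transposed versions for columns) yields the low-order responses that the texture features are derived from.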
Classification of postural profiles among mouth-breathing children by learning vector quantization.
Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T
2011-01-01
Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of the changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ was compared with that of other classification models: self-organizing maps, multilayer perceptrons trained with back-propagation, Bayesian networks, naive Bayes, J48 decision trees, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under the ROC curve (AUC), and inter-rater agreement (kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, when using the training dataset, and 0.95 and 0.90, respectively, when using the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.
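The prototype-update idea behind LVQ can be shown in a minimal LVQ1 sketch: the winning prototype moves toward a same-class sample and away from a different-class sample. The prototype counts, learning-rate schedule, and the synthetic "posture-like" features below are all hypothetical, for illustration only.

```python
import numpy as np

def train_lvq1(X, y, n_protos_per_class=2, lr=0.1, epochs=50, seed=0):
    """Minimal LVQ1: prototypes initialized from class samples, then
    attracted to same-class samples and repelled from other classes."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], n_protos_per_class, replace=False)
        protos.append(X[idx].astype(float).copy())
        labels += [c] * n_protos_per_class
    P, L = np.vstack(protos), np.array(labels)
    for ep in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winner prototype
            step = lr * (1 - ep / epochs)                 # decaying learning rate
            delta = step * (X[i] - P[j])
            P[j] += delta if L[j] == y[i] else -delta
    return P, L

def predict_lvq(P, L, X):
    """Label each sample with the class of its nearest prototype."""
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return L[np.argmin(d, axis=1)]

# Toy two-class data standing in for postural feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(3, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
P, L = train_lvq1(X, y)
acc = (predict_lvq(P, L, X) == y).mean()
```

Because the prototypes themselves live in the feature space, they can be inspected directly, which is what makes LVQ attractive for characterizing postural profiles rather than just classifying them.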
Understanding the local public health workforce: labels versus substance.
Merrill, Jacqueline A; Keeling, Jonathan W
2014-11-01
The workforce is a key component of the nation's public health (PH) infrastructure, but little is known about the skills of local health department (LHD) workers to guide policy and planning. To profile a sample of LHD workers using classification schemes for PH work (the substance of what is done) and PH job titles (the labeling of what is done) to determine if work content is consistent with job classifications. A secondary analysis was conducted on data collected from 2,734 employees from 19 LHDs using a taxonomy of 151 essential tasks performed, knowledge possessed, and resources available. Each employee was classified by job title using a schema developed by PH experts. The inter-rater agreement was calculated within job classes and congruence on tasks, knowledge, and resources for five exemplar classes was examined. The average response rate was 89%. Overall, workers exhibited moderate agreement on tasks and poor agreement on knowledge and resources. Job classes with higher agreement included agency directors and community workers; those with lower agreement were mid-level managers such as program directors. Findings suggest that local PH workers within a job class perform similar tasks but vary in training and access to resources. Job classes that are specific and focused have higher agreement whereas job classes that perform in many roles show less agreement. The PH worker classification may not match employees' skill sets or how LHDs allocate resources, which may be a contributor to unexplained fluctuation in public health system performance. Copyright © 2014. Published by Elsevier Inc.
Tumor Heterogeneity in Breast Cancer
Turashvili, Gulisa; Brogi, Edi
2017-01-01
Breast cancer is a heterogeneous disease and differs greatly among different patients (intertumor heterogeneity) and even within each individual tumor (intratumor heterogeneity). Clinical and morphologic intertumor heterogeneity is reflected by staging systems and histopathologic classification of breast cancer. Heterogeneity in the expression of established prognostic and predictive biomarkers, hormone receptors, and human epidermal growth factor receptor 2 oncoprotein is the basis for targeted treatment. Molecular classifications are indicators of genetic tumor heterogeneity, which is probed with multigene assays and can lead to improved stratification into low- and high-risk groups for personalized therapy. Intratumor heterogeneity occurs at the morphologic, genomic, transcriptomic, and proteomic levels, creating diagnostic and therapeutic challenges. Understanding the molecular and cellular mechanisms of tumor heterogeneity that are relevant to the development of treatment resistance is a major area of research. Despite the improved knowledge of the complex genetic and phenotypic features underpinning tumor heterogeneity, there has been only limited advancement in diagnostic, prognostic, or predictive strategies for breast cancer. The current guidelines for reporting of biomarkers aim to maximize patient eligibility for targeted therapy, but do not take into account intratumor heterogeneity. The molecular classification of breast cancer is not implemented in routine clinical practice. Additional studies and in-depth analysis are required to understand the clinical significance of rapidly accumulating data. This review highlights inter- and intratumor heterogeneity of breast carcinoma with special emphasis on pathologic findings, and provides insights into the clinical significance of molecular and cellular mechanisms of heterogeneity. PMID:29276709
Automated EEG sleep staging in the term-age baby using a generative modelling approach.
Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten
2018-06-01
We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from the broader quiet sleep (QS) and active sleep (AS) stages into four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability, by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
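The personalized feature scaling step is simple to state concretely: each recording's epoch-by-feature matrix is standardized by that recording's own mean and standard deviation, so inter-recording offsets and gain differences do not dominate the classifier. A sketch with hypothetical feature matrices:

```python
import numpy as np

def personalized_scale(features_by_recording):
    """Standardize each recording's (n_epochs, n_features) matrix by its
    own per-feature mean and standard deviation, as the abstract describes."""
    scaled = []
    for F in features_by_recording:
        mu, sd = F.mean(axis=0), F.std(axis=0)
        sd = np.where(sd == 0, 1.0, sd)   # guard against constant features
        scaled.append((F - mu) / sd)
    return scaled

rng = np.random.default_rng(0)
# Two hypothetical recordings with very different baselines and gains.
rec_a = rng.normal(10.0, 2.0, (120, 5))
rec_b = rng.normal(-3.0, 0.5, (90, 5))
scaled = personalized_scale([rec_a, rec_b])
```

After scaling, both recordings occupy the same feature scale, so a single GMM or HMM trained across recordings sees comparable inputs.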
NASA Astrophysics Data System (ADS)
Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli
In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, closely connected with homeland security, business transactions, and people's daily lives. Accurate and efficient intrusion detection technologies are required to safeguard network systems and the critical information transmitted through them. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and a traffic data set generated by a private LAN testbed show promising results, with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusions.
Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek
2016-06-01
Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either 1) simple discrete sentence features (DSF model) or 2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance than the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale at which MI mechanisms-of-change research and fidelity monitoring are conducted. Copyright © 2016 Elsevier Inc. All rights reserved.
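Cohen's kappa, the agreement statistic used here (and throughout this literature), corrects observed agreement for the agreement expected by chance from each coder's label distribution. A self-contained sketch, with hypothetical MI-style utterance codes:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' labels over the same units:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    ca, cb = Counter(codes_a), Counter(codes_b)
    p_chance = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical utterance codes from two coders (labels are illustrative).
a = ["open_q", "affirm", "open_q", "closed_q", "affirm", "follow"]
b = ["open_q", "affirm", "closed_q", "closed_q", "affirm", "follow"]
kappa = cohens_kappa(a, b)
```

Here five of six utterances agree (p_obs = 5/6) but chance agreement is 0.25, giving kappa = 7/9 ≈ 0.78, just above the "good" threshold of 0.60 mentioned in the abstract.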
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. 
Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
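The construction of the 90%, 95%, and 98% lists amounts to ranking codes by frequency and taking the most frequent codes until they cover the desired fraction of all assignments. A sketch with a hypothetical toy code distribution (the ICD-9-CM codes below are placeholders, not the study's actual top codes):

```python
from collections import Counter

def codes_covering(code_counts, fraction):
    """Smallest prefix of the most-frequent codes whose assignments
    cover at least `fraction` of all code assignments."""
    total = sum(code_counts.values())
    chosen, covered = [], 0
    for code, n in code_counts.most_common():
        chosen.append(code)
        covered += n
        if covered / total >= fraction:
            break
    return chosen

# Hypothetical distribution of procedure-code assignments.
counts = Counter({"74.1": 50, "79.36": 25, "86.22": 15, "54.11": 6, "21.71": 4})
short_list = codes_covering(counts, 0.90)
```

Applied to the study's 435 observed codes, this procedure yields the 111-, 180-, and 278-code lists covering 90%, 95%, and 98% of assignments.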
Mastering cognitive development theory in computer science education
NASA Astrophysics Data System (ADS)
Gluga, Richard; Kay, Judy; Lister, Raymond; Simon; Kleitman, Sabina
2013-03-01
To design an effective computer science curriculum, educators require a systematic method of classifying the difficulty level of learning activities and assessment tasks. This is important for curriculum design and implementation and for communication between educators. Different educators must be able to use the method consistently, so that classified activities and assessments are comparable across the subjects of a degree, and, ideally, comparable across institutions. One widespread approach to supporting this is to write learning objectives in terms of Bloom's Taxonomy. This, or any other such classification, is likely to be more effective if educators can use it consistently, in the way experts would use it. To this end, we present the design and evaluation of our online interactive web-based tutorial system, which can be configured and used to offer training in different classification schemes. We report results from three evaluations. In the first, 17 computer science educators completed a tutorial on using Bloom's Taxonomy to classify programming examination questions. In the second, 20 computer science educators completed a Neo-Piagetian tutorial. The third evaluation compared inter-rater reliability scores of computer science educators classifying programming questions using Bloom's Taxonomy, before and after taking our tutorial. Based on the results from these evaluations, we discuss the effectiveness of our tutorial system design for teaching computer science educators how to systematically and consistently classify programming examination questions. We also discuss the suitability of Bloom's Taxonomy and Neo-Piagetian theory for achieving this goal. The Bloom's and Neo-Piagetian tutorials are made available as a community resource.
The contributions of this paper are the following: the tutorial system for learning classification schemes for the purpose of coding the difficulty of computing learning materials; its evaluation; new insights into the consistency that computing educators can achieve using Bloom; and first insights into the use of Neo-Piagetian theory by a group of classifiers.
A system framework of inter-enterprise machining quality control based on fractal theory
NASA Astrophysics Data System (ADS)
Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng
2014-03-01
In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework for inter-enterprise machining quality control based on fractal theory was proposed. In this framework, the fractal-specific characteristics of the inter-enterprise machining quality control function were analysed, and a model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, a goal-driven strategy for inter-enterprise quality control and a dynamic organisation strategy for inter-enterprise quality improvement were constructed through characteristic analysis of this model. In addition, an architecture for inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study of an application was presented. The results showed that the proposed method is feasible and can provide guidance for quality control and support for product reliability in inter-enterprise machining processes.
A Round Robin evaluation of AMSR-E soil moisture retrievals
NASA Astrophysics Data System (ADS)
Mittelbach, Heidi; Hirschi, Martin; Nicolai-Shaw, Nadine; Gruber, Alexander; Dorigo, Wouter; de Jeu, Richard; Parinussa, Robert; Jones, Lucas A.; Wagner, Wolfgang; Seneviratne, Sonia I.
2014-05-01
Large-scale and long-term soil moisture observations based on remote sensing are promising data sets for investigating and understanding various processes of the climate system, including the water and biochemical cycles. Currently, the ESA Climate Change Initiative for soil moisture is developing and evaluating a consistent global long-term soil moisture data set based on merging passive and active remotely sensed soil moisture. Within this project, an inter-comparison of algorithms for AMSR-E and ASCAT Level 2 products was conducted separately to assess the performance of different retrieval algorithms. Here we present the inter-comparison of AMSR-E Level 2 soil moisture products. These include the public data sets from the University of Montana (UMT), the Japan Aerospace Exploration Agency (JAXA), VU University Amsterdam (VUA; two algorithms), and the National Aeronautics and Space Administration (NASA). All participating algorithms are applied to the same AMSR-E Level 1 data set. Ascending and descending paths of scaled surface soil moisture are considered and evaluated separately at daily and monthly resolution over the 2007-2011 time period. Absolute values of soil moisture as well as their long-term anomalies (i.e. removing the mean seasonal cycle) and short-term anomalies (i.e. removing a five-week moving average) are evaluated. The evaluation is based on conventional measures like correlation and unbiased root-mean-square differences, as well as on the application of the triple collocation method. As reference data, surface soil moisture from 75 quality-controlled soil moisture sites of the International Soil Moisture Network (ISMN) is used, covering a wide range of vegetation density and climate conditions. For the application of the triple collocation method, surface soil moisture estimates from the Global Land Data Assimilation System are used as a third independent data set.
We find that the participating algorithms generally display better performance for the descending than for the ascending paths. A first classification of the sites by geographical location shows that the algorithms have a very similar average performance. Further classifications of the sites by land cover type and climate region will be conducted, which might reveal more diverse performance among the algorithms.
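The short-term anomalies used in this kind of evaluation can be computed by subtracting a centred moving average from the daily series. The sketch below assumes a 35-day (five-week) window, edge padding at the series boundaries, and synthetic data; all three are illustrative choices, not the project's exact processing.

```python
import numpy as np

def short_term_anomaly(series, window=35):
    """Daily series minus a centred moving average (five weeks for
    daily data), isolating short-lived deviations such as rain events."""
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")   # hold edge values
    kernel = np.ones(window) / window
    smooth = np.convolve(padded, kernel, mode="valid")
    return series - smooth

# Synthetic year of surface soil moisture: mean + seasonal cycle + one event.
t = np.arange(365)
seasonal = 0.3 * np.sin(2 * np.pi * t / 365)   # slow seasonal cycle
spikes = np.zeros(365)
spikes[180] = 0.2                              # short-lived wetting event
sm = 0.25 + seasonal + spikes
anom = short_term_anomaly(sm)
```

The slow seasonal component is almost entirely absorbed by the moving average, while the one-day event survives in the anomaly series, which is why anomaly-based metrics probe a retrieval's sensitivity to short-term dynamics.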
iDEAS: A web-based system for dry eye assessment.
Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G
2016-07-01
Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. The evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test used for dry eye diagnosis. However, it is a time-consuming task with a high degree of intra- as well as inter-observer variability, which makes the use of a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which classifies an image into one interference pattern, and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions to test its performance. Both services have been evaluated in terms of a set of performance metrics using the annotations of different experts. The processing time of both services has also been measured for efficiency purposes. iDEAS is a web-based application which provides a fast, reliable environment for dry eye assessment. The system allows practitioners to share images, clinical information, and automatic assessments between remote computers. Additionally, it saves time for experts, reduces inter-expert variability, and can be used in both clinical and research settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Classification of Tree Species in Overstorey Canopy of Subtropical Forest Using QuickBird Images.
Lin, Chinsu; Popescu, Sorin C; Thomson, Gavin; Tsogt, Khongor; Chang, Chein-I
2015-01-01
This paper proposes a supervised classification scheme to identify 40 tree species (2 coniferous, 38 broadleaf) belonging to 22 families and 36 genera in high spatial resolution QuickBird multispectral images (HMS). Overall kappa coefficient (OKC) and species conditional kappa coefficients (SCKC) were used to evaluate classification performance on training samples and to estimate accuracy and uncertainty on test samples. Baseline classification performance using HMS images and vegetation index (VI) images was evaluated with OKC values of 0.58 and 0.48 respectively, but performance improved significantly (up to 0.99) when these were used in combination with an HMS spectral-spatial texture image (SpecTex). One of the 40 species had very high conditional kappa coefficient performance (SCKC ≥ 0.95) using 4-band HMS and 5-band VI images, but only five species had lower performance (0.68 ≤ SCKC ≤ 0.94) using the SpecTex images. When the SpecTex images were combined with the Visible Atmospherically Resistant Index (VARI), there was a significant improvement in performance on the training samples. The same level of improvement could not be replicated in the test samples, indicating that a high degree of uncertainty exists in species classification accuracy, which may be due to individual tree crown density, leaf greenness (inter-canopy gaps), and noise in the background environment (intra-canopy gaps). These factors increase uncertainty in the spectral texture features and therefore represent potential problems when using pixel-based classification techniques for multi-species classification.
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to quantify viewer responses to visual defects that appear in low-bit-rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion-compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
Coronary angiogram video compression for remote browsing and archiving applications.
Ouled Zaid, Azza; Fradj, Bilel Ben
2010-12-01
In this paper, we propose an H.264/AVC-based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics of equal-size slice regions to determine the regions with relevant information content, which are encoded using the H.264 coding paradigm. The other regions are compressed using fixed-block motion compensation and conventional hard-decision quantization. Experiments have shown that, at the same bitrate, this procedure reduces the H.264 coder's computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements. Copyright © 2010 Elsevier Ltd. All rights reserved.
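Energy-based selection of diagnostically relevant regions might be sketched as follows, using per-block gradient energy and a relative threshold. The block size, energy measure, and threshold are assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

def diagnostic_blocks(frame, block=32, thresh_ratio=0.25):
    """Boolean mask of blocks whose local gradient energy is at least
    thresh_ratio of the maximum block energy in the frame."""
    gy, gx = np.gradient(frame.astype(float))
    energy = gy ** 2 + gx ** 2
    h, w = frame.shape
    nb_y, nb_x = h // block, w // block
    e = energy[: nb_y * block, : nb_x * block]
    e = e.reshape(nb_y, block, nb_x, block).sum(axis=(1, 3))  # per-block energy
    return e >= thresh_ratio * e.max()

# Synthetic frame: flat background with one bright vessel-like structure.
frame = np.zeros((128, 128))
frame[32:64, 32:96] = 200.0
mask = diagnostic_blocks(frame)
```

In the scheme the abstract describes, blocks flagged by such a mask would go to the full H.264 path, and the remaining low-energy blocks to the cheaper fixed-block path.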
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is treated differently for text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of the segmentation-driven quantizer adaptation method applied to compressing documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
Kubota, Yoshie; Yano, Yoshitaka; Seki, Susumu; Takada, Kaori; Sakuma, Mio; Morimoto, Takeshi; Akaike, Akinori; Hiraide, Atsushi
2011-04-11
To determine the value of using the Roter Interaction Analysis System during objective structured clinical examinations (OSCEs) to assess pharmacy students' communication competence. As pharmacy students completed a clinical OSCE involving an interview with a simulated patient, 3 experts used a global rating scale to assess students' overall performance in the interview, and both the student's and patient's languages were coded using the Roter Interaction Analysis System (RIAS). The coders recorded the number of utterances (ie, units of spoken language) in each RIAS category. Correlations between the raters' scores and the number and types of utterances were examined. There was a significant correlation between students' global rating scores on the OSCE and the number of utterances in the RIAS socio-emotional category but not the RIAS business category. The RIAS proved to be a useful tool for assessing the socio-emotional aspect of students' interview skills.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results.
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
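The DPCM loop underlying this codec can be illustrated with a minimal one-dimensional sketch. The reconstruction levels below are invented for illustration; the actual non-uniform quantizer and predictor designs belong to the NASA work, not this sketch:

```python
# Minimal 1-D DPCM sketch: a fixed previous-sample predictor plus a
# non-uniform quantizer (fine steps near zero, coarse steps for large errors).
LEVELS = [-48, -24, -10, -3, 0, 3, 10, 24, 48]   # illustrative levels only

def quantize(error):
    """Map a prediction error to the nearest reconstruction level."""
    return min(LEVELS, key=lambda q: abs(q - error))

def dpcm_encode(samples):
    pred, codes = 0, []
    for s in samples:
        q = quantize(s - pred)   # quantized prediction error
        codes.append(q)
        pred += q                # track the decoder's reconstruction exactly
    return codes

def dpcm_decode(codes):
    pred, out = 0, []
    for q in codes:
        pred += q                # accumulate quantized errors
        out.append(pred)
    return out

signal = [10, 12, 15, 40, 41]
recon = dpcm_decode(dpcm_encode(signal))
```

In the real codec the quantized errors would then feed the multilevel Huffman coder; here they are left as plain integers to keep the loop visible.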
Inter-Sensor Comparison of Satellite Ocean Color Products from GOCI and MODIS
2013-02-26
...current map for this region. However, the NCOM-modeled and GOCI-measured data need to be validated using in-situ measurements. Subject terms: satellite ocean color products, GOCI, MODIS, phytoplankton; Navy Coastal Ocean Model (NCOM).
Multi-fractal detrended texture feature for brain tumor classification
NASA Astrophysics Data System (ADS)
Reza, Syed M. S.; Mays, Randall; Iftekharuddin, Khan M.
2015-03-01
We propose a novel non-invasive brain tumor type classification using Multi-fractal Detrended Fluctuation Analysis (MFDFA) [1] in structural magnetic resonance (MR) images. This preliminary work investigates the efficacy of the MFDFA features, along with our novel texture feature known as multifractional Brownian motion (mBm) [2], in classifying (grading) brain tumors as High Grade (HG) and Low Grade (LG). Based on prior performance, Random Forest (RF) [3] is employed for tumor grading using two different datasets, BRATS-2013 [4] and BRATS-2014 [5]. Quantitative scores such as precision, recall, and accuracy are obtained using the confusion matrix. On average, 90% precision and 85% recall from the inter-dataset cross-validation confirm the efficacy of the proposed method.
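The quantitative scores mentioned follow directly from the confusion matrix; the sketch below, with an invented 2x2 HG/LG example (not BRATS results), shows the standard computation:

```python
def precision_recall(confusion):
    """Per-class precision and recall from a confusion matrix.

    confusion[i][j] = number of samples with true class i predicted as class j.
    """
    n = len(confusion)
    prec, rec = [], []
    for c in range(n):
        tp = confusion[c][c]
        pred_c = sum(confusion[i][c] for i in range(n))  # column sum: predicted c
        true_c = sum(confusion[c])                       # row sum: truly c
        prec.append(tp / pred_c if pred_c else 0.0)
        rec.append(tp / true_c if true_c else 0.0)
    return prec, rec

# Toy HG/LG grading example (values are illustrative):
cm = [[18, 2],   # 18 true-HG graded correctly, 2 misgraded as LG
      [3, 7]]    # 3 true-LG misgraded as HG, 7 graded correctly
prec, rec = precision_recall(cm)
```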
A comparison of four measures of moral reasoning.
Wilmoth, G H; McFarland, S G
1977-08-01
Kohlberg's Moral Judgment Scale, Gilligan et al.'s Sexual Moral Judgment Scale, Maitland and Goldman's Objective Moral Judgment Scale, and Hogan's Maturity of Moral Judgment Scale were examined for reliability and inter-scale relationships. All measures except the Objective Moral Judgment Scale had good reliabilities. The obtained relations between the Moral Judgment Scale and the Sexual Moral Judgment Scale replicated previous research. The Objective Moral Judgment Scale was not found to validly assess the Kohlberg stages. The Maturity of Moral Judgment Scale scores were strongly related to the subjects' classification on the Kohlberg stages, and the scale appears to offer a reliable, quickly scored, and valid index of mature thought, although the scale's continuous scores do not permit clear stage classification.
Combining spatial and spectral information to improve crop/weed discrimination algorithms
NASA Astrophysics Data System (ADS)
Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.
2012-01-01
Reducing herbicide spraying is an important key to improving weed management both environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images derived from the SimAField model has been used and combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper, and shows an important improvement in weed detection from 86% to more than 95%.
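The two-stage idea, using labels produced by the spatial (row-detection) algorithm to train a spectral classifier, can be sketched with a toy nearest-centroid model on two-band reflectance vectors; the data and the centroid classifier are illustrative stand-ins for the paper's actual spectral method:

```python
# Stage 2 sketch: labels from a spatial crop-row detector bootstrap a
# spectral classifier, here a toy nearest-centroid on reflectance vectors.
def train_centroids(spectra, labels):
    """Mean spectrum per class from (spectrum, label) training pairs."""
    sums, counts = {}, {}
    for x, y in zip(spectra, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, centroids):
    """Assign x to the class with the nearest mean spectrum."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Inter-row pixels labeled by row geometry train the spectral model...
spatial_spectra = [[0.2, 0.6], [0.25, 0.55], [0.7, 0.3], [0.65, 0.35]]
spatial_labels = ["crop", "crop", "weed", "weed"]
model = train_centroids(spatial_spectra, spatial_labels)
# ...which can then label an intra-row pixel the spatial method cannot reach.
intra_row_pixel = [0.68, 0.32]
```

The key design point is that no manual labeling is needed: the spatial stage supplies the training set automatically, so the spectral stage avoids the usual prior learning process.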
Fundamental movement skills testing in children with cerebral palsy.
Capio, Catherine M; Sit, Cindy H P; Abernethy, Bruce
2011-01-01
To examine the inter-rater reliability and comparative validity of product-oriented and process-oriented measures of fundamental movement skills among children with cerebral palsy (CP). In total, 30 children with CP aged 6 to 14 years (Mean = 9.83, SD = 2.5) and classified in Gross Motor Function Classification System (GMFCS) levels I-III performed tasks of catching, throwing, kicking, horizontal jumping and running. Process-oriented assessment was undertaken using a number of components of the Test of Gross Motor Development (TGMD-2), while product-oriented assessment included measures of time taken, distance covered and number of successful task completions. Cohen's kappa, Spearman's rank correlation coefficient and tests to compare correlated correlation coefficients were performed. Very good inter-rater reliability was found. Process-oriented measures for running and jumping had significant associations with GMFCS, as did seven product-oriented measures for catching, throwing, kicking, running and jumping. Product-oriented measures of catching, kicking and running had stronger associations with GMFCS than the corresponding process-oriented measures. Findings support the validity of process-oriented measures for running and jumping and of product-oriented measures of catching, throwing, kicking, running and jumping. However, product-oriented measures for catching, kicking and running appear to have stronger associations with functional abilities of children with CP, and are thus recommended for use in rehabilitation processes.
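Cohen's kappa, the chance-corrected agreement statistic used for the inter-rater reliability assessment above, can be computed for two raters as follows; this is a generic sketch, not the authors' analysis code:

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' nominal codes
    of the same items: kappa = (p_obs - p_exp) / (1 - p_exp)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of agreement:
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies:
    p_exp = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.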
Inter-BSs virtual private network for privacy and security enhanced 60 GHz radio-over-fiber system
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Chen, Chen; Zhang, Wei; Jin, Wei; Qiu, Kun; Li, Changchun; Jiang, Ning
2013-06-01
A novel inter-base-station (inter-BS) based virtual private network (VPN) for the privacy- and security-enhanced 60 GHz radio-over-fiber (RoF) system using optical code-division multiplexing (OCDM) is proposed and demonstrated experimentally. By establishing an inter-BS VPN overlaying the network structure of a 60 GHz RoF system, express and private paths for the communication of end-users under different BSs can be offered. To effectively establish the inter-BS VPN, OCDM encoding/decoding technology is employed in the RoF system. In each BS, a 58 GHz millimeter-wave (MMW) is used as the inter-BS VPN channel, while a 60 GHz MMW is used as the common central station (CS)-BS communication channel. The optical carriers used for the downlink, uplink and VPN link transmissions are all generated simultaneously in a lightwave-centralized CS by utilizing the four-wave mixing (FWM) effect in a semiconductor optical amplifier (SOA). The obtained results verify the feasibility of the proposed inter-BS VPN configuration in the 60 GHz RoF system.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2015-03-01
The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using a bright-field microscope. This is a time-consuming, partly subjective and tedious process. Furthermore, repeated examinations of a slide yield intra- and inter-observer variances. For this reason an automation of morphological bone marrow analysis is pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells and thereby shorten the examination duration.
NASA Astrophysics Data System (ADS)
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear, parametric dependency measure, to account for both linear and nonlinear inter-band dependency in spectral segmentation of the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most widely used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
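The MI-based spectral segmentation can be sketched as follows: estimate MI between each pair of adjacent (discretized) bands and cut the band sequence wherever the dependency drops below a threshold. The function names, the plug-in MI estimator, and the threshold rule are illustrative assumptions; the paper's estimator and grouping rule may differ:

```python
from math import log

def mutual_info(x, y):
    """Plug-in MI estimate (in nats) between two discretized band vectors."""
    n = len(x)
    pxy, px, py = {}, {}, {}
    for a, b in zip(x, y):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum(c / n * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def segment_bands(bands, threshold):
    """Cut the band sequence wherever adjacent-band MI falls below threshold,
    yielding contiguous groups of mutually dependent bands."""
    segments, current = [], [0]
    for i in range(1, len(bands)):
        if mutual_info(bands[i - 1], bands[i]) < threshold:
            segments.append(current)
            current = [i]
        else:
            current.append(i)
    segments.append(current)
    return segments
```

Each resulting segment would then feed its own small stacked autoencoder, which is what reduces the cost relative to one SAE over all bands.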
An adaptive DPCM algorithm for predicting contours in NTSC composite video signals
NASA Astrophysics Data System (ADS)
Cox, N. R.
An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. The contour parameters (slope thresholds) are optimized using four 'typical' television frames that have been sampled at three times the color subcarrier frequency. Three variations of the basic predictor are simulated and compared quantitatively with three non-adaptive predictors of similar complexity. By incorporating a dual-word-length coder and buffer memory, high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. The effect of channel error propagation is also investigated.
Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery
NASA Technical Reports Server (NTRS)
Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj
1994-01-01
Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
Some practical universal noiseless coding techniques, part 3, module PSl14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external predictor can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
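The "choose the best of many variable-length options per block" step can be illustrated with a Rice-code cost model, where each option is a parameter k and the coder picks the k that minimizes the total coded length for the block. This is a sketch of the selection idea only, not the module's actual code options:

```python
def rice_length(n, k):
    """Bits to code the non-negative integer n with Rice parameter k:
    a unary quotient (n >> k), a one-bit terminator, and k remainder bits."""
    return (n >> k) + 1 + k

def best_option(block, k_options=range(8)):
    """Per block, adaptively pick the code option with the fewest total bits."""
    costs = {k: sum(rice_length(n, k) for n in block) for k in k_options}
    k_best = min(costs, key=costs.get)
    return k_best, costs[k_best]

# A low-entropy block of small integers favors a small k; a block of
# larger values is coded more cheaply with a larger remainder field.
k_small, bits_small = best_option([0, 1, 0, 2])
k_large, bits_large = best_option([16, 20, 17, 18])
```

Because the cost of every option can be computed from simple counts, the selection needs no table lookups, which mirrors the 'Huffman equivalent' property the abstract describes.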
Cresswell, Kathrin; Morrison, Zoe; Kalra, Dipak; Sheikh, Aziz
2012-01-01
We sought to understand how clinical information relating to the management of depression is routinely coded in different clinical settings and the perspectives of and implications for different stakeholders with a view to understanding how these may be aligned. Qualitative investigation exploring the views of a purposefully selected range of healthcare professionals, managers, and clinical coders spanning primary and secondary care. Our dataset comprised 28 semi-structured interviews, a focus group, documents relating to clinical coding standards and participant observation of clinical coding activities. We identified a range of approaches to coding clinical information including templates and order entry systems. The challenges inherent in clearly establishing a diagnosis, identifying appropriate clinical codes and possible implications of diagnoses for patients were particularly prominent in primary care. Although a range of managerial and research benefits were identified, there were no direct benefits from coded clinical data for patients or professionals. Secondary care staff emphasized the role of clinical coders in ensuring data quality, which was at odds with the policy drive to increase real-time clinical coding. There was overall no evidence of clear-cut direct patient care benefits to inform immediate care decisions, even in primary care where data on patients with depression were more extensively coded. A number of important secondary uses were recognized by healthcare staff, but the coding of clinical data to serve these ends was often poorly aligned with clinical practice and patient-centered considerations. The current international drive to encourage clinical coding by healthcare professionals during the clinical encounter may need to be critically examined.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-24
... quotations into any inter-dealer quotation system that permits quotation updates on a real-time basis to... particular OTC Equity Security in any inter-dealer quotation system, including any system that the SEC has... Securities in which it displays market making interest via an inter-dealer quotation system.'' See FINRA Rule...
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
Expert and Novice Fire Ground Command Decisions
1987-04-01
Christopher P. Brezovic, Marvin Thordsen, and Janet Taynor served as interviewers and coders and made numerous contributions to the study...the police interview: Cognitive retrieval mnemonics versus hypnosis. Journal of Applied Psychology, 70, 2, 401-412. Gettys, C. F. (1983). Research and
Diagnostic reliability of MMPI-2 computer-based test interpretations.
Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E
2014-09-01
Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions sometimes vary widely across different CBTI programs, even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTI programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
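A mean-r reliability index like those reported here is simply the average correlation over all rater (or CBTI) pairs; a generic sketch of that computation (not the study's analysis code) is:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_pairwise_r(sorts):
    """Average correlation over all pairs of raters' Q-sort scores --
    a simple index of inter-interpreter agreement."""
    rs = [pearson_r(sorts[i], sorts[j])
          for i in range(len(sorts)) for j in range(i + 1, len(sorts))]
    return sum(rs) / len(rs)
```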
Martens, Jonas; Daly, Daniel; Deschamps, Kevin; Staes, Filip; Fernandes, Ricardo J
2016-12-01
Variability of electromyographic (EMG) recordings is a complex phenomenon rarely examined in swimming. Our purposes were to investigate inter-individual variability in muscle activation patterns during front crawl swimming and to assess whether clusters of sub-patterns were present. Bilateral muscle activity of the rectus abdominis (RA) and deltoideus medialis (DM) was recorded using wireless surface EMG in 15 adult male competitive swimmers. The amplitude of the median EMG trial of six upper arm movement cycles was used for the inter-individual variability assessment, quantified with the coefficient of variation, the coefficient of quartile variation, the variance ratio and the mean deviation. Key features were selected based on qualitative and quantitative classification strategies for entry into a k-means cluster analysis to examine the presence of strong sub-patterns. Such strong sub-patterns were found when clustering into two, three and four clusters. Inter-individual variability in a group of highly skilled swimmers was higher than in other cyclic movements, in contrast to what has been reported in the previous 50 years of EMG research in swimming. This leads to the conclusion that coaches should be careful in using overall reference EMG information to enhance the individual swimming technique of their athletes. Copyright © 2016 Elsevier Ltd. All rights reserved.
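Two of the variability indices named above, the coefficient of variation and the coefficient of quartile variation, can be computed as follows; the quartile rule here is a crude index-based one, so treat the sketch as illustrative rather than as the study's exact estimator:

```python
def coeff_variation(values):
    """CV = population standard deviation / mean."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return sd / mean

def coeff_quartile_variation(values):
    """CQV = (Q3 - Q1) / (Q3 + Q1), with simple index-based quartiles."""
    s = sorted(values)
    q1 = s[len(s) // 4]
    q3 = s[(3 * len(s)) // 4]
    return (q3 - q1) / (q3 + q1)
```

Applied across swimmers at each time point of the normalized movement cycle, high values of either index flag phases where activation patterns diverge most between individuals.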