Sample records for accurate classification system

  1. A drone detection with aircraft classification based on a camera array

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong

    2018-03-01

In recent years, the rapid popularity of drones has led many people to operate them, bringing a range of security issues to sensitive areas such as airports and military sites. Realizing fine-grained classification and providing fast and accurate detection of different drone models is one of the important ways to solve these problems. The main challenges of fine-grained classification are that: (1) there are various types of drones, and the models are complex and diverse; (2) recognition must be fast and accurate, yet existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately detect and classify fine-grained drone models using HD cameras.

  2. 76 FR 9541 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

...U.S. Census Bureau. Title: 2012 Economic Census General Classification Report. OMB Control Number... Business Register is that establishments are assigned an accurate economic classification, based on the North American Industry Classification System (NAICS). The primary purpose of the "2012 Economic Census...

  3. Analysis of framelets for breast cancer diagnosis.

    PubMed

    Thivya, K S; Sakthivel, P; Venkata Sai, P M

    2016-01-01

Breast cancer is the second most threatening tumor among women. The effective way of reducing breast cancer mortality is early detection, which helps to improve the diagnostic process. Digital mammography plays a significant role in screening for breast carcinoma at an early stage. Even so, it is very difficult for radiologists to identify abnormalities accurately in routine screening. The possibility of precise breast cancer screening is improved by predicting the exact type of abnormality through Computer Aided Diagnosis (CAD) systems. The two most important indicators of breast malignancy are microcalcifications and masses. In this study, the framelet transform, a multiresolution analysis, is investigated for the classification of these two indicators. Statistical and co-occurrence features are extracted from framelet-decomposed mammograms at different resolution levels, and a support vector machine is employed for classification with k-fold cross validation. The system achieves 94.82% and 100% accuracy in normal/abnormal classification (stage I) and benign/malignant classification (stage II) of the mass classification system, and 98.57% and 100% for the microcalcification system, when using the MIAS database.
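    As a rough sketch of the classification stage described in this abstract (not the authors' code), the snippet below feeds placeholder feature vectors into a scikit-learn SVM evaluated with k-fold cross validation; the random features, binary labels, and k = 10 are assumptions.

```python
# Minimal sketch: placeholder "statistical / co-occurrence" features stand in for
# the framelet-derived features; the pipeline shape (features -> SVM -> k-fold CV)
# mirrors the abstract's setup, not its actual data or parameters.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(322, 20))        # placeholder feature vectors, one per mammogram
y = rng.integers(0, 2, size=322)      # placeholder labels: 0 = normal, 1 = abnormal

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)   # k-fold cross validation (k = 10 assumed)
print(f"mean CV accuracy: {scores.mean():.3f}")
```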

  4. Types of Seizures Affecting Individuals with TSC

    MedlinePlus

New Terms for Seizure Classifications: The International League Against Epilepsy has approved a ... seizures. This new system will make diagnosis and classification of seizures easier and more accurate.

  5. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the learning parameters that most affect classification accuracy are determined automatically. HiRLiC is applied to a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis shows that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.
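    To make the interpretability claim concrete, here is a minimal, purely illustrative evaluation of one IF-THEN fuzzy rule over two hypothetical per-object features; the membership functions, feature names, and crop class are invented and are not part of HiRLiC.

```python
# Illustrative sketch only (not HiRLiC): evaluating the firing strength of a single
# interpretable fuzzy rule. Feature names, membership shapes, and the crop class
# are assumptions.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_maize(ndvi, texture):
    # IF NDVI is HIGH AND texture is SMOOTH THEN class is 'maize'
    high_ndvi = tri(ndvi, 0.5, 0.8, 1.0)
    smooth    = tri(texture, 0.0, 0.2, 0.5)
    return min(high_ndvi, smooth)        # rule firing strength (t-norm = min)

print(rule_maize(ndvi=0.75, texture=0.15))   # -> 0.75, strong support for 'maize'
```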

  6. Distinguishing between the Permeability Relationships with Absorption and Metabolism To Improve BCS and BDDCS Predictions in Early Drug Discovery

    PubMed Central

    2015-01-01

    The biopharmaceutics classification system (BCS) and biopharmaceutics drug distribution classification system (BDDCS) are complementary classification systems that can improve, simplify, and accelerate drug discovery, development, and regulatory processes. Drug permeability has been widely accepted as a screening tool for determining intestinal absorption via the BCS during the drug development and regulatory approval processes. Currently, predicting clinically significant drug interactions during drug development is a known challenge for industry and regulatory agencies. The BDDCS, a modification of BCS that utilizes drug metabolism instead of intestinal permeability, predicts drug disposition and potential drug–drug interactions in the intestine, the liver, and most recently the brain. Although correlations between BCS and BDDCS have been observed with drug permeability rates, discrepancies have been noted in drug classifications between the two systems utilizing different permeability models, which are accepted as surrogate models for demonstrating human intestinal permeability by the FDA. Here, we recommend the most applicable permeability models for improving the prediction of BCS and BDDCS classifications. We demonstrate that the passive transcellular permeability rate, characterized by means of permeability models that are deficient in transporter expression and paracellular junctions (e.g., PAMPA and Caco-2), will most accurately predict BDDCS metabolism. These systems will inaccurately predict BCS classifications for drugs that particularly are substrates of highly expressed intestinal transporters. Moreover, in this latter case, a system more representative of complete human intestinal permeability is needed to accurately predict BCS absorption. PMID:24628254

  7. Distinguishing between the permeability relationships with absorption and metabolism to improve BCS and BDDCS predictions in early drug discovery.

    PubMed

    Larregieu, Caroline A; Benet, Leslie Z

    2014-04-07

    The biopharmaceutics classification system (BCS) and biopharmaceutics drug distribution classification system (BDDCS) are complementary classification systems that can improve, simplify, and accelerate drug discovery, development, and regulatory processes. Drug permeability has been widely accepted as a screening tool for determining intestinal absorption via the BCS during the drug development and regulatory approval processes. Currently, predicting clinically significant drug interactions during drug development is a known challenge for industry and regulatory agencies. The BDDCS, a modification of BCS that utilizes drug metabolism instead of intestinal permeability, predicts drug disposition and potential drug-drug interactions in the intestine, the liver, and most recently the brain. Although correlations between BCS and BDDCS have been observed with drug permeability rates, discrepancies have been noted in drug classifications between the two systems utilizing different permeability models, which are accepted as surrogate models for demonstrating human intestinal permeability by the FDA. Here, we recommend the most applicable permeability models for improving the prediction of BCS and BDDCS classifications. We demonstrate that the passive transcellular permeability rate, characterized by means of permeability models that are deficient in transporter expression and paracellular junctions (e.g., PAMPA and Caco-2), will most accurately predict BDDCS metabolism. These systems will inaccurately predict BCS classifications for drugs that particularly are substrates of highly expressed intestinal transporters. Moreover, in this latter case, a system more representative of complete human intestinal permeability is needed to accurately predict BCS absorption.

  8. Changing Patient Classification System for Hospital Reimbursement in Romania

    PubMed Central

    Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian

    2010-01-01

    Aim To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Methods Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). Results The lack of a Romanian system for the calculation of cost-weights imposed the necessity to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians’ knowledge on diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system led them sometimes to change patients' diagnoses in order to receive more funding. Conclusion Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care. PMID:20564769

  9. Changing patient classification system for hospital reimbursement in Romania.

    PubMed

    Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian

    2010-06-01

    To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). The lack of a Romanian system for the calculation of cost-weights imposed the necessity to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians' knowledge on diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case-mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system led them sometimes to change patients' diagnoses in order to receive more funding. Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case-mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care.

  10. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI).

    PubMed

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-06-01

Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, in particular when a new classification is being developed and implemented. However, determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements for each type, the available infrastructure and the classification scheme itself. The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to its prerequisites for implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and was implemented as an interactive system. There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is impossible due to the immature development of the electronic medical record and problems in the use of language in medical documentation. Therefore, a model was proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme, and on decision logic that specifies the characters of the code step by step through a web-based interactive user interface. It is composed of three phases to select the Target, Action and Means, respectively, for an intervention. The proposed model suited the current status of clinical documentation and coding in Iran, as well as the structure of the new classification scheme. Our results show it is practical. However, the model needs to be evaluated in the next stage of the research.
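    The three-phase selection (Target, then Action, then Means, each choice narrowing the final code) can be pictured as a walk down a small hierarchy. The sketch below is hypothetical; the entities and code characters are placeholders, not actual IRCHI content.

```python
# Hypothetical sketch of three-phase code selection (Target -> Action -> Means).
# The hierarchy and code strings below are invented placeholders, not IRCHI data.
IRCHI_TREE = {
    "eye": {                       # phase 1: Target
        "excision": {              # phase 2: Action
            "laser": "A11.1",      # phase 3: Means -> final code (placeholder)
            "open approach": "A11.2",
        },
        "repair": {"suture": "A12.1"},
    },
}

def select_code(tree):
    """Walk the hierarchy interactively, one phase at a time."""
    node = tree
    while isinstance(node, dict):
        options = list(node)
        choice = options[0]        # in a real UI the coder would choose; take the first option here
        print("selected:", choice)
        node = node[choice]
    return node                    # leaf = intervention code

print(select_code(IRCHI_TREE))     # -> A11.1
```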

  11. Synergy of airborne LiDAR and Worldview-2 satellite imagery for land cover and habitat mapping: A BIO_SOS-EODHaM case study for the Netherlands

    NASA Astrophysics Data System (ADS)

    Mücher, C. A.; Roupioz, L.; Kramer, H.; Bogers, M. M. B.; Jongman, R. H. G.; Lucas, R. M.; Kosmidou, V. E.; Petrou, Z.; Manakos, I.; Padoa-Schioppa, E.; Adamo, M.; Blonda, P.

    2015-05-01

A major challenge is to develop a biodiversity observation system that is cost effective and applicable in any geographic region. Measuring and reliably reporting trends and changes in biodiversity requires, among other things, detailed and accurate land cover and habitat maps produced in a standard and comparable way. The objective of this paper is to assess the EODHaM (EO Data for Habitat Mapping) classification results for a Dutch case study. The EODHaM system was developed within the BIO_SOS (The BIOdiversity multi-SOurce monitoring System: from Space TO Species) project and contains the decision rules for each land cover and habitat class based on spectral and height information. One of the main findings is that canopy height models, as derived from LiDAR, in combination with very high resolution satellite imagery provide a powerful input for the EODHaM system for the purpose of generic land cover and habitat mapping for any location across the globe. The assessment of the EODHaM classification results based on field data showed an overall accuracy of 74% for the land cover classes as described according to the Food and Agriculture Organization (FAO) Land Cover Classification System (LCCS) taxonomy at level 3, while the overall accuracy was lower (69.0%) for the habitat map based on the General Habitat Category (GHC) system for habitat surveillance and monitoring. A GHC habitat class is determined for each mapping unit on the basis of the composition of the individual life forms and height measurements. The classification showed very good results for forest phanerophytes (FPH) when individual life forms were analyzed in terms of their percentage coverage estimates per mapping unit from the LCCS classification and validated with field surveys. Analysis for shrubby chamaephytes (SCH) showed less accurate results, but this might also be due to less accurate field estimates of percentage coverage. Overall, the EODHaM classification results encouraged us to derive the heights of all vegetated objects in the Netherlands from LiDAR data, in preparation for new habitat classifications.

  12. Advanced eddy current test signal analysis for steam generator tube defect classification and characterization

    NASA Astrophysics Data System (ADS)

    McClanahan, James Patrick

Eddy Current Testing (ECT) is a Non-Destructive Examination (NDE) technique that is widely used in power generating plants (both nuclear and fossil) to test the integrity of heat exchanger (HX) and steam generator (SG) tubing. Specifically for this research, laboratory-generated, flawed tubing data were examined. The purpose of this dissertation is to develop and implement an automated method for the classification and an advanced characterization of defects in HX and SG tubing. These two improvements enhanced the robustness of characterization as compared to traditional bobbin-coil ECT data analysis methods. A more robust classification and characterization of the tube flaw in-situ (while the SG is on-line but not when the plant is operating) should provide valuable information to the power industry. The following are the conclusions reached from this research. A feature extraction program acquiring relevant information from both the mixed, absolute and differential data was successfully implemented. The continuous wavelet transform (CWT) was utilized to extract more information from the mixed, complex differential data. Image processing techniques, used to extract the information contained in the generated CWT, classified the data with a high success rate. The data were accurately classified, utilizing the compressed feature vector and a Bayes classification system. An estimation of the upper bound for the probability of error, using the Bhattacharyya distance, was successfully applied to the Bayesian classification. The classified data were separated according to flaw type (classification) to enhance characterization. The characterization routine used dedicated, flaw-type-specific ANNs that made the characterization of the tube flaw more robust. The inclusion of outliers may help complete the feature space so that classification accuracy is increased. Given that the eddy current test signals appear very similar, there may not be sufficient information to make an extremely accurate (>95%) classification or an advanced characterization using this system. It is necessary to have a larger database for more accurate system learning.
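    As a reminder of how the Bhattacharyya distance yields the upper bound on the Bayes probability of error mentioned above, the sketch below computes the Gaussian-case distance and the corresponding bound for two made-up two-dimensional feature distributions; none of the numbers come from the eddy current data.

```python
# Sketch of the Bhattacharyya-distance upper bound on Bayes error for two
# Gaussian classes; the class statistics and priors below are invented.
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([2.0, 1.0]), np.array([[1.0, 0.3], [0.3, 1.5]])
D_B = bhattacharyya_gaussian(mu1, cov1, mu2, cov2)
p1 = p2 = 0.5                                   # equal priors assumed
error_bound = np.sqrt(p1 * p2) * np.exp(-D_B)   # P(error) <= sqrt(p1*p2) * exp(-D_B)
print(f"Bhattacharyya distance: {D_B:.3f}, Bayes error upper bound: {error_bound:.3f}")
```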

  13. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

Army Research Laboratory, ARL-TR-6761, January 2014: Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences... Approved for public release.

  14. Overview of classification systems in peripheral artery disease.

    PubMed

    Hardman, Rulon L; Jazaeri, Omid; Yi, J; Smith, M; Gupta, Rajan

    2014-12-01

Peripheral artery disease (PAD), secondary to atherosclerotic disease, is currently the leading cause of morbidity and mortality in the western world. While PAD is common, it is estimated that the majority of patients with PAD are undiagnosed and undertreated. The challenge in the treatment of PAD is to accurately diagnose the symptoms and determine treatment for each patient. The varied presentations of peripheral vascular disease have led to numerous classification schemes throughout the literature. Consistent grading of patients leads to both objective criteria for treating patients and a baseline for clinical follow-up. Reproducible classification systems are also important in clinical trials and when comparing medical, surgical, and endovascular treatment paradigms. This article reviews the various classification systems for PAD and the advantages of each system.

  15. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI)

    PubMed Central

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-01-01

Introduction: Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, in particular when a new classification is being developed and implemented. However, determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements for each type, the available infrastructure and the classification scheme itself. Aim: The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. Methods: First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to its prerequisites for implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and was implemented as an interactive system. Results: There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is impossible due to the immature development of the electronic medical record and problems in the use of language in medical documentation. Therefore, a model was proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme, and on decision logic that specifies the characters of the code step by step through a web-based interactive user interface. It is composed of three phases to select the Target, Action and Means, respectively, for an intervention. Conclusion: The proposed model suited the current status of clinical documentation and coding in Iran, as well as the structure of the new classification scheme. Our results show it is practical. However, the model needs to be evaluated in the next stage of the research. PMID:28883671

  16. A domains-based taxonomy of supported accommodation for people with severe and persistent mental illness.

    PubMed

    Siskind, Dan; Harris, Meredith; Pirkis, Jane; Whiteford, Harvey

    2013-06-01

    A lack of definitional clarity in supported accommodation and the absence of a widely accepted system for classifying supported accommodation models creates barriers to service planning and evaluation. We undertook a systematic review of existing supported accommodation classification systems. Using a structured system for qualitative data analysis, we reviewed the stratification features in these classification systems, identified the key elements of supported accommodation and arranged them into domains and dimensions to create a new taxonomy. The existing classification systems were mapped onto the new taxonomy to verify the domains and dimensions. Existing classification systems used either a service-level characteristic or programmatic approach. We proposed a taxonomy based around four domains: duration of tenure; patient characteristics; housing characteristics; and service characteristics. All of the domains in the taxonomy were drawn from the existing classification structures; however, none of the existing classification structures covered all of the domains in the taxonomy. Existing classification systems are regionally based, limited in scope and lack flexibility. A domains-based taxonomy can allow more accurate description of supported accommodation services, aid in identifying the service elements likely to improve outcomes for specific patient populations, and assist in service planning.

  17. The 7th lung cancer TNM classification and staging system: Review of the changes and implications.

    PubMed

    Mirsadraee, Saeed; Oswal, Dilip; Alizadeh, Yalda; Caulo, Andrea; van Beek, Edwin

    2012-04-28

Lung cancer is the most common cause of death from cancer in males, accounting for more than 1.4 million deaths in 2008. It is a growing concern in China, Asia and Africa as well. Accurate staging of the disease is an important part of its management, as it provides an estimate of the patient's prognosis and identifies treatment strategies. It also helps to build a database for future staging projects. A major revision of lung cancer staging has been announced with effect from January 2010. The new classification is based on a larger surgical and non-surgical cohort of patients, and is thus more accurate in terms of outcome prediction compared with the previous classification. There are several original papers regarding this new classification which give a comprehensive description of the methodology, the changes in the staging and the statistical analysis. This overview is a simplified description of the changes in the new classification and their potential impact on patients' treatment and prognosis.

  18. Photometric brown-dwarf classification. I. A method to identify and accurately classify large samples of brown dwarfs without spectroscopy

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.

    2015-02-01

Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg², by combining SDSS, UKIDSS LAS, and WISE data. Sources in the magnitude range 13.0 < J < 17.5, with colours redder than 0.8 mag, were then classified by comparison against template colours of quasars, stars, and brown dwarfs. The L and T templates, spectral types L0 to T8, were created by identifying previously known sources with spectroscopic classifications, and fitting polynomial relations between colour and spectral type. Results: Of the 192 known L and T dwarfs with reliable photometry in the surveyed area and magnitude range, 189 are recovered by our selection and classification method. We have quantified the accuracy of the classification method both externally, with spectroscopy, and internally, by creating synthetic catalogues and accounting for the uncertainties. We find that, brighter than J = 17.5, photo-type classifications are accurate to one spectral sub-type, and are therefore competitive with spectroscopic classifications. The resultant catalogue of 1157 L and T dwarfs will be presented in a companion paper.
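    The core of the photo-type scheme, classifying a source by the template whose colours best match the observed colours in a chi-squared sense, can be sketched in a few lines. The template colours, errors, and two-colour setup below are invented for illustration and are not the published polynomial relations.

```python
# Toy version of the photo-type idea: pick the spectral-type template that
# minimises chi-squared against the observed colours. Numbers are made up.
import numpy as np

templates = {                      # hypothetical mean colours per type (two colours)
    "L0": np.array([1.3, 1.6]),
    "L5": np.array([1.7, 2.1]),
    "T5": np.array([0.3, 2.4]),
}

def photo_type(observed, errors):
    chi2 = {t: np.sum(((observed - c) / errors) ** 2) for t, c in templates.items()}
    return min(chi2, key=chi2.get), chi2

best, chi2 = photo_type(observed=np.array([1.6, 2.0]), errors=np.array([0.1, 0.15]))
print(best, {k: round(v, 1) for k, v in chi2.items()})   # -> L5 is the best-fitting template
```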

  19. Meta-learning framework applied in bioinformatics inference system design.

    PubMed

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow in which the user provides feedback on final classification decisions, which are stored in conjunction with the analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameters. The obtained inference systems were also contrasted with other standard classification methods, and accurate prediction capabilities were observed.

  20. Spectrally based mapping of riverbed composition

    USGS Publications Warehouse

    Legleiter, Carl; Stegman, Tobin K.; Overstreet, Brandon T.

    2016-01-01

    Remote sensing methods provide an efficient means of characterizing fluvial systems. This study evaluated the potential to map riverbed composition based on in situ and/or remote measurements of reflectance. Field spectra and substrate photos from the Snake River, Wyoming, USA, were used to identify different sediment facies and degrees of algal development and to quantify their optical characteristics. We hypothesized that accounting for the effects of depth and water column attenuation to isolate the reflectance of the streambed would enhance distinctions among bottom types and facilitate substrate classification. A bottom reflectance retrieval algorithm adapted from coastal research yielded realistic spectra for the 450 to 700 nm range; but bottom reflectance-based substrate classifications, generated using a random forest technique, were no more accurate than classifications derived from above-water field spectra. Additional hypothesis testing indicated that a combination of reflectance magnitude (brightness) and indices of spectral shape provided the most accurate riverbed classifications. Convolving field spectra to the response functions of a multispectral satellite and a hyperspectral imaging system did not reduce classification accuracies, implying that high spectral resolution was not essential. Supervised classifications of algal density produced from hyperspectral data and an inferred bottom reflectance image were not highly accurate, but unsupervised classification of the bottom reflectance image revealed distinct spectrally based clusters, suggesting that such an image could provide additional river information. We attribute the failure of bottom reflectance retrieval to yield more reliable substrate maps to a latent correlation between depth and bottom type. Accounting for the effects of depth might have eliminated a key distinction among substrates and thus reduced discriminatory power. Although further, more systematic study across a broader range of fluvial environments is needed to substantiate our initial results, this case study suggests that bed composition in shallow, clear-flowing rivers potentially could be mapped remotely.
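    A toy version of the "reflectance magnitude plus spectral-shape indices" classification discussed above, using a random forest on simulated spectra; the band count, indices, and substrate classes are assumptions, not the Snake River data.

```python
# Sketch only: brightness and simple spectral-shape indices feed a random forest.
# Spectra and class labels are simulated, so the accuracy printed is meaningless
# beyond illustrating the workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, n_bands = 300, 60                                  # ~60 bands over 450-700 nm (assumed)
spectra = rng.random((n, n_bands))
labels = rng.integers(0, 3, size=n)                   # e.g. sand / gravel / algal-covered bed

brightness = spectra.mean(axis=1, keepdims=True)      # reflectance magnitude
shape = spectra / brightness                          # normalised spectral shape
slope = np.diff(shape, axis=1)                        # simple shape (slope) indices
X = np.hstack([brightness, slope])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, labels, cv=5).mean())
```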

  1. Coal-cleaning plant refuse characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavalet, J.R.; Torak, E.R.

    1985-06-01

This report describes a study performed for the Electric Power Research Institute's Coal Cleaning Test Facility in Homer City, Pennsylvania. The purpose of the study was to design standard methods for chemically and physically classifying refuse generated by physical coal cleaning and to construct a matrix that will accurately predict how a particular refuse will react to particular disposal methods, based solely on raw-coal characteristics and the process used to clean the coal. The value of such a classification system (which has not existed to this point) is the ability to design efficient and economical systems for disposing of specific coal cleaning refuse. The report describes the project's literature search and a four-tier classification system. It also provides designs for test piles, sampling procedures, and guidelines for a series of experiments to test the classification system and create an accurate, reliable predictive matrix. 38 refs., 39 figs., 35 tabs.

  2. A novel risk classification system for 30-day mortality in children undergoing surgery

    PubMed Central

    Walter, Arianne I.; Jones, Tamekia L.; Huang, Eunice Y.; Davis, Robert L.

    2018-01-01

A simple, objective and accurate way of grouping children undergoing surgery into clinically relevant risk groups is needed. The purpose of this study is to develop and validate a preoperative risk classification system for postsurgical 30-day mortality in children undergoing a wide variety of operations. The National Surgical Quality Improvement Project-Pediatric participant use file data for calendar years 2012-2014 were analyzed to determine the preoperative variables most associated with death within 30 days of operation (D30). Risk groups were created using classification tree analysis based on these preoperative variables. The resulting risk groups were validated using 2015 data, and applied to neonates and higher-risk CPT codes to determine validity in high-risk subpopulations. A five-level risk classification was found to be most accurate. The preoperative need for ventilation, oxygen support, or inotropic support, sepsis, the need for emergent surgery, and a do-not-resuscitate order defined non-overlapping groups with observed rates of D30 varying from 0.075% (Very Low Risk) to 38.6% (Very High Risk). When CPT codes for which death was never observed are eliminated, or when the system is applied to neonates, the groupings remain predictive of death in an ordinal manner. PMID:29351327
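    A minimal sketch of the classification tree approach on synthetic data: six binary preoperative predictors stand in for the NSQIP-Pediatric variables and a toy risk function generates the 30-day mortality outcome, so the resulting splits are illustrative only.

```python
# Illustrative only: a shallow classification tree grouping synthetic patients
# into risk strata. Predictors, risk function, and thresholds are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# columns: ventilation, oxygen support, inotropes, sepsis, emergent surgery, DNR order
X = rng.integers(0, 2, size=(5000, 6))
risk = 0.001 + 0.05 * X[:, 0] + 0.04 * X[:, 2] + 0.03 * X[:, 3]   # toy mortality risk
y = rng.random(5000) < risk                                       # 30-day death indicator

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200).fit(X, y)
print(export_text(tree, feature_names=[
    "ventilation", "oxygen", "inotropes", "sepsis", "emergent", "dnr"]))
```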

  3. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  4. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
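    A rough sketch (not the authors' architecture) of a patch-based CNN for medium-resolution pixels, where each training sample is a small multiband reflectance patch centred on the pixel to be labelled; the patch size, band count, and layer widths are assumptions.

```python
# Sketch of a patch-based CNN in PyTorch; architecture details are assumptions,
# not the paper's network. Each input is a (bands x patch x patch) reflectance cube.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, bands=7, n_classes=8, patch=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * patch * patch, n_classes)

    def forward(self, x):                       # x: (batch, bands, patch, patch)
        f = self.features(x)
        return self.classifier(f.flatten(1))    # per-pixel class scores

model = PatchCNN()
patches = torch.randn(16, 7, 5, 5)              # 16 fake 5x5 patches of 7-band reflectance
print(model(patches).shape)                     # torch.Size([16, 8])
```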

  5. A "TNM" classification system for cancer pain: the Edmonton Classification System for Cancer Pain (ECS-CP).

    PubMed

    Fainsinger, Robin L; Nekolaichuk, Cheryl L

    2008-06-01

    The purpose of this paper is to provide an overview of the development of a "TNM" cancer pain classification system for advanced cancer patients, the Edmonton Classification System for Cancer Pain (ECS-CP). Until we have a common international language to discuss cancer pain, understanding differences in clinical and research experience in opioid rotation and use remains problematic. The complexity of the cancer pain experience presents unique challenges for the classification of pain. To date, no universally accepted pain classification measure can accurately predict the complexity of pain management, particularly for patients with cancer pain that is difficult to treat. In response to this gap in clinical assessment, the Edmonton Staging System (ESS), a classification system for cancer pain, was developed. Difficulties in definitions and interpretation of some aspects of the ESS restricted acceptance and widespread use. Construct, inter-rater reliability, and predictive validity evidence have contributed to the development of the ECS-CP. The five features of the ECS-CP--Pain Mechanism, Incident Pain, Psychological Distress, Addictive Behavior and Cognitive Function--have demonstrated value in predicting pain management complexity. The development of a standardized classification system that is comprehensive, prognostic and simple to use could provide a common language for clinical management and research of cancer pain. An international study to assess the inter-rater reliability and predictive value of the ECS-CP is currently in progress.

  6. Characteristics of a global classification system for perinatal deaths: a Delphi consensus study.

    PubMed

    Wojcieszek, Aleena M; Reinebrant, Hanna E; Leisher, Susannah Hopkins; Allanson, Emma; Coory, Michael; Erwich, Jan Jaap; Frøen, J Frederik; Gardosi, Jason; Gordijn, Sanne; Gulmezoglu, Metin; Heazell, Alexander E P; Korteweg, Fleurisca J; McClure, Elizabeth; Pattinson, Robert; Silver, Robert M; Smith, Gordon; Teoh, Zheyi; Tunçalp, Özge; Flenady, Vicki

    2016-08-15

    Despite the global burden of perinatal deaths, there is currently no single, globally-acceptable classification system for perinatal deaths. Instead, multiple, disparate systems are in use world-wide. This inconsistency hinders accurate estimates of causes of death and impedes effective prevention strategies. The World Health Organisation (WHO) is developing a globally-acceptable classification approach for perinatal deaths. To inform this work, we sought to establish a consensus on the important characteristics of such a system. A group of international experts in the classification of perinatal deaths were identified and invited to join an expert panel to develop a list of important characteristics of a quality global classification system for perinatal death. A Delphi consensus methodology was used to reach agreement. Three rounds of consultation were undertaken using a purpose built on-line survey. Round one sought suggested characteristics for subsequent scoring and selection in rounds two and three. The panel of experts agreed on a total of 17 important characteristics for a globally-acceptable perinatal death classification system. Of these, 10 relate to the structural design of the system and 7 relate to the functional aspects and use of the system. This study serves as formative work towards the development of a globally-acceptable approach for the classification of the causes of perinatal deaths. The list of functional and structural characteristics identified should be taken into consideration when designing and developing such a system.

  7. Action Research on Dropouts.

    ERIC Educational Resources Information Center

    Parkin, Michael

    Dropout classification systems must be standardized, updated, and simplified to accurately reflect conditions of student departures from school; current, nonstandardized systems allow gathered data to be biased and of poor quality. Improvements will inform administrators of the specific causes behind students' early withdrawals--whether students…

  8. Starmind: A Fuzzy Logic Knowledge-Based System for the Automated Classification of Stars in the MK System

    NASA Astrophysics Data System (ADS)

    Manteiga, M.; Carricajo, I.; Rodríguez, A.; Dafonte, C.; Arcay, B.

    2009-02-01

Astrophysics is evolving toward a more rational use of costly observational data by intelligently exploiting the large terrestrial and spatial astronomical databases. In this paper, we present a study showing the suitability of an expert system to perform the classification of stellar spectra in the Morgan and Keenan (MK) system. Using the formalism of artificial intelligence for the development of such a system, we propose a rule base that contains classification criteria and confidence grades, all integrated in an inference engine that emulates human reasoning by means of a hierarchical decision-rule tree that also considers the uncertainty factors associated with rules. Our main objective is to illustrate the formulation and development of such a system for an astrophysical classification problem. An extensive spectral database of MK standard spectra has been collected and used as a reference to determine the spectral indexes that are suitable for classification in the MK system. It is shown that by considering 30 spectral indexes and associating them with uncertainty factors, we can arrive at an accurate diagnosis of the MK type of a particular spectrum. The system was evaluated against the NOAO-INDO-US spectral catalog.

  9. Property Specification Patterns for intelligence building software

    NASA Astrophysics Data System (ADS)

    Chun, Seungsu

    2018-03-01

In this paper, through research on property specification patterns for modal mu (μ) logic, we present a single framework for pattern-based intelligence building software. In this study, Dwyer's property specification pattern classification is broken down by state (S) and action (A), and each is further subdivided into strong (A) and weak (E) variants. Based on this hierarchical pattern classification, the mu (μ)-logic analysis of the patterns was applied to the classification of examples used in an actual model checker. As a result, the approach not only enables a more accurate classification than existing classification systems, but the specified properties are also easier to create and understand.

  10. Classification of Computer-Aided Design-Computer-Aided Manufacturing Applications for the Reconstruction of Cranio-Maxillo-Facial Defects.

    PubMed

    Wauters, Lauri D J; Miguel-Moragas, Joan San; Mommaerts, Maurice Y

    2015-11-01

    To gain insight into the methodology of different computer-aided design-computer-aided manufacturing (CAD-CAM) applications for the reconstruction of cranio-maxillo-facial (CMF) defects. We reviewed and analyzed the available literature pertaining to CAD-CAM for use in CMF reconstruction. We proposed a classification system of the techniques of implant and cutting, drilling, and/or guiding template design and manufacturing. The system consisted of 4 classes (I-IV). These classes combine techniques used for both the implant and template to most accurately describe the methodology used. Our classification system can be widely applied. It should facilitate communication and immediate understanding of the methodology of CAD-CAM applications for the reconstruction of CMF defects.

  11. The Transporter Classification Database: recent advances.

    PubMed

    Saier, Milton H; Yen, Ming Ren; Noto, Keith; Tamang, Dorjee G; Elkan, Charles

    2009-01-01

    The Transporter Classification Database (TCDB), freely accessible at http://www.tcdb.org, is a relational database containing sequence, structural, functional and evolutionary information about transport systems from a variety of living organisms, based on the International Union of Biochemistry and Molecular Biology-approved transporter classification (TC) system. It is a curated repository for factual information compiled largely from published references. It uses a functional/phylogenetic system of classification, and currently encompasses about 5000 representative transporters and putative transporters in more than 500 families. We here describe novel software designed to support and extend the usefulness of TCDB. Our recent efforts render it more user friendly, incorporate machine learning to input novel data in a semiautomatic fashion, and allow analyses that are more accurate and less time consuming. The availability of these tools has resulted in recognition of distant phylogenetic relationships and tremendous expansion of the information available to TCDB users.

  12. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
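    A condensed sketch of the PSO-tuned SVM idea on simulated fault data (the feature-selection part of the paper is omitted); particle counts, search bounds, and the synthetic dataset are arbitrary choices, not the paper's setup.

```python
# Sketch: particle swarm optimisation over (log10 C, log10 gamma) of an RBF SVM,
# scored by cross-validated accuracy on a synthetic multi-class "fault" dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, n_classes=4,
                           n_informative=6, random_state=0)

def fitness(params):                     # params = [log10(C), log10(gamma)]
    clf = SVC(C=10 ** params[0], gamma=10 ** params[1])
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, dims, iters = 10, 2, 15
pos = rng.uniform(-3, 3, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma):", 10 ** gbest, "CV accuracy:", pbest_val.max().round(3))
```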

  13. Using molt cycles to categorize the age of tropical birds: an integrative new system

    Treesearch

    Jared D. Wolfe; Thomas B. Ryder; Peter Pyle

    2010-01-01

    Accurately differentiating age classes is essential for the long-term monitoring of resident New World tropical bird species. Molt and plumage criteria have long been used to accurately age temperate birds, but application of temperate age-classification models to the Neotropics has been hindered because annual life-cycle events of tropical birds do not always...

  14. A detailed procedure for the use of small-scale photography in land use classification

    NASA Technical Reports Server (NTRS)

    Vegas, P. L.

    1974-01-01

    A procedure developed to produce accurate land use maps from available high-altitude, small-scale photography in a cost-effective manner is presented. An alternative procedure, for use when the capability for updating the resultant land use map is not required, is also presented. The technical approach is discussed in detail, and personnel and equipment needs are analyzed. Accuracy percentages are listed, and costs are cited. The experiment land use classification categories are explained, and a proposed national land use classification system is recommended.

  15. Refining Landsat classification results using digital terrain data

    USGS Publications Warehouse

    Miller, Wayne A.; Shasby, Mark

    1982-01-01

Scientists at the U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center have recently completed two land-cover mapping projects in which digital terrain data were used to refine Landsat classification results. Digital terrain data were incorporated into the Landsat classification process using two different procedures that required developing decision criteria either subjectively or quantitatively. The subjective procedure was used in a vegetation mapping project in Arizona, and the quantitative procedure was used in a forest-fuels mapping project in Montana. By incorporating digital terrain data into the Landsat classification process, more spatially accurate land-cover maps were produced for both projects.

  16. Automated classification of articular cartilage surfaces based on surface texture.

    PubMed

    Stachowiak, G P; Stachowiak, G W; Podsiadlo, P

    2006-11-01

In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.

  17. Classification of reflected signals from cavitated tooth surfaces using an artificial intelligence technique incorporating a fiber optic displacement sensor

    NASA Astrophysics Data System (ADS)

    Rahman, Husna Abdul; Harun, Sulaiman Wadi; Arof, Hamzah; Irawati, Ninik; Musirin, Ismail; Ibrahim, Fatimah; Ahmad, Harith

    2014-05-01

    An enhanced dental cavity diameter measurement mechanism using an intensity-modulated fiber optic displacement sensor (FODS) scanning and imaging system, fuzzy logic as well as a single-layer perceptron (SLP) neural network, is presented. The SLP network was employed for the classification of the reflected signals, which were obtained from the surfaces of teeth samples and captured using FODS. Two features were used for the classification of the reflected signals with one of them being the output of a fuzzy logic. The test results showed that the combined fuzzy logic and SLP network methodology contributed to a 100% classification accuracy of the network. The high-classification accuracy significantly demonstrates the suitability of the proposed features and classification using SLP networks for classifying the reflected signals from teeth surfaces, enabling the sensor to accurately measure small diameters of tooth cavity of up to 0.6 mm. The method remains simple enough to allow its easy integration in existing dental restoration support systems.

  18. VizieR Online Data Catalog: LAMOST-Kepler MKCLASS spectral classification (Gray+, 2016)

    NASA Astrophysics Data System (ADS)

    Gray, R. O.; Corbally, C. J.; De Cat, P.; Fu, J. N.; Ren, A. B.; Shi, J. R.; Luo, A. L.; Zhang, H. T.; Wu, Y.; Cao, Z.; Li, G.; Zhang, Y.; Hou, Y.; Wang, Y.

    2016-07-01

    The data for the LAMOST-Kepler project are supplied by the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST, also known as the Guo Shou Jing Telescope). This unique astronomical instrument is located at the Xinglong observatory in China, and combines a large aperture (4 m) telescope with a 5° circular field of view (Wang et al. 1996ApOpt..35.5155W). Our role in this project is to supply accurate two-dimensional spectral types for the observed targets. The large number of spectra obtained for this project (101086) makes traditional visual classification techniques impractical, so we have utilized the MKCLASS code to perform these classifications. The MKCLASS code (Gray & Corbally 2014AJ....147...80G, v1.07 http://www.appstate.edu/~grayro/mkclass/), an expert system designed to classify blue-violet spectra on the MK Classification system, was employed to produce the spectral classifications reported in this paper. MKCLASS was designed to reproduce the steps skilled human classifiers employ in the classification process. (2 data files).

  19. Classification of reflected signals from cavitated tooth surfaces using an artificial intelligence technique incorporating a fiber optic displacement sensor.

    PubMed

    Rahman, Husna Abdul; Harun, Sulaiman Wadi; Arof, Hamzah; Irawati, Ninik; Musirin, Ismail; Ibrahim, Fatimah; Ahmad, Harith

    2014-05-01

    An enhanced dental cavity diameter measurement mechanism using an intensity-modulated fiber optic displacement sensor (FODS) scanning and imaging system, fuzzy logic as well as a single-layer perceptron (SLP) neural network, is presented. The SLP network was employed for the classification of the reflected signals, which were obtained from the surfaces of teeth samples and captured using FODS. Two features were used for the classification of the reflected signals with one of them being the output of a fuzzy logic. The test results showed that the combined fuzzy logic and SLP network methodology contributed to a 100% classification accuracy of the network. The high-classification accuracy significantly demonstrates the suitability of the proposed features and classification using SLP networks for classifying the reflected signals from teeth surfaces, enabling the sensor to accurately measure small diameters of tooth cavity of up to 0.6 mm. The method remains simple enough to allow its easy integration in existing dental restoration support systems.

  20. Classification of Children Intelligence with Fuzzy Logic Method

    NASA Astrophysics Data System (ADS)

    Syahminan; ika Hidayati, Permata

    2018-04-01

A child's intelligence is an important thing for parents to know early on. Typing can be done by grouping the dominant characteristics of each type of intelligence. To make it easier for parents to determine the type of a child's intelligence and how to respond to it, a classification system was created that groups children's intelligence using the fuzzy logic method to determine the degree of each intelligence type. From the analysis, we conclude that a children's intelligence classification system using the fuzzy logic method to determine the type of a child's intelligence can be implemented in a way that is easier and yields more accurate conclusions than manual tests.

  1. Centrifuge: rapid and sensitive classification of metagenomic sequences

    PubMed Central

    Song, Li; Breitwieser, Florian P.

    2016-01-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
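    The BWT/FM-index backward search that underlies Centrifuge's index can be illustrated in miniature as below; a real implementation adds suffix-array sampling, rank structures, compression of similar genomes, and a taxonomy mapping, none of which is shown here.

```python
# Tiny illustration of BWT construction and FM-index backward search (counting
# pattern occurrences). This is a naive O(n) toy, not Centrifuge's implementation.
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(bwt_str, pattern):
    """Count occurrences of pattern via backward search on the BWT."""
    first_col = sorted(bwt_str)
    C = {c: first_col.index(c) for c in set(bwt_str)}   # start of each symbol block
    occ = lambda c, i: bwt_str[:i].count(c)             # rank query (naive)
    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):
        lo, hi = C[c] + occ(c, lo), C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

genome = "ACGTACGTGACG"
print(fm_count(bwt(genome), "ACG"))    # -> 3 occurrences
```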

  2. Classification of holter registers by dynamic clustering using multi-dimensional particle swarm optimization.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef

    2010-01-01

    In this paper, we address dynamic clustering in high dimensional data or feature spaces as an optimization problem where multi-dimensional particle swarm optimization (MD PSO) is used to find out the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database where the beats of each Holter register have been manually labeled by cardiologists. The selection of the right master key-beats is the key factor for achieving a highly accurate classification, and the proposed systematic approach produced results that were consistent with the manual labels with 99.5% average accuracy, which basically shows the efficiency of the system.

  3. A proposal for the annotation of recurrent colorectal cancer: the 'Sheffield classification'.

    PubMed

    Majeed, A W; Shorthouse, A J; Blakeborough, A; Bird, N C

    2011-11-01

    Current classification systems of large bowel cancer only refer to metastatic disease as M0, M1 or Mx. Recurrent colorectal cancer primarily occurs in the liver, lungs, nodes or peritoneum. The management of each of these sites of recurrence has made significant advances and each is a subspecialty in its own right. The aim of this paper was to devise a classification system which accurately describes the site and extent of metastatic spread. An amendment of the current system is proposed in which liver, lung and peritoneal metastases are annotated by 'Liv 0,1', 'Pul 0,1' and 'Per 0,1' in describing the primary presentation. These are then subclassified, taking into account the chronology, size, number and geographical distribution of metastatic disease or locoregional recurrence and its K-Ras status. This discussion document proposes a classification system which is logical and simple to use. We plan to validate it prospectively. © 2011 The Authors. Colorectal Disease © 2011 The Association of Coloproctology of Great Britain and Ireland.

  4. Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.

    PubMed

    Oh, Sang-Il; Kang, Hang-Bong

    2017-01-22

    To understand driving environments effectively, sensor-based intelligent vehicle systems must achieve accurate detection and classification of objects. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers applied to 3D point clouds and image data, each using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs that use more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for the charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracted approximately 500 proposals from a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset.
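    A minimal sketch of decision-level fusion is given below, assuming each sensor's unary classifier outputs per-class probabilities that are simply averaged before the final decision; the paper's learned fusion rule may differ.

    ```python
    # Minimal decision-level fusion sketch: average the per-class scores produced
    # by two independent unary classifiers (e.g., one trained on camera features,
    # one on LiDAR features) and take the argmax. Simple averaging is an
    # illustrative choice, not necessarily the paper's fusion rule.
    import numpy as np

    CLASSES = ["car", "pedestrian", "cyclist"]

    def fuse(p_camera: np.ndarray, p_lidar: np.ndarray) -> str:
        """p_camera, p_lidar: per-class probabilities for one object candidate."""
        fused = (p_camera + p_lidar) / 2.0
        return CLASSES[int(np.argmax(fused))]

    print(fuse(np.array([0.7, 0.2, 0.1]), np.array([0.4, 0.5, 0.1])))  # -> "car"
    ```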

  5. Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems

    PubMed Central

    Oh, Sang-Il; Kang, Hang-Bong

    2017-01-01

    To understand driving environments effectively, sensor-based intelligent vehicle systems must achieve accurate detection and classification of objects. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers applied to 3D point clouds and image data, each using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs that use more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for the charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracted approximately 500 proposals from a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset. PMID:28117742

  6. New decision support tool for acute lymphoblastic leukemia classification

    NASA Astrophysics Data System (ADS)

    Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.

    2012-03-01

    In this paper, we develop a new decision support tool to improve treatment intensity choice in childhood acute lymphoblastic leukemia (ALL). The developed system includes different methods to accurately measure cell properties in microscope blood film images. The blood images undergo a series of pre-processing steps, including color correlation and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
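    The K-means segmentation step can be sketched as below, assuming a pre-processed blood-smear image is available on disk; the file name, the choice of three clusters, and the rule that nuclei form the darkest cluster are illustrative assumptions.

    ```python
    # Sketch of a K-means clustering step for isolating cell nuclei from a
    # pre-processed blood-smear image. The file name and the choice of k are
    # illustrative assumptions; stained nuclei are assumed to be the darkest cluster.
    import numpy as np
    from skimage import io
    from sklearn.cluster import KMeans

    image = io.imread("blood_smear.png")[:, :, :3]   # hypothetical input image
    pixels = image.reshape(-1, 3).astype(float)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    # Assume the cluster with the lowest summed channel intensity corresponds to nuclei.
    nuclei_cluster = np.argmin(kmeans.cluster_centers_.sum(axis=1))
    nuclei_mask = (kmeans.labels_ == nuclei_cluster).reshape(image.shape[:2])
    ```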

  7. Integration of Chinese medicine with Western medicine could lead to future medicine: molecular module medicine.

    PubMed

    Zhang, Chi; Zhang, Ge; Chen, Ke-ji; Lu, Ai-ping

    2016-04-01

    The development of an effective classification method for human health conditions is essential for precise diagnosis and delivery of tailored therapy to individuals. Contemporary disease classification systems have properties that limit their information content and usability. Chinese medicine pattern classification has been incorporated into disease classification, and this integrated classification method has become more precise because of the increased understanding of the underlying molecular mechanisms. However, we are still facing the complexity of diseases and patterns in the classification of health conditions. With continuing advances in omics methodologies and instrumentation, we propose a new classification approach: molecular module classification, which applies molecular modules to classifying human health status. The initiative would precisely define health status, provide accurate diagnoses, optimize therapeutics and improve new drug discovery strategies. In the future, a new medicine based on this classification, molecular module medicine, could move beyond current disease diagnosis and disease pattern classification, redefining health statuses and reshaping clinical practice.

  8. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
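    One common post-processing approach of this kind is to fit a calibration map on held-out data; the sketch below uses scikit-learn's CalibratedClassifierCV with isotonic regression on synthetic data and is a generic illustration, not the specific method of the cited work.

    ```python
    # Post-process classifier scores into better probabilities by fitting a
    # calibration map (isotonic regression here) with cross-validation. This is a
    # generic illustration on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC
    from sklearn.calibration import CalibratedClassifierCV

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    calibrated = CalibratedClassifierCV(LinearSVC(dual=False), method="isotonic", cv=5)
    calibrated.fit(X_tr, y_tr)
    print(calibrated.predict_proba(X_te[:3]))  # calibrated class probabilities
    ```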

  9. Automatic analysis for neuron by confocal laser scanning microscope

    NASA Astrophysics Data System (ADS)

    Satou, Kouhei; Aoki, Yoshimitsu; Mataga, Nobuko; Hensh, Takao K.; Taki, Katuhiko

    2005-12-01

    The aim of this study is to develop a system that recognizes both the macro- and microscopic configurations of nerve cells and automatically performs the necessary 3-D measurements and functional classification of spines. The acquisition of 3-D images of cranial nerves has been enabled by the use of a confocal laser scanning microscope, although the highly accurate 3-D measurements of the microscopic structures of cranial nerves and their classification based on their configurations have not yet been accomplished. In this study, in order to obtain highly accurate measurements of the microscopic structures of cranial nerves, existing positions of spines were predicted by the 2-D image processing of tomographic images. Next, based on the positions that were predicted on the 2-D images, the positions and configurations of the spines were determined more accurately by 3-D image processing of the volume data. We report the successful construction of an automatic analysis system that uses a coarse-to-fine technique to analyze the microscopic structures of cranial nerves with high speed and accuracy by combining 2-D and 3-D image analyses.

  10. User intent prediction with a scaled conjugate gradient trained artificial neural network for lower limb amputees using a powered prosthesis.

    PubMed

    Woodward, Richard B; Spanias, John A; Hargrove, Levi J

    2016-08-01

    Powered lower limb prostheses have the ability to provide greater mobility for amputee patients. Such prostheses often have pre-programmed modes which can allow activities such as climbing stairs and descending ramps, something which many amputees struggle with when using non-powered limbs. Previous literature has shown how pattern classification can allow seamless transitions between modes with a high accuracy and without any user interaction. Although accurate, training and testing each subject with their own dependent data is time consuming. By using subject independent datasets, whereby a unique subject is tested against a pooled dataset of other subjects, we believe subject training time can be reduced while still achieving an accurate classification. We present here an intent recognition system using an artificial neural network (ANN) with a scaled conjugate gradient learning algorithm to classify gait intention with user-dependent and independent datasets for six unilateral lower limb amputees. We compare these results against a linear discriminant analysis (LDA) classifier. The ANN was found to have significantly lower classification error (P < 0.05) than LDA with all user-dependent step-types, as well as transitional steps for user-independent datasets. Both types of classifiers are capable of making fast decisions; 1.29 and 2.83 ms for the LDA and ANN respectively. These results suggest that ANNs can provide suitable and accurate offline classification in prosthesis gait prediction.
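    A sketch of the ANN-versus-LDA comparison is given below; scikit-learn offers no scaled-conjugate-gradient solver, so the MLP uses its default optimizer, and the synthetic data merely stand in for the amputees' sensor features.

    ```python
    # Sketch of an ANN-vs-LDA comparison on gait-intent features. scikit-learn has
    # no scaled-conjugate-gradient solver, so the MLP below uses its default
    # optimizer; the data are synthetic stand-ins for the prosthesis sensor features.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1500, n_features=40, n_informative=20,
                               n_classes=5, random_state=0)  # 5 hypothetical locomotion modes

    lda = LinearDiscriminantAnalysis()
    ann = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)

    print("LDA accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    print("ANN accuracy:", cross_val_score(ann, X, y, cv=5).mean())
    ```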

  11. Classification of Radiological Changes in Burst Fractures

    PubMed Central

    Şentürk, Salim; Öğrenci, Ahmet; Gürçay, Ahmet Gürhan; Abdioğlu, Ahmet Atilla; Yaman, Onur; Özer, Ali Fahir

    2018-01-01

    AIM: Burst fractures can present with different radiological appearances after high-energy trauma. We aimed to simplify the radiological staging of burst fractures. METHODS: Eighty patients who sustained spinal trauma with a burst fracture were evaluated with respect to age, sex, fracture segment, neurological deficit, secondary organ injury and the radiological changes that occurred. RESULTS: We developed a new radiological classification of burst fractures. CONCLUSIONS: According to this classification system, secondary organ injury and neurological deficit can be indicators of the energy exposure. If the energy is high, the clinical status will be worse. Thus, we can get an idea about the likelihood of neurological deficit and secondary organ injuries. This classification simplifies the radiological staging of burst fractures and gives a very accurate idea about the neurological condition. PMID:29531604

  12. Cognitive-motivational deficits in ADHD: development of a classification system.

    PubMed

    Gupta, Rashmi; Kar, Bhoomika R; Srinivasan, Narayanan

    2011-01-01

    The classification systems developed so far to detect attention deficit/hyperactivity disorder (ADHD) do not have high sensitivity and specificity. We have developed a classification system based on several neuropsychological tests that measure cognitive-motivational functions that are specifically impaired in ADHD children. A total of 240 children (120 ADHD children and 120 healthy controls) in the age range of 6-9 years and 32 Oppositional Defiant Disorder (ODD) children (aged 9 years) participated in the study. Stop-Signal, Task-Switching, Attentional Network, and Choice Delay tests were administered to all the participants. Receiver operating characteristic (ROC) analysis indicated that the percentage choice of long-delay reward best separated the ADHD children from healthy controls. Single parameters were not helpful in making a differential classification of ADHD with ODD. Multinomial logistic regression (MLR) was performed with multiple parameters (data fusion), which produced improved overall classification accuracy. A combination of stop-signal reaction time, post-error slowing, mean delay, switch cost, and percentage choice of long-delay reward produced an overall classification accuracy of 97.8%; with internal validation, the overall accuracy was 92.2%. Combining parameters from different tests of control functions not only enabled us to accurately classify ADHD children from healthy controls but also to make a differential classification with ODD. These results have implications for theories of ADHD.
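    The multi-parameter fusion step can be sketched as a multinomial logistic regression over the five reported measures; the data below are synthetic placeholders for the three groups (controls, ADHD, ODD).

    ```python
    # Sketch of multi-parameter multinomial logistic regression: combine the five
    # reported measures (stop-signal RT, post-error slowing, mean delay, switch
    # cost, % choice of long-delay reward) to separate ADHD, ODD and control
    # groups. The data below are synthetic placeholders, not study data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_per_group = 80
    X, y = [], []
    for label, offset in zip((0, 1, 2), (0.0, 1.0, 0.5)):  # 0=control, 1=ADHD, 2=ODD
        X.append(rng.normal(loc=offset, scale=1.0, size=(n_per_group, 5)))  # 5 test parameters
        y.extend([label] * n_per_group)
    X = np.vstack(X)

    clf = LogisticRegression(max_iter=1000)  # multinomial for the 3-class problem
    print("cross-validated accuracy:", cross_val_score(clf, X, np.array(y), cv=5).mean())
    ```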

  13. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes the results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed, whereas defects in critical areas, such as clear areas, can impact yield dramatically and require closer review. With decreasing integrated circuit dimensions, thousands of mask defects, or even more, are typically detected during an inspection. Traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and Anchor pattern centric image processing technology. The system can handle this large number of defects and deliver quick and accurate classification results. Our experiments include Die to Die and Single Die modes, in which the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases; the rates of missed classifications were 0.25% in Die to Die mode and 0.24% in Single Die mode. Such a missing rate is encouraging and acceptable for application on a production line. The results can be exported and reloaded into the inspection machine for further review, which helps users validate uncertain defects with clear, magnified images when the captured images do not provide enough information to make a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, scoring functions and guidance for wafer-level defect inspection.

  14. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error when identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. For improving the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and applies stratified k-fold cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of the experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects; therefore, it can be used as a significant tool in clinical practice.
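    A simplified sketch of the DWT-PCA-SVM pipeline is shown below, with PyWavelets and scikit-learn standing in for the paper's fast DWT and LS-SVM (an ordinary RBF-kernel SVC is used instead of LS-SVM), and with images and labels assumed to be loaded elsewhere.

    ```python
    # Sketch of a DWT -> PCA -> RBF-SVM pipeline on brain MR slices. PyWavelets and
    # scikit-learn stand in for the paper's fast DWT and LS-SVM (plain SVC with an
    # RBF kernel is used here); images and labels are assumed to be loaded by the caller.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    def dwt_features(image: np.ndarray, level: int = 2) -> np.ndarray:
        """Flatten the approximation coefficients of a 2-D wavelet decomposition."""
        coeffs = pywt.wavedec2(image, "haar", level=level)
        return coeffs[0].ravel()

    # images: list of equally sized 2-D arrays, labels: 0 = normal, 1 = abnormal
    def classify(images, labels):
        X = np.array([dwt_features(img) for img in images])
        model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0, gamma="scale"))
        return cross_val_score(model, X, labels, cv=5).mean()
    ```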

  15. Development of municipal solid waste classification in Korea based on fossil carbon fraction.

    PubMed

    Lee, Jeongwoo; Kang, Seongmin; Kim, Seungjin; Kim, Ki-Hyun; Jeon, Eui-Chan

    2015-10-01

    Environmental problems and climate change arising from waste incineration are taken quite seriously worldwide. In Korea, waste disposal methods are largely classified into landfill, incineration, recycling, etc., and the amount of incinerated waste has risen by 24.5% since 2002. In estimating CO₂ emissions from waste incineration, the fossil carbon fraction (FCF) is the main factor under the IPCC methodology. FCF differs depending on the characteristics of waste in each country, and a wide range of default values is proposed by the IPCC. This study examined the existing IPCC classifications and the Korean waste classification system on the basis of FCF, with the aim of accurate greenhouse gas emission estimation for waste incineration. Waste characteristics suitable for sorting were classified according to FCF and form; those sorted by FCF were paper, textiles, rubber and leather. Paper was classified into pure and processed paper; textiles into cotton and synthetic fibers; and rubber and leather into natural and artificial. FCF was analyzed by collecting representative samples from each classification group and applying the 14C method with AMS equipment, and the measured values were compared with the default values proposed by the IPCC. For garden and park waste and plastics, the differences were within the range of the IPCC default values or negligible. However, coated paper, synthetic textiles, natural rubber, synthetic rubber, artificial leather and other wastes showed differences of over 10% in FCF. The IPCC scheme comprises roughly nine qualitative waste types, so using the more finely classified waste characteristics of this study instead of the existing IPCC classification can make a substantial difference in emission estimates.
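    As context for why FCF matters, the sketch below follows the general form of the IPCC waste-incineration estimate, in which fossil CO₂ scales linearly with FCF; all coefficient values are illustrative placeholders rather than measured Korean data or IPCC defaults.

    ```python
    # Simplified sketch of the fossil-CO2 estimate that FCF feeds into, following
    # the general form of the IPCC waste-incineration equation:
    #   CO2 = mass_burned * dry_matter * carbon_fraction * FCF * oxidation_factor * 44/12
    # All coefficient values below are illustrative placeholders, not measured
    # Korean values or IPCC defaults.
    def fossil_co2_tonnes(mass_wet_t, dry_matter, carbon_fraction, fcf, oxidation=1.0):
        return mass_wet_t * dry_matter * carbon_fraction * fcf * oxidation * (44.0 / 12.0)

    waste_fractions = {
        # waste type: (wet tonnes, dry matter, carbon fraction of dry matter, FCF)
        "processed paper": (1000, 0.90, 0.46, 0.15),
        "synthetic textiles": (200, 0.80, 0.50, 0.95),
        "plastics": (500, 1.00, 0.75, 1.00),
    }
    total = sum(fossil_co2_tonnes(m, dm, cf, fcf) for m, dm, cf, fcf in waste_fractions.values())
    print(f"estimated fossil CO2: {total:.0f} t")
    ```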

  16. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, and the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
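    The two classifiers compared in the study can be sketched as follows, assuming windowed accelerometer features and video-derived behavior labels are computed elsewhere; the hyperparameters are illustrative.

    ```python
    # Sketch of the two supervised classifiers compared in the study, applied to
    # windowed accelerometer features (e.g., per-window means, variances and
    # dominant frequencies) with behavior labels from the video record.
    # Hyperparameters here are illustrative choices.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def compare_classifiers(X, y):
        """X: window features (n_windows, n_features); y: behavior labels."""
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=5)
        return {
            "random_forest": cross_val_score(rf, X, y, cv=5).mean(),
            "knn": cross_val_score(knn, X, y, cv=5).mean(),
        }
    ```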

  17. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, and the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  18. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, and the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  19. Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.

    PubMed

    Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet

    2018-05-01

    Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend toward miniaturized point-of-need blood analysis tools that accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we use reference holographic images of single white blood cells in suspension to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. FOCIS: A forest classification and inventory system using LANDSAT and digital terrain data

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Franklin, J.; Woodcook, C. E.; Logan, T. L.

    1981-01-01

    Accurate, cost-effective stratification of forest vegetation and timber inventory is the primary goal of the Forest Classification and Inventory System (FOCIS). Conventional timber stratification using photointerpretation can be time-consuming, costly, and inconsistent from analyst to analyst. FOCIS was designed to overcome these problems by using machine processing techniques to extract and process tonal, textural, and terrain information from registered LANDSAT multispectral and digital terrain data. Comparison of samples from timber strata identified by FOCIS and by conventional procedures showed that both have about the same potential to reduce the variance of timber volume estimates relative to simple random sampling.

  1. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    PubMed

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Adaptive sleep-wake discrimination for wearable devices.

    PubMed

    Karlen, Walter; Floreano, Dario

    2011-04-01

    Sleep/wake classification systems that rely on physiological signals suffer from intersubject differences that make accurate classification with a single, subject-independent model difficult. To overcome the limitations of intersubject variability, we suggest a novel online adaptation technique that updates the sleep/wake classifier in real time. The objective of the present study was to evaluate the performance of a newly developed adaptive classification algorithm that was embedded on a wearable sleep/wake classification system called SleePic. The algorithm processed ECG and respiratory effort signals for the classification task and applied behavioral measurements (obtained from accelerometer and press-button data) for the automatic adaptation task. When trained as a subject-independent classifier algorithm, the SleePic device was only able to correctly classify 74.94 ± 6.76% of the human-rated sleep/wake data. By using the suggested automatic adaptation method, the mean classification accuracy could be significantly improved to 92.98 ± 3.19%. A subject-independent classifier based on activity data only showed a comparable accuracy of 90.44 ± 3.57%. We demonstrated that subject-independent models used for online sleep-wake classification can successfully be adapted to previously unseen subjects without the intervention of human experts or off-line calibration.
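    The online-adaptation idea can be sketched with an incrementally updated linear classifier: train a subject-independent model on pooled data, then update it whenever a behavioral label arrives. SGDClassifier's partial_fit is used here as a stand-in for the paper's adaptation scheme, and feature extraction from ECG and respiratory effort is assumed to be done elsewhere.

    ```python
    # Sketch of online adaptation: a subject-independent classifier is updated
    # incrementally whenever behavioral evidence (e.g., press-button or
    # accelerometer-derived wake labels) becomes available. Feature extraction
    # from ECG and respiratory effort is assumed to happen elsewhere.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    classes = np.array([0, 1])                 # 0 = sleep, 1 = wake
    model = SGDClassifier(random_state=0)

    def bootstrap(X_pooled, y_pooled):
        """Train the initial subject-independent classifier on pooled data."""
        model.partial_fit(X_pooled, y_pooled, classes=classes)

    def adapt(x_epoch, behavioral_label):
        """Update the classifier online with one newly labeled epoch of features."""
        model.partial_fit(x_epoch.reshape(1, -1), [behavioral_label])
    ```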

  3. [CT morphometry for calcaneal fractures and comparison of the Zwipp and Sanders classifications].

    PubMed

    Andermahr, J; Jesch, A B; Helling, H J; Jubel, A; Fischbach, R; Rehm, K E

    2002-01-01

    The aim of the study is to correlate the CT-morphological changes of the fractured calcaneus and the Zwipp and Sanders classifications with the clinical outcome. In a retrospective clinical study, the preoperative CT scans of 75 calcaneal fractures were analysed. The morphometry of the fractures was determined by measuring height, length, diameter and the calcaneo-cuboidal angle in comparison with the intact contralateral side. At a mean of 38 months after trauma, 44 patients were clinically followed up. The data from CT image morphometry were correlated with the severity of the fracture as classified by Zwipp or Sanders as well as with the functional outcome. There was a good correlation between the fracture classifications and the morphometric data. Both fracture classification systems are predictive of functional outcome. The more exacting and accurate Zwipp classification considers the most important cofactors, such as involvement of the calcaneo-cuboidal joint, soft tissue damage and additional fractures. The Sanders classification is easier to use in clinical routine. The Zwipp classification includes more relevant cofactors (fracture of the calcaneo-cuboidal joint, soft tissue swelling, etc.) and shows a higher correlation with the choice of therapy. Both classification systems have prognostic value concerning the clinical outcome.

  4. IRIS COLOUR CLASSIFICATION SCALES – THEN AND NOW

    PubMed Central

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to the research environment. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up to the present there has been no generally accepted iris colour classification scale. PMID:27373112

  5. IRIS COLOUR CLASSIFICATION SCALES--THEN AND NOW.

    PubMed

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to the research environment. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up to the present there has been no generally accepted iris colour classification scale.

  6. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture.

    PubMed

    Zhong, Yuanhong; Gao, Junyuan; Lei, Qilun; Zhou, Yao

    2018-05-09

    Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, and the classification method and fine counting based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications.
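    The two-stage pipeline can be sketched as below: a detector proposes insect regions and an SVM assigns the species from simple global features (a colour histogram here). The detection stage is represented only by a list of boxes assumed to come from the YOLO detector, which is outside the scope of this sketch.

    ```python
    # Sketch of the two-stage idea: boxes from a detector (assumed to be produced
    # by the YOLO stage elsewhere) are cropped and classified by an SVM using a
    # simple global feature, here a normalized colour histogram.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    SPECIES = ["bee", "fly", "mosquito", "moth", "chafer", "fruit fly"]

    def colour_histogram(crop: np.ndarray, bins: int = 8) -> np.ndarray:
        hist = cv2.calcHist([crop], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        return cv2.normalize(hist, hist).ravel()

    def classify_detections(frame, detections, svm: SVC):
        """detections: list of (x, y, w, h) boxes from the detection stage."""
        labels = []
        for x, y, w, h in detections:
            crop = frame[y:y + h, x:x + w]
            labels.append(SPECIES[int(svm.predict([colour_histogram(crop)])[0])])
        return labels
    ```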

  7. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture

    PubMed Central

    Zhong, Yuanhong; Gao, Junyuan; Lei, Qilun; Zhou, Yao

    2018-01-01

    Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, and the classification method and fine counting based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications. PMID:29747429

  8. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.

  9. The development of a classification system for maternity models of care.

    PubMed

    Donnolley, Natasha; Butler-Henderson, Kerryn; Chapman, Michael; Sullivan, Elizabeth

    2016-08-01

    A lack of standard terminology or means to identify and define models of maternity care in Australia has prevented accurate evaluations of outcomes for mothers and babies in different models of maternity care. As part of the Commonwealth-funded National Maternity Data Development Project, a classification system was developed utilising a data set specification that defines characteristics of models of maternity care. The Maternity Care Classification System, or MaCCS, was developed using a participatory action research design that built upon the published and grey literature. The study identified the characteristics that differentiate models of care and classified models into eleven Major Model Categories. The MaCCS will enable individual health services, local health districts (networks), and jurisdictional and national health authorities to make better informed decisions for planning, policy development and delivery of maternity services in Australia. © The Author(s) 2016.

  10. Speech emotion recognition methods: A literature review

    NASA Astrophysics Data System (ADS)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, research attention on emotional speech signals in human-machine interfaces has grown owing to the availability of high computational capability. Many systems have been proposed in the literature to identify the emotional state through speech. The selection of suitable feature sets, the design of proper classification methods and the preparation of an appropriate dataset are the main key issues for speech emotion recognition systems. This paper critically analyses the currently available approaches to speech emotion recognition with respect to three evaluation parameters (feature set, feature classification and accuracy). In addition, this paper evaluates the performance and limitations of the available methods and highlights promising directions for improving speech emotion recognition systems.

  11. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic, direct-contact measurement system.
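    A minimal MUSIC pseudospectrum sketch for recovering a dominant periodic component from a short noisy record is given below; the window length, subspace dimension, sampling rate and synthetic test signal are illustrative choices, not the settings used in the paper.

    ```python
    # Minimal MUSIC pseudospectrum sketch for estimating a dominant periodic
    # component (e.g., the cardiac rate) from a short, noisy record. The parameters
    # and the synthetic test signal below are illustrative, not the paper's setup.
    import numpy as np

    def music_spectrum(x, m=40, p=2, n_freqs=2000):
        """x: real signal; m: correlation-matrix order; p: signal-subspace dimension."""
        x = x - x.mean()
        segments = np.array([x[i:i + m] for i in range(len(x) - m)])
        R = segments.T @ segments / len(segments)            # sample autocorrelation matrix
        eigvals, eigvecs = np.linalg.eigh(R)
        noise = eigvecs[:, :m - p]                           # eigenvectors of smallest eigenvalues
        freqs = np.linspace(0.0, 0.5, n_freqs)               # cycles per sample
        k = np.arange(m)
        steering = np.exp(-2j * np.pi * np.outer(freqs, k))  # (n_freqs, m) steering vectors
        proj = steering @ noise                              # projection onto noise subspace
        pseudo = 1.0 / np.sum(np.abs(proj) ** 2, axis=1)
        return freqs, pseudo

    # Example: 20 s record sampled at 5 Hz, with a ~72 bpm (1.2 Hz) cardiac component.
    fs = 5.0
    t = np.arange(0, 20, 1 / fs)
    signal = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(len(t))
    freqs, pseudo = music_spectrum(signal)
    print("estimated heart rate (bpm):", freqs[np.argmax(pseudo)] * fs * 60)
    ```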

  12. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne laser scanning with light detection and ranging (LiDAR) technology is a fast and accurate 3D point data acquisition technique. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have also been used for land cover classification applications. Range and intensity (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud or gaps that can affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints, and have also been investigated for generating LiDAR range and intensity image data for land cover classification. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image, with gaps filled from the classes of the nearest neighbours. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five land cover classes can be distinguished in this area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
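    The proposed order of operations (classify points first, rasterize afterwards, fill gaps from the nearest classified point) can be sketched with scipy's nearest-neighbour gridding; the inputs are assumed to be per-point coordinates and integer class labels produced by an earlier classification step.

    ```python
    # Sketch of the rasterization step: given already-classified LiDAR points, build
    # a label raster and fill empty cells from the nearest classified point.
    # scipy's nearest-neighbour gridding stands in for the paper's gap-filling step.
    import numpy as np
    from scipy.interpolate import griddata

    def labels_to_raster(x, y, class_labels, cell_size=1.0):
        """x, y: point coordinates; class_labels: integer land-cover class per point."""
        xi = np.arange(x.min(), x.max(), cell_size)
        yi = np.arange(y.min(), y.max(), cell_size)
        grid_x, grid_y = np.meshgrid(xi, yi)
        # 'nearest' assigns every raster cell the class of its closest classified
        # point, filling gaps between LiDAR footprints without blending class codes.
        return griddata((x, y), class_labels, (grid_x, grid_y), method="nearest")
    ```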

  13. Holographic Location of Distant Points (PREPRINT)

    DTIC Science & Technology

    2010-06-01

    The nonimaging point location systems discussed earlier (1, 2) have significant advantages in several respects. This paper shows how to use holograms to construct a flat, solid, small, accurate nonimaging point location system.

  14. Automatic grade classification of Barretts Esophagus through feature enhancement

    NASA Astrophysics Data System (ADS)

    Ghatwary, Noha; Ahmed, Amr; Ye, Xujiong; Jalab, Hamid

    2017-03-01

    Barrett's Esophagus (BE) is a precancerous condition that affects the esophagus and carries the risk of developing into esophageal adenocarcinoma. BE is the process by which metaplastic intestinal epithelium develops and replaces the normal cells of the esophageal lining. The detection of BE is considered difficult due to its appearance and properties, and diagnosis is usually done through both endoscopy and biopsy. Recently, Computer Aided Diagnosis systems have been developed to support physicians' opinions when detection or classification of different types of diseases proves difficult. In this paper, an automatic classification of the Barrett's Esophagus condition is introduced. The presented method enhances the internal features of a Confocal Laser Endomicroscopy (CLE) image by utilizing a proposed enhancement filter. This filter depends on fractional differentiation and integration, which improve the features in the discrete wavelet transform of an image. Various features are then extracted from each enhanced image at different levels for the multi-classification process. Our approach is validated on a dataset consisting of 32 patients with 262 images of different histology grades. The experimental results demonstrate the efficiency of the proposed technique. Our method helps clinicians achieve more accurate classification. This potentially helps to reduce the number of biopsies needed for diagnosis, facilitates regular monitoring of the treatment and development of a patient's case, and can help train doctors with the new endoscopy technology. Accurate automatic classification is particularly important for the Intestinal Metaplasia (IM) type, which could develop into deadly cancer. Hence, this work contributes to automatic classification that facilitates early intervention and treatment and decreases the number of biopsy samples needed.

  15. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches are explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method, and may support a more consistent and reliable approach to costing nursing services.

  16. Computerized decision support system for mass identification in breast using digital mammogram: a study on GA-based neuro-fuzzy approaches.

    PubMed

    Das, Arpita; Bhattacharya, Mahua

    2011-01-01

    In the present work, the authors have developed a treatment planning system implementing genetic-algorithm-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in breast digital mammograms. It is obvious that a complicated structure invites the problems of overlearning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used to search for effective input feature vectors, combined with an adaptive neuro-fuzzy model for the final classification of the different boundaries of tumor masses. The study involves 200 digitized mammograms from the MIAS and other databases and has shown an 86% correct classification rate.
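    The GA-based search for effective input feature vectors can be sketched as below; a plain k-nearest-neighbour classifier is used as a stand-in for the adaptive neuro-fuzzy model, and the GA operators (two-way tournament selection, uniform crossover, bit-flip mutation, elitism) are generic illustrative choices rather than the authors' exact configuration.

    ```python
    # Sketch of GA-based feature-subset search, scored by cross-validated accuracy
    # of a downstream classifier. A k-NN classifier stands in for the paper's
    # adaptive neuro-fuzzy model; the GA operators are generic illustrative choices.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def ga_feature_selection(X, y, pop_size=30, generations=25, p_mut=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        pop = rng.integers(0, 2, size=(pop_size, n_features))  # binary feature masks

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[scores.argmax()].copy()]             # elitism: keep the best mask
            while len(new_pop) < pop_size:
                a, b = rng.choice(pop_size, 2, replace=False)   # two-way tournament selection
                parent1 = pop[a] if scores[a] >= scores[b] else pop[b]
                a, b = rng.choice(pop_size, 2, replace=False)
                parent2 = pop[a] if scores[a] >= scores[b] else pop[b]
                cross = rng.random(n_features) < 0.5            # uniform crossover
                child = np.where(cross, parent1, parent2)
                child ^= (rng.random(n_features) < p_mut)       # bit-flip mutation
                new_pop.append(child)
            pop = np.array(new_pop)
        scores = np.array([fitness(ind) for ind in pop])
        return pop[scores.argmax()].astype(bool)                # best feature mask found
    ```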

  17. An accelerated framework for the classification of biological targets from solid-state micropore data.

    PubMed

    Hanif, Madiha; Hafeez, Abdul; Suleman, Yusuf; Mustafa Rafique, M; Butt, Ali R; Iqbal, Samir M

    2016-10-01

    Micro- and nanoscale systems have provided means to detect biological targets, such as DNA, proteins, and human cells, at ultrahigh sensitivity. However, these devices suffer from noise in the raw data, which remains significant as newer, more sensitive devices produce an increasing amount of data that needs to be analyzed. An important dimension that is often discounted in these systems is the ability to quickly process the measured data for instant feedback. Realizing and developing algorithms for the accurate detection and classification of biological targets in real time is vital. Toward this end, we describe a supervised machine-learning approach that records single-cell events (pulses), computes useful pulse features, and classifies future patterns into their respective types, such as cancerous/non-cancerous cells, based on the training data. The approach detects cells with an accuracy of 70% from the raw data, followed by accurate classification when larger training sets are employed. The parallel implementation of the algorithm on a graphics processing unit (GPU) demonstrates a speedup of three- to four-fold compared to a serial implementation on an Intel Core i7 processor. This efficient GPU system is an effort to streamline the analysis of pulse data in an academic setting. This paper presents, for the first time, a non-commercial technique using a GPU system for real-time analysis, paired with biological cluster targeting analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system.

    PubMed

    Al-Masni, Mohammed A; Al-Antari, Mugahed A; Park, Jeong-Min; Gi, Geon; Kim, Tae-Yeon; Rivera, Patricio; Valarezo, Edwin; Choi, Mun-Taek; Han, Seung-Moo; Kim, Tae-Seong

    2018-04-01

    Automatic detection and classification of masses in mammograms remain a major challenge and play a crucial role in assisting radiologists toward accurate diagnosis. In this paper, we propose a novel Computer-Aided Diagnosis (CAD) system based on a regional deep learning technique: an ROI-based Convolutional Neural Network (CNN) called You Only Look Once (YOLO). Although most previous studies only deal with classification of masses, our proposed YOLO-based CAD system handles detection and classification simultaneously in one framework. The proposed CAD system contains four main stages: preprocessing of mammograms, feature extraction utilizing deep convolutional networks, mass detection with confidence, and finally mass classification using Fully Connected Neural Networks (FC-NNs). In this study, we utilized 600 original mammograms from the Digital Database for Screening Mammography (DDSM) and 2,400 augmented mammograms, together with information on the masses and their types, to train and test our CAD system. The trained YOLO-based CAD system detects the masses and then classifies their types into benign or malignant. Our results with five-fold cross-validation tests show that the proposed CAD system detects the mass location with an overall accuracy of 99.7%. The system also distinguishes between benign and malignant lesions with an overall accuracy of 97%. Our proposed system even works on some challenging breast cancer cases where the masses exist over the pectoral muscles or dense regions. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network security related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data without placing a significant added load on the monitoring systems and networks. This requires good data mining strategies that take less time and give accurate results. In this study, a novel data mining assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers and cause bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  20. An efficient abnormal cervical cell detection system based on multi-instance extreme learning machine

    NASA Astrophysics Data System (ADS)

    Zhao, Lili; Yin, Jianping; Yuan, Lihuan; Liu, Qiang; Li, Kuan; Qiu, Minghui

    2017-07-01

    Automatic detection of abnormal cells from cervical smear images is extremely demanded in annual diagnosis of women's cervical cancer. For this medical cell recognition problem, there are three different feature sections, namely cytology morphology, nuclear chromatin pathology and region intensity. The challenges of this problem come from feature combination s and classification accurately and efficiently. Thus, we propose an efficient abnormal cervical cell detection system based on multi-instance extreme learning machine (MI-ELM) to deal with above two questions in one unified framework. MI-ELM is one of the most promising supervised learning classifiers which can deal with several feature sections and realistic classification problems analytically. Experiment results over Herlev dataset demonstrate that the proposed method outperforms three traditional methods for two-class classification in terms of well accuracy and less time.

  1. Classification of Normal and Pathological Gait in Young Children Based on Foot Pressure Data.

    PubMed

    Guo, Guodong; Guffey, Keegan; Chen, Wenbin; Pergami, Paola

    2017-01-01

    Human gait recognition, an active research topic in computer vision, is generally based on data obtained from images/videos. We applied computer vision technology to classify pathology-related changes in gait in young children using a foot-pressure database collected using the GAITRite walkway system. As foot positioning changes with children's development, we also investigated the possibility of age estimation based on this data. Our results demonstrate that the data collected by the GAITRite system can be used for normal/pathological gait classification. Combining age information and normal/pathological gait classification increases the accuracy of the classifier. This novel approach could support the development of an accurate, real-time, and economic measure of gait abnormalities in children, able to provide important feedback to clinicians regarding the effect of rehabilitation interventions, and to support targeted treatment modifications.

  2. Flying insect detection and classification with inexpensive sensors.

    PubMed

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-10-15

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows us to efficiently learn classification models that are very robust to over-fitting; and that a general classification framework allows easy incorporation of an arbitrary number of features. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
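
    The Bayesian-classification-with-extra-features idea can be illustrated with a toy sketch: a Gaussian naive Bayes model trained on a wingbeat-frequency-like intrinsic feature plus an extrinsic feature such as hour of capture. The feature values and species separation below are fabricated for illustration and are not the authors' sensor data.

```python
# Toy sketch: Bayesian classification of insects from an intrinsic feature
# (wingbeat frequency) plus an extrinsic feature (hour of capture).
# All numbers are synthetic and purely illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500

# Species A: ~400 Hz wingbeat, mostly active around dusk (hour ~ 19)
freq_a = rng.normal(400, 25, n)
hour_a = rng.normal(19, 1.5, n) % 24
# Species B: ~450 Hz wingbeat, mostly active around dawn (hour ~ 6)
freq_b = rng.normal(450, 25, n)
hour_b = rng.normal(6, 1.5, n) % 24

X = np.column_stack([np.r_[freq_a, freq_b], np.r_[hour_a, hour_b]])
y = np.r_[np.zeros(n), np.ones(n)]

# Compare frequency alone against frequency plus time-of-day.
for cols, name in [([0], "frequency only"), ([0, 1], "frequency + hour")]:
    scores = cross_val_score(GaussianNB(), X[:, cols], y, cv=5)
    print(f"{name:18s} mean CV accuracy = {scores.mean():.3f}")
```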

  3. Classification of cardiac patient states using artificial neural networks

    PubMed Central

    Kannathal, N; Acharya, U Rajendra; Lim, Choo Min; Sadasivan, PK; Krishnan, SM

    2003-01-01

    Electrocardiogram (ECG) is a nonstationary signal; therefore, the disease indicators may occur at random on the time scale. This may require that the patient be kept under observation for long intervals in the intensive care unit of a hospital for accurate diagnosis. The present study examined the classification of the states of patients with certain diseases in the intensive care unit using their ECG and an Artificial Neural Network (ANN) classification system. The states were classified into normal, abnormal, and life-threatening. Seven significant features extracted from the ECG were fed as input parameters to the ANN for classification. Three neural network techniques, namely back propagation, self-organizing maps, and radial basis functions, were used for classification of the patient states. The ANN classifier in this case was observed to be correct in approximately 99% of the test cases. This result was further improved by taking 13 features of the ECG as input for the ANN classifier. PMID:19649222
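
    A minimal sketch of the back-propagation variant described above: a small multilayer perceptron with 7 input features and 3 output states (normal, abnormal, life-threatening). The feature vectors here are synthetic stand-ins, not real ECG measurements, and the network size is an assumption.

```python
# Sketch: a back-propagation neural network with 7 input features and
# 3 output states, in the spirit of the record above. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, n_features, n_classes = 300, 7, 3

# Synthetic 7-feature vectors whose class means are shifted apart.
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, n_features))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```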

  4. Flying Insect Detection and Classification with Inexpensive Sensors

    PubMed Central

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-01-01

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect’s flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows to efficiently learn classification models that are very robust to over-fitting, and a general classification framework allows to easily incorporate arbitrary number of features. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered. PMID:25350921

  5. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    PubMed

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, a parallel programming and computing platform (Nvidia CUDA), which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) dataset, which is rather complex and large. Moreover, achieving more anesthetic levels with a rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a shorter time.
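
    The record above parallelizes network training directly with Nvidia CUDA; as a loose, framework-level stand-in (not the authors' CUDA code), the sketch below shows the usual pattern of moving a small classifier and its batches to the GPU in PyTorch when one is available. The network, sizes, and data are arbitrary assumptions.

```python
# Sketch (stand-in for the CUDA implementation described above): train a small
# classifier on the GPU when available by moving the model and each batch to
# the device. Data are random; the architecture is arbitrary.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Random stand-in for EEG feature vectors and anesthetic-depth labels.
X = torch.randn(4096, 32)
y = torch.randint(0, 4, (4096,))

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for i in range(0, len(X), 256):
        xb = X[i:i + 256].to(device)          # batch moved to GPU (if present)
        yb = y[i:i + 256].to(device)
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.3f}")
```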

  6. Per-field crop classification in irrigated agricultural regions in middle Asia using random forest and support vector machine ensemble

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Schorcht, Gunther; Michel, Ulrich; Dech, Stefan; Conrad, Christopher

    2012-10-01

    Accurate crop identification and crop area estimation are important for studies on irrigated agricultural systems, yield and water demand modeling, and agrarian policy development. In this study a novel combination of Random Forest (RF) and Support Vector Machine (SVM) classifiers is presented that (i) enhances crop classification accuracy and (ii) provides spatial information on map uncertainty. The methodology was implemented over four distinct irrigated sites in Middle Asia using RapidEye time series data. The RF feature importance statistics were used as a feature-selection strategy for the SVM to assess possible negative effects on classification accuracy caused by an oversized feature space. The results of the individual RF and SVM classifications were combined with rules based on posterior classification probability and estimates of classification probability entropy. SVM classification performance was increased by feature selection through RF. Further experimental results indicate that the hybrid classifier improves overall classification accuracy in comparison to the single classifiers, as well as user's and producer's accuracies.
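
    The RF-importance-as-feature-selection step can be sketched roughly as follows (this is not the authors' exact implementation, and synthetic data stands in for the RapidEye time-series features): rank features with a random forest, keep the top-ranked ones, and train an SVM on the reduced feature space.

```python
# Sketch: use random forest feature importances to select features,
# then train an SVM on the reduced feature space. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=120, n_informative=15,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Rank features with a random forest.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:20]   # keep the 20 best

# 2) Train the SVM on the full and on the reduced feature space.
svm = lambda: make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
full = svm().fit(X_tr, y_tr).score(X_te, y_te)
reduced = svm().fit(X_tr[:, top], y_tr).score(X_te[:, top], y_te)
print(f"SVM, all 120 features : {full:.3f}")
print(f"SVM, top 20 by RF     : {reduced:.3f}")
```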

  7. Free classification of American English dialects by native and non-native listeners

    PubMed Central

    Clopper, Cynthia G.; Bradlow, Ann R.

    2009-01-01

    Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance. PMID:20161400

  8. A classification system for predicting pallet part quality from hardwood cants

    Treesearch

    E. Paul Craft; Kenneth R., Jr. Whitenack

    1982-01-01

    Producers who manufacture pallet parts from hardwood cants generally must purchase cants on the basis of existing structural timber grades that do not adequately reflect the quality of pallet parts produced from resawed cants. A system for classifying cants for pallet part production has been developed that more accurately reflects the parts grade mix that can be...

  9. An integrated healthcare system for personalized chronic disease care in home-hospital environments.

    PubMed

    Jeong, Sangjin; Youn, Chan-Hyun; Shim, Eun Bo; Kim, Moonjung; Cho, Young Min; Peng, Limei

    2012-07-01

    Facing the increasing demands and challenges in the area of chronic disease care, various studies have been conducted on healthcare systems that can extract and process patient data whenever and wherever needed. Chronic diseases are long-term diseases and require real-time monitoring, multidimensional quantitative analysis, and classification of patients' diagnostic information. A healthcare system for chronic diseases is characterized as an at-hospital or at-home service according to the targeted environment. Both services basically aim to provide patients with accurate diagnoses of disease by monitoring a variety of physical states with a number of monitoring methods, but there are differences between home and hospital environments, and these different characteristics should be considered in order to provide more accurate diagnoses for patients, especially patients with chronic diseases. In this paper, we propose a patient status classification method for effectively identifying and classifying chronic diseases and show the validity of the proposed method. Furthermore, we present a new healthcare system architecture that integrates the at-home and at-hospital environments and discuss the applicability of the architecture using practical target services.

  10. Monitoring the Depth of Anesthesia Using a New Adaptive Neurofuzzy System.

    PubMed

    Shalbaf, Ahmad; Saffar, Mohsen; Sleigh, Jamie W; Shalbaf, Reza

    2018-05-01

    Accurate and noninvasive monitoring of the depth of anesthesia (DoA) is highly desirable. Since anesthetic drugs act mainly on the central nervous system, the analysis of brain activity using the electroencephalogram (EEG) is very useful. This paper proposes a novel automated method for assessing the DoA using EEG. First, 11 features including spectral, fractal, and entropy measures are extracted from the EEG signal; then, by applying an exhaustive search over all subsets of features, a combination of the best features (Beta-index, sample entropy, Shannon permutation entropy, and detrended fluctuation analysis) is selected. Accordingly, we feed these extracted features to a new neurofuzzy classification algorithm, the adaptive neurofuzzy inference system with linguistic hedges (ANFIS-LH). This structure can successfully model systems with nonlinear relationships between input and output, and also classify overlapping classes accurately. ANFIS-LH, which is based on modified classical fuzzy rules, reduces the effects of insignificant features in the input space, which cause overlapping, and modifies the output layer structure. The presented method classifies EEG data into awake, light, general, and deep states during anesthesia with sevoflurane in 17 patients. Its accuracy is 92% compared with a commercial monitoring system (response entropy index). Moreover, this method reaches a classification accuracy of 93% in categorizing EEG signals into awake and general anesthesia states on another database of propofol and volatile anesthesia in 50 patients. To sum up, this method is potentially applicable to a new real-time monitoring system to help the anesthesiologist with continuous assessment of DoA quickly and accurately.

  11. Classifying environmentally significant urban land uses with satellite imagery.

    PubMed

    Park, Mi-Hyun; Stenstrom, Michael K

    2008-01-01

    We investigated Bayesian networks to classify urban land use from satellite imagery. Landsat Enhanced Thematic Mapper Plus (ETM(+)) images were used for the classification in two study areas: (1) Marina del Rey and its vicinity in the Santa Monica Bay Watershed, CA and (2) drainage basins adjacent to the Sweetwater Reservoir in San Diego, CA. Bayesian networks provided 80-95% classification accuracy for urban land use using four different classification systems. The classifications were robust with small training data sets with normal and reduced radiometric resolution. The networks needed only 5% of the total data (i.e., 1500 pixels) for sample size and only 5- or 6-bit information for accurate classification. The network explicitly showed the relationship among variables from its structure and was also capable of utilizing information from non-spectral data. The classification can be used to provide timely and inexpensive land use information over large areas for environmental purposes such as estimating stormwater pollutant loads.

  12. Diagnosing tuberculosis with a novel support vector machine-based artificial immune recognition system.

    PubMed

    Saybani, Mahmoud Reza; Shamshirband, Shahaboddin; Golzari Hormozi, Shahram; Wah, Teh Ying; Aghabozorgi, Saeed; Pourhoseingholi, Mohamad Amin; Olariu, Teodora

    2015-04-01

    Tuberculosis (TB) is a major global health problem and has been ranked as the second leading cause of death from an infectious disease worldwide. Diagnosis based on cultured specimens is the reference standard; however, results take weeks to process. Scientists are looking for early detection strategies, which remain the cornerstone of tuberculosis control. Consequently there is a need to develop an expert system that helps medical professionals to accurately and quickly diagnose the disease. The Artificial Immune Recognition System (AIRS) has been used successfully for diagnosing various diseases. However, little effort has been undertaken to improve its classification accuracy. In order to increase the classification accuracy of AIRS, this study introduces a new hybrid system that incorporates a support vector machine into AIRS for diagnosing tuberculosis. Patient epicrisis reports obtained from the Pasteur laboratory of Iran were used as the benchmark data set, with a sample size of 175 (114 positive samples for TB and 60 samples in the negative group). The strategy of this study was to ensure representativeness, thus it was important to have an adequate number of instances for both TB and non-TB cases. The classification performance was measured through 10-fold cross-validation, Root Mean Squared Error (RMSE), sensitivity and specificity, Youden's Index, and Area Under the Curve (AUC). Statistical analysis was done using the Waikato Environment for Knowledge Analysis (WEKA), a machine learning program for Windows. With an accuracy of 100%, sensitivity of 100%, specificity of 100%, Youden's Index of 1, Area Under the Curve of 1, and RMSE of 0, the proposed method was able to successfully classify tuberculosis patients. There has been much research aimed at diagnosing tuberculosis faster and more accurately. Our results describe a model for diagnosing tuberculosis with 100% sensitivity and 100% specificity. This model can be used as an additional tool for experts in medicine to diagnose TB more accurately and quickly.

  13. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
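
    A compact sketch of the progressive scheme described above: a fast SVM produces a baseline classification and a confidence index (here the distance from its decision boundary), and points are then reclassified by a slower, more accurate SVM in order of increasing confidence. The data and the choice of a linear model as the "fast" SVM and an RBF model as the "accurate" SVM are illustrative assumptions.

```python
# Sketch of progressive classification: a fast approximate SVM classifies
# everything, then a slower, more accurate SVM reclassifies points in order
# of ascending confidence. The refinement can be stopped at any budget.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC, SVC

X_train, y_train = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_new,   y_new   = make_moons(n_samples=1000, noise=0.25, random_state=1)

fast = LinearSVC(C=1.0).fit(X_train, y_train)                         # coarse model
slow = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)   # accurate model

# Pass 1: baseline labels plus a confidence index (|decision margin|).
labels = fast.predict(X_new)
confidence = np.abs(fast.decision_function(X_new))

# Pass 2: reclassify in order of increasing confidence; stop whenever needed.
order = np.argsort(confidence)
budget = 400                           # e.g. only 400 slow evaluations allowed
labels[order[:budget]] = slow.predict(X_new[order[:budget]])

print("fast-only accuracy   :", np.mean(fast.predict(X_new) == y_new))
print("progressive accuracy :", np.mean(labels == y_new))
print("full slow accuracy   :", slow.score(X_new, y_new))
```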

  14. Natural resources inventory and land evaluation in Switzerland

    NASA Technical Reports Server (NTRS)

    Haefner, H. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Using MSS channels 5 and 7 and a supervised classification system with a PPD classification algorithm, it was possible to map the exact areal extent of the snow cover and of the transition zone, with melting snow patches and snow-free parts of various sizes, over a large area under different aspects such as relief, exposure, and shadows. A correlation of the data from ground control, aerial underflights, and earth resources satellites provided a very accurate interpretation of the melting process of snow in high mountains.

  15. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented which is computationally less intensive. Classifications that are nearly as accurate are produced.

  16. Characterizing fuels in the 21st century.

    Treesearch

    David Sandberg; Roger D. Ottmar; Geoffrey H. Cushon

    2001-01-01

    The ongoing development of sophisticated fire behavior and effects models has demonstrated the need for a comprehensive system of fuel classification that more accurately captures the structural complexity and geographic diversity of fuelbeds. The Fire and Environmental Research Applications Team (FERA) of the USDA Forest Service, Pacific Northwest Research Station, is...

  17. 76 FR 51239 - North American Industry Classification System; Revision for 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... definitional and economic changes so that they can create continuous time series and accurately analyze data changes over time. The inclusion of revenues from FGP activities in manufacturing will effectively change...) to exclude production that occurs in a foreign country for historical consistency in time series...

  18. A Novel Feature Level Fusion for Heart Rate Variability Classification Using Correntropy and Cauchy-Schwarz Divergence.

    PubMed

    Goshvarpour, Ateke; Goshvarpour, Atefeh

    2018-04-30

    Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted: correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbor (kNN), the performance of each index in classifying the HRV signals of meditators and non-meditators was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on statistical p-values. The performance of HRV classification using the combined features was compared with that of the non-combined features. In total, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted sum rules in improving the classifier accuracies.
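
    The weighted-sum fusion rule with p-value-derived weights can be sketched roughly as follows: compute a p-value for each feature (here with a two-sample t-test, which is an assumption; the record does not state the test), turn the p-values into weights, and feed the weighted sum to a kNN classifier. The data and the exact weighting formula are illustrative.

```python
# Sketch: feature-level fusion with weights derived from statistical p-values.
# Two synthetic similarity features are combined by a weighted sum and the
# fused feature is classified with kNN. The weighting scheme (1 - p, then
# normalized) is an illustrative assumption, not the paper's exact formula.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 60
# Feature 1 separates the two groups well, feature 2 only weakly.
f1 = np.r_[rng.normal(0.0, 1.0, n), rng.normal(1.5, 1.0, n)]
f2 = np.r_[rng.normal(0.0, 1.0, n), rng.normal(0.2, 1.0, n)]
y = np.r_[np.zeros(n), np.ones(n)]

# p-value of each feature for separating the two groups.
p = np.array([ttest_ind(f[y == 0], f[y == 1]).pvalue for f in (f1, f2)])
w = (1.0 - p) / (1.0 - p).sum()          # smaller p-value -> larger weight
print("p-values:", p, " weights:", w)

# Weighted-sum fusion of the two features, then kNN classification.
fused = (w[0] * f1 + w[1] * f2).reshape(-1, 1)
knn = KNeighborsClassifier(n_neighbors=5)
print("fused-feature CV accuracy:", cross_val_score(knn, fused, y, cv=5).mean())
```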

  19. Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns

    PubMed Central

    Dijksterhuis, Chris; de Waard, Dick; Brookhuis, Karel A.; Mulder, Ben L. J. M.; de Jong, Ritske

    2013-01-01

    A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply with individually set lane-keeping performance targets. Differences in the individual driver's workload levels were classified by applying the Common Spatial Pattern (CSP) and Fisher's linear discriminant analysis to frequency-filtered electroencephalogram (EEG) data during an offline classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. It was found that classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies up to 75–80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classifications. PMID:23970851
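
    A condensed sketch of the CSP-plus-LDA pipeline described above, run on synthetic two-class "EEG" epochs: estimate per-class covariance matrices, solve the generalized eigenvalue problem, keep the outermost spatial filters, use the log-variance of the filtered signals as features, and classify with Fisher's LDA. The channel count, epoch length, and data are stand-in assumptions.

```python
# Sketch: Common Spatial Patterns + Fisher LDA on synthetic two-class epochs.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_ch, n_samp, n_ep = 8, 256, 80

def make_epochs(scale_ch0):
    """Epochs where channel 0 variance differs between the two classes."""
    eps = rng.normal(0, 1, (n_ep, n_ch, n_samp))
    eps[:, 0, :] *= scale_ch0
    return eps

Xa, Xb = make_epochs(1.0), make_epochs(2.0)          # two workload levels

def cov(ep):                                         # trace-normalized covariance
    c = ep @ ep.T
    return c / np.trace(c)

Ca = np.mean([cov(e) for e in Xa], axis=0)
Cb = np.mean([cov(e) for e in Xb], axis=0)

# Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
vals, vecs = eigh(Ca, Ca + Cb)
order = np.argsort(vals)
W = vecs[:, np.r_[order[:2], order[-2:]]]            # 2 filters per extreme

def features(epochs):
    z = np.einsum("ck,eks->ecs", W.T, epochs)        # spatially filtered signals
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

X = np.vstack([features(Xa), features(Xb)])
y = np.r_[np.zeros(n_ep), np.ones(n_ep)]
lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])    # even epochs = train
print("held-out accuracy:", lda.score(X[1::2], y[1::2]))  # odd epochs = test
```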

  20. IDM-PhyChm-Ens: intelligent decision-making ensemble methodology for classification of human breast cancer using physicochemical properties of amino acids.

    PubMed

    Ali, Safdar; Majid, Abdul; Khan, Asifullah

    2014-04-01

    Development of an accurate and reliable intelligent decision-making method for the construction of cancer diagnosis system is one of the fast growing research areas of health sciences. Such decision-making system can provide adequate information for cancer diagnosis and drug discovery. Descriptors derived from physicochemical properties of protein sequences are very useful for classifying cancerous proteins. Recently, several interesting research studies have been reported on breast cancer classification. To this end, we propose the exploitation of the physicochemical properties of amino acids in protein primary sequences such as hydrophobicity (Hd) and hydrophilicity (Hb) for breast cancer classification. Hd and Hb properties of amino acids, in recent literature, are reported to be quite effective in characterizing the constituent amino acids and are used to study protein foldings, interactions, structures, and sequence-order effects. Especially, using these physicochemical properties, we observed that proline, serine, tyrosine, cysteine, arginine, and asparagine amino acids offer high discrimination between cancerous and healthy proteins. In addition, unlike traditional ensemble classification approaches, the proposed 'IDM-PhyChm-Ens' method was developed by combining the decision spaces of a specific classifier trained on different feature spaces. The different feature spaces used were amino acid composition, split amino acid composition, and pseudo amino acid composition. Consequently, we have exploited different feature spaces using Hd and Hb properties of amino acids to develop an accurate method for classification of cancerous protein sequences. We developed ensemble classifiers using diverse learning algorithms such as random forest (RF), support vector machines (SVM), and K-nearest neighbor (KNN) trained on different feature spaces. We observed that ensemble-RF, in case of cancer classification, performed better than ensemble-SVM and ensemble-KNN. Our analysis demonstrates that ensemble-RF, ensemble-SVM and ensemble-KNN are more effective than their individual counterparts. The proposed 'IDM-PhyChm-Ens' method has shown improved performance compared to existing techniques.
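
    In the spirit of the record above, the sketch below derives amino-acid-composition features from protein sequences and trains a voting ensemble of RF, SVM, and kNN. The sequences and labels are random stand-ins, and the paper's Hd/Hb-weighted feature spaces are not reproduced; plain composition is used as an assumed surrogate feature space.

```python
# Sketch: amino-acid-composition features + a voting ensemble of RF, SVM and
# kNN. Sequences and labels are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(11)

def random_sequence(bias_residue=None, length=300):
    """Random protein sequence; optionally enriched in one residue."""
    probs = np.full(len(AA), 1.0 / len(AA))
    if bias_residue is not None:
        probs[AA.index(bias_residue)] += 0.05
        probs /= probs.sum()
    return "".join(rng.choice(list(AA), size=length, p=probs))

def composition(seq):
    """20-dimensional amino acid composition feature vector."""
    return np.array([seq.count(a) / len(seq) for a in AA])

# Two synthetic classes: one enriched in proline, one uniform.
seqs = [random_sequence("P") for _ in range(150)] + \
       [random_sequence() for _ in range(150)]
X = np.array([composition(s) for s in seqs])
y = np.r_[np.ones(150), np.zeros(150)]

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft")
print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```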

  1. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota

    USGS Publications Warehouse

    Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.

    2013-01-01

    Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of mapping wetlands compare using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.

  2. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  3. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  4. New View on the Initial Development Site and Radiographic Classification System of Osteoarthritis of the Knee Based on Radiographic Analysis

    PubMed Central

    Moon, Ki-Ho

    2012-01-01

    Introduction: Radiographic pathology of severe osteoarthritis of the knee (OAK), such as severe osteophytes at the tibial spine (TS), compartment narrowing, marginal osteophytes, and subchondral sclerosis, is well known. The Kellgren-Lawrence grading system, which is widely used to diagnose OAK, describes narrowing and marginal osteophytes in four grades but uses the osteophyte at the TS only as evidence of OAK, without detailed grading. However, kinematically the knee employs the medial TS as an axis while the medial and lateral compartments carry the load, suggesting that early OAK would occur sooner at the TS than at the compartments. The Kellgren-Lawrence system may therefore be inadequate to diagnose early-stage OAK manifested as a subtle osteophyte at the TS without narrowing or marginal osteophytes. This undiagnosed OAK will deteriorate, becoming a contributing factor in an increasing incidence of OAK. Methods: This study developed a radiographic OAK marker based on both the osteophyte at the TS and compartment narrowing/marginal osteophytes, graded as normal, mild, moderate, and severe. With this marker, both knee radiographs of 1,728 patients with knee pain were analyzed. Results: Among 611 early-stage mild OAK cases, 562 (92%) started at the TS and 49 (8%) at the compartments. This suggests the initial development site of OAK, helping to develop a new site-specific radiographic classification system of OAK that accurately diagnoses OAK of all severities at the early, intermediate, or late stage. It also showed that the Kellgren-Lawrence system missed 92.0% of early-stage mild OAK diagnoses. Conclusions: A subtle osteophyte at the TS is the earliest radiographic sign of OAK. A new radiographic classification system of OAK is suggested for accurate diagnosis of OAK of all severities and at all stages. PMID:23675278

  5. New view on the initial development site and radiographic classification system of osteoarthritis of the knee based on radiographic analysis.

    PubMed

    Moon, Ki-Ho

    2012-12-01

    Radiographic pathology of severe osteoarthritis of the knee (OAK), such as severe osteophytes at the tibial spine (TS), compartment narrowing, marginal osteophytes, and subchondral sclerosis, is well known. The Kellgren-Lawrence grading system, which is widely used to diagnose OAK, describes narrowing and marginal osteophytes in four grades but uses the osteophyte at the TS only as evidence of OAK, without detailed grading. However, kinematically the knee employs the medial TS as an axis while the medial and lateral compartments carry the load, suggesting that early OAK would occur sooner at the TS than at the compartments. The Kellgren-Lawrence system may therefore be inadequate to diagnose early-stage OAK manifested as a subtle osteophyte at the TS without narrowing or marginal osteophytes. This undiagnosed OAK will deteriorate, becoming a contributing factor in an increasing incidence of OAK. This study developed a radiographic OAK marker based on both the osteophyte at the TS and compartment narrowing/marginal osteophytes, graded as normal, mild, moderate, and severe. With this marker, both knee radiographs of 1,728 patients with knee pain were analyzed. Among 611 early-stage mild OAK cases, 562 (92%) started at the TS and 49 (8%) at the compartments. This suggests the initial development site of OAK, helping to develop a new site-specific radiographic classification system of OAK that accurately diagnoses OAK of all severities at the early, intermediate, or late stage. It also showed that the Kellgren-Lawrence system missed 92.0% of early-stage mild OAK diagnoses. A subtle osteophyte at the TS is the earliest radiographic sign of OAK. A new radiographic classification system of OAK is suggested for accurate diagnosis of OAK of all severities and at all stages.

  6. A Temporal Pattern Mining Approach for Classifying Electronic Health Record Data

    PubMed Central

    Batal, Iyad; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2013-01-01

    We study the problem of learning classification models from complex multivariate temporal data encountered in electronic health record systems. The challenge is to define a good set of features that are able to represent well the temporal aspect of the data. Our method relies on temporal abstractions and temporal pattern mining to extract the classification features. Temporal pattern mining usually returns a large number of temporal patterns, most of which may be irrelevant to the classification task. To address this problem, we present the Minimal Predictive Temporal Patterns framework to generate a small set of predictive and non-spurious patterns. We apply our approach to the real-world clinical task of predicting patients who are at risk of developing heparin induced thrombocytopenia. The results demonstrate the benefit of our approach in efficiently learning accurate classifiers, which is a key step for developing intelligent clinical monitoring systems. PMID:25309815

  7. The Australian experience in dental classification.

    PubMed

    Mahoney, Greg

    2008-01-01

    The Australian Defence Health Service uses a disease-risk management strategy to achieve two goals: first, to identify Australian Defence Force (ADF) members who are at high risk of developing an adverse health event, and second, to deliver intervention strategies efficiently so that maximum benefits for health within the ADF are achieved with the least cost. The present dental classification system utilized by the ADF, while an excellent dental triage tool, has been found not to be predictive of an ADF member having an adverse dental event in the following 12-month period. Clearly, there is a need for further research to establish a predictive risk-based dental classification system. This risk assessment must be sensitive enough to accurately estimate the probability that an ADF member will experience dental pain, dysfunction, or other adverse dental events within a forthcoming period, typically 12 months. Furthermore, there needs to be better epidemiological data collected in the field to assist in the research.

  8. Analysis and application of classification methods of complex carbonate reservoirs

    NASA Astrophysics Data System (ADS)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

    There are abundant carbonate reservoirs from the Cenozoic to the Mesozoic era in the Middle East. Due to variations in the sedimentary environment and diagenetic processes, several porosity types coexist in carbonate reservoirs. As a result of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated and it is difficult to calculate reservoir parameters accurately. In order to accurately evaluate carbonate reservoirs, classification methods based on capillary pressure curves and on flow units are analyzed, building on the pore structure evaluation of the reservoirs. Although carbonate reservoirs can be classified based on capillary pressure curves, the relationship between porosity and permeability after such a classification is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability after classification can be established; therefore, carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of the calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of the calculated permeability of the limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are very important for the accurate evaluation of complex carbonate reservoirs in the Middle East.
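
    One widely used way to define flow units, offered here only as an illustration since the record does not state which formulation the authors use, is the flow zone indicator: RQI = 0.0314*sqrt(k/phi), phi_z = phi/(1-phi), FZI = RQI/phi_z, with k in mD and phi as a fraction. Samples with similar FZI are grouped into one flow unit and a porosity-permeability regression is fitted per unit, as sketched below on synthetic core data.

```python
# Sketch of a flow-zone-indicator (FZI) style flow-unit classification,
# a common approach to grouping samples before fitting per-unit
# porosity-permeability relations. The paper's exact method is not stated;
# this is an illustrative assumption applied to synthetic core data.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic core plugs drawn from three flow units with different mean FZI.
phi = rng.uniform(0.05, 0.30, 300)                  # porosity, fraction
true_fzi = rng.choice([0.5, 2.0, 6.0], 300)         # micrometres
phi_z = phi / (1.0 - phi)
rqi = true_fzi * phi_z * np.exp(rng.normal(0, 0.1, 300))   # add scatter
k = (rqi / 0.0314) ** 2 * phi                       # permeability, mD

# Classify samples into flow units by binning log10(FZI).
fzi = 0.0314 * np.sqrt(k / phi) / phi_z
units = np.digitize(np.log10(fzi), bins=[0.0, 0.6])  # 3 FZI classes

# Fit log10(k) = a*log10(phi) + b separately inside each flow unit.
for u in np.unique(units):
    m = units == u
    a, b = np.polyfit(np.log10(phi[m]), np.log10(k[m]), 1)
    print(f"flow unit {u}: log10(k) = {a:.2f}*log10(phi) + {b:.2f}  (n={m.sum()})")
```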

  9. Real-Time Blob-Wise Sugar Beets VS Weeds Classification for Monitoring Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Milioto, A.; Lottes, P.; Stachniss, C.

    2017-08-01

    UAVs are becoming an important tool for field monitoring and precision farming. A prerequisite for observing and analyzing fields is the ability to identify crops and weeds from image data. In this paper, we address the problem of detecting the sugar beet plants and weeds in the field based solely on image data. We propose a system that combines vegetation detection and deep learning to obtain a high-quality classification of the vegetation in the field into value crops and weeds. We implemented and thoroughly evaluated our system on image data collected from different sugar beet fields and illustrate that our approach allows for accurately identifying the weeds on the field.

  10. Classification of maxillectomy defects: a systematic review and criteria necessary for a universal description.

    PubMed

    Bidra, Avinash S; Jacob, Rhonda F; Taylor, Thomas D

    2012-04-01

    Maxillectomy defects are complex and involve a number of anatomic structures. Several maxillectomy defect classifications have been proposed with no universal acceptance among surgeons and prosthodontists. Established criteria for describing the maxillectomy defect are lacking. This systematic review aimed to evaluate classification systems in the available literature, to provide a critical appraisal, and to identify the criteria necessary for a universal description of maxillectomy and midfacial defects. An electronic search of the English language literature between the periods of 1974 and June 2011 was performed by using PubMed, Scopus, and Cochrane databases with predetermined inclusion criteria. Key terms included in the search were maxillectomy classification, maxillary resection classification, maxillary removal classification, maxillary reconstruction classification, midfacial defect classification, and midfacial reconstruction classification. This was supplemented by a manual search of selected journals. After application of predetermined exclusion criteria, the final list of articles was reviewed in-depth to provide a critical appraisal and identify criteria for a universal description of a maxillectomy defect. The electronic database search yielded 261 titles. Systematic application of inclusion and exclusion criteria resulted in identification of 14 maxillectomy and midfacial defect classification systems. From these articles, 6 different criteria were identified as necessary for a universal description of a maxillectomy defect. Multiple deficiencies were noted in each classification system. Though most articles described the superior-inferior extent of the defect, only a small number of articles described the anterior-posterior and medial-lateral extent of the defect. Few articles listed dental status and soft palate involvement when describing maxillectomy defects. No classification system has accurately described the maxillectomy defect, based on criteria that satisfy both surgical and prosthodontic needs. The 6 criteria identified in this systematic review for a universal description of a maxillectomy defect are: 1) dental status; 2) oroantral/nasal communication status; 3) soft palate and other contiguous structure involvement; 4) superior-inferior extent; 5) anterior-posterior extent; and 6) medial-lateral extent of the defect. A criteria-based description appears more objective and amenable for universal use than a classification-based description. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  11. An online hybrid BCI system based on SSVEP and EMG

    NASA Astrophysics Data System (ADS)

    Lin, Ke; Cinetto, Andrea; Wang, Yijun; Chen, Xiaogang; Gao, Shangkai; Gao, Xiaorong

    2016-04-01

    Objective. A hybrid brain-computer interface (BCI) is a device combined with at least one other communication system that takes advantage of both parts to build a link between humans and machines. To increase the number of targets and the information transfer rate (ITR), electromyogram (EMG) and steady-state visual evoked potential (SSVEP) signals were combined to implement a hybrid BCI. A multi-choice selection method based on EMG was developed to enhance system performance. Approach. A 60-target hybrid BCI speller was built in this study. A single trial was divided into two stages: a stimulation stage and an output selection stage. In the stimulation stage, SSVEP and EMG were used together. Every stimulus flickered at its given frequency to elicit SSVEP. All of the stimuli were divided equally into four sections with the same frequency set; the frequency of each stimulus within a section was different. SSVEPs were used to discriminate targets in the same section, while different sections were classified using EMG signals from the forearm. Subjects were asked to make different numbers of fists according to the target section. Canonical Correlation Analysis (CCA) and mean filtering were used to classify SSVEP and EMG, respectively. In the output selection stage, the top two optimal choices were given. The first choice, with the highest probability of an accurate classification, was the default output of the system. Subjects were required to make a fist to select the second choice only if the second choice was correct. Main results. The online results obtained from ten subjects showed that the mean accurate classification rate and ITR were 81.0% and 83.6 bits min-1, respectively, using only the first-choice selection. The ITR of the hybrid system was significantly higher than the ITR of either of the two single modalities (EMG: 30.7 bits min-1, SSVEP: 60.2 bits min-1). After the addition of the second-choice selection and the correction task, the accurate classification rate and ITR were enhanced to 85.8% and 90.9 bits min-1. Significance. These results suggest that the hybrid system proposed here is suitable for practical use.
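
    The CCA step for SSVEP frequency recognition is a standard technique and can be sketched compactly: for each candidate stimulation frequency, correlate the multi-channel EEG segment with sine/cosine references at that frequency and its harmonics, and pick the frequency with the highest canonical correlation. The EEG below is synthetic, and the channel count, candidate frequencies, and number of harmonics are assumptions rather than the study's settings.

```python
# Sketch: CCA-based SSVEP frequency detection. A synthetic multi-channel
# segment containing a 10 Hz component is matched against sin/cos reference
# sets at several candidate frequencies; the best canonical correlation wins.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, dur, n_ch = 250, 2.0, 8
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic EEG: a 10 Hz SSVEP mixed into noise on all channels.
target_hz = 10.0
eeg = np.sin(2 * np.pi * target_hz * t)[None, :] * \
      rng.uniform(0.5, 1.5, (n_ch, 1)) + rng.normal(0, 1.0, (n_ch, len(t)))

def reference(freq, harmonics=2):
    """Sin/cos reference signals at freq and its harmonics."""
    refs = []
    for h in range(1, harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.array(refs)

def cca_corr(X, Y):
    """Largest canonical correlation between two multi-channel signal sets."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(X.T, Y.T)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

candidates = [8.0, 10.0, 12.0, 15.0]
scores = {f: cca_corr(eeg, reference(f)) for f in candidates}
print(scores)
print("detected frequency:", max(scores, key=scores.get), "Hz")
```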

  12. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    PubMed

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design Preliminary evaluation of new tool. Objective To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database that retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that were matching with the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.

  13. Scanning electron microscope automatic defect classification of process induced defects

    NASA Astrophysics Data System (ADS)

    Wolfe, Scott; McGarvey, Steve

    2017-03-01

    With the integration of high speed Scanning Electron Microscope (SEM) based Automated Defect Redetection (ADR) in both high volume semiconductor manufacturing and Research and Development (R and D), the need for reliable SEM Automated Defect Classification (ADC) has grown tremendously in the past few years. In many high volume manufacturing facilities and R and D operations, defect inspection is performed on EBeam (EB), Bright Field (BF) or Dark Field (DF) defect inspection equipment. A comma separated value (CSV) file is created by both the patterned and non-patterned defect inspection tools. The defect inspection result file contains a list of the inspection anomalies detected during the inspection tool's examination of each structure, or the examination of the entire wafer surface for non-patterned applications. This file is imported into the Defect Review Scanning Electron Microscope (DRSEM). Following the defect inspection result file import, the DRSEM automatically moves the wafer to each defect coordinate and performs ADR. During ADR the DRSEM operates in a reference mode, capturing a SEM image at the exact position of the anomaly's coordinates and capturing a SEM image of a reference location in the center of the wafer. A defect reference image is created by subtracting the defect image from the reference image. The exact coordinates of the defect are calculated from the calculated defect position and the stage coordinates recorded when the high magnification SEM defect image is captured. The captured SEM image is processed through DRSEM ADC binning, exported to a Yield Analysis System (YAS), or a combination of both. Process Engineers, Yield Analysis Engineers or Failure Analysis Engineers then manually review the captured images to ensure that either the YAS defect binning or the DRSEM defect binning is accurately classifying the defects. This paper explores the feasibility of utilizing a Hitachi RS4000 Defect Review SEM to perform Automatic Defect Classification, with the objective that the total automated classification accuracy exceed human-based defect classification binning when the defects do not require knowledge of multiple process steps for accurate classification. The implementation of DRSEM ADC has the potential to improve the response time between defect detection and defect classification. Faster defect classification will allow for rapid response to the yield anomalies that will ultimately reduce the wafer and/or the die yield.

  14. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom-based classification. This is the first attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five-class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets, with 22,002, 22,469 and 22,607 undirected edges respectively, are used. The datasets show that the classification-based prevention given in the model can play a substantial role in containing network epidemics. Further simulation-based experiments use a three-category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  15. Correlation Equations for Condensing Heat Exchangers Based on an Algorithmic Performance-Data Classification

    NASA Astrophysics Data System (ADS)

    Pacheco-Vega, Arturo

    2016-09-01

    In this work a new set of correlation equations is developed and introduced to accurately describe the thermal performance of compact heat exchangers with possible condensation. The feasible operating conditions for the thermal system correspond to dry-surface, dropwise-condensation, and film-condensation regimes. Using a prescribed form for each condition, a global regression analysis for the best-fit correlation to experimental data is carried out with a simulated annealing optimization technique. The experimental data were taken from the literature and algorithmically classified into three groups, related to the possible operating conditions, with a previously introduced Gaussian-mixture-based methodology. Prior to their use in the analysis, the correct data classification was assessed and confirmed via artificial neural networks. Predictions from the correlations obtained for the different conditions are within the uncertainty of the experiments and substantially more accurate than those commonly used.
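
    The global-regression-by-simulated-annealing step can be illustrated with a generic sketch: fit a prescribed power-law correlation Nu = a*Re^b*Pr^c to synthetic performance data by minimizing the squared error with scipy's dual_annealing. The correlation form, parameter bounds, and data are illustrative assumptions, not the paper's actual forms or measurements.

```python
# Sketch: fit a prescribed correlation form to performance data with
# simulated annealing (scipy's dual_annealing). The power-law form
# Nu = a * Re**b * Pr**c and all data below are illustrative assumptions.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(9)

# Synthetic "experimental" data generated from a known correlation + noise.
Re = rng.uniform(2e3, 5e4, 200)
Pr = rng.uniform(0.7, 5.0, 200)
Nu_true = 0.023 * Re**0.8 * Pr**0.4
Nu_meas = Nu_true * (1 + rng.normal(0, 0.03, 200))      # 3% scatter

def sse(params):
    """Sum of squared relative errors of the candidate correlation."""
    a, b, c = params
    pred = a * Re**b * Pr**c
    return np.sum(((pred - Nu_meas) / Nu_meas) ** 2)

bounds = [(1e-4, 1.0), (0.1, 1.5), (0.0, 1.0)]          # a, b, c search ranges
result = dual_annealing(sse, bounds, seed=0)
a, b, c = result.x
print(f"best fit: Nu = {a:.4f} * Re^{b:.3f} * Pr^{c:.3f}")
```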

  16. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed

    Rocha, José C; Passalia, Felipe; Matos, Felipe D; Maserati, Marc P; Alves, Mayra F; Almeida, Tamie G de; Cardoso, Bruna L; Basso, Andrea C; Nogueira, Marcelo F G

    2016-08-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied to assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect the overall morphological quality of the embryo in cattle, or the quality of the individual embryonic structures, which is more relevant in human embryo classification. This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it is not a method capable of giving reliable and trustworthy results. The latest approaches for improving quality assessment include the use of data from cellular metabolism, a new morphological grading system, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, ion release by the embryo cells and so forth. There is now a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great importance to embryo evaluation by embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this sense, one being the use of digital images of the embryo as the basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment.

  17. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed Central

    Rocha, José C.; Passalia, Felipe; Matos, Felipe D.; Maserati Jr, Marc P.; Alves, Mayra F.; de Almeida, Tamie G.; Cardoso, Bruna L.; Basso, Andrea C.; Nogueira, Marcelo F. G.

    2016-01-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied to assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect the overall morphological quality of the embryo in cattle, or the quality of the individual embryonic structures, which is more relevant in human embryo classification. This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it is not a method capable of giving reliable and trustworthy results. The latest approaches for improving quality assessment include the use of data from cellular metabolism, a new morphological grading system, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, ion release by the embryo cells and so forth. There is now a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great importance to embryo evaluation by embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this sense, one being the use of digital images of the embryo as the basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment. PMID:27584609

  18. What nephropathologists need to know about antiphospholipid syndrome-associated nephropathy: Is it time for formulating a classification for renal morphologic lesions?

    PubMed

    Mubarak, Muhammed; Nasri, Hamid

    2014-01-01

    Antiphospholipid syndrome (APS) is a systemic autoimmune disorder which commonly affects the kidneys. The Directory of Open Access Journals (DOAJ), Google Scholar, PubMed (NLM), LISTA (EBSCO) and Web of Science were searched. There is sufficient epidemiological, clinical and histopathological evidence to show that antiphospholipid syndrome-associated nephropathy (APSN) is a distinctive lesion caused by antiphospholipid antibodies in patients with different forms of antiphospholipid syndrome. It is now time to devise a classification for accurate diagnosis and prognostication of the disease. Now that the morphological lesions of APSN are sufficiently well characterized, it is prime time to devise a classification which is of diagnostic and prognostic utility in this disease.

  19. What nephropathologists need to know about antiphospholipid syndrome-associated nephropathy: Is it time for formulating a classification for renal morphologic lesions?

    PubMed Central

    Mubarak, Muhammed; Nasri, Hamid

    2014-01-01

    Context: Antiphospholipid syndrome (APS) is a systemic autoimmune disorder which commonly affects the kidneys. Evidence Acquisitions: The Directory of Open Access Journals (DOAJ), Google Scholar, PubMed (NLM), LISTA (EBSCO) and Web of Science were searched. Results: There is sufficient epidemiological, clinical and histopathological evidence to show that antiphospholipid syndrome-associated nephropathy (APSN) is a distinctive lesion caused by antiphospholipid antibodies in patients with different forms of antiphospholipid syndrome. It is now time to devise a classification for accurate diagnosis and prognostication of the disease. Conclusions: Now that the morphological lesions of APSN are sufficiently well characterized, it is prime time to devise a classification which is of diagnostic and prognostic utility in this disease. PMID:24644536

  20. Automatic classification of singular elements for the electrostatic analysis of microelectromechanical systems

    NASA Astrophysics Data System (ADS)

    Su, Y.; Ong, E. T.; Lee, K. H.

    2002-05-01

    The past decade has seen an accelerated growth of technology in the field of microelectromechanical systems (MEMS). The development of MEMS products has generated the need for efficient analytical and simulation methods for minimizing the requirement for actual prototyping. The boundary element method is widely used in the electrostatic analysis for MEMS devices. However, singular elements are needed to accurately capture the behavior at singular regions, such as sharp corners and edges, where standard elements fail to give an accurate result. The manual classification of boundary elements based on their singularity conditions is an immensely laborious task, especially when the boundary element model is large. This process can be automated by querying the geometric model of the MEMS device for convex edges based on geometric information of the model. The associated nodes of the boundary elements on these edges can then be retrieved. The whole process is implemented in the MSC/PATRAN platform using the Patran Command Language (the source code is available as supplementary data in the electronic version of this journal issue).

  1. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.
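    A minimal sketch of evolving SVM hyperparameters with a simple evolutionary loop on synthetic data; this is not the authors' Legion-based EP implementation, and the mutation scheme, parameter bounds and dataset are assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                               random_state=0)

    def fitness(log_c, log_gamma):
        """Cross-validated accuracy of an RBF SVM with the given hyperparameters."""
        clf = SVC(C=10.0**log_c, gamma=10.0**log_gamma, kernel="rbf")
        return cross_val_score(clf, X, y, cv=5).mean()

    # Simple evolutionary loop: mutate parents, keep the fittest survivors.
    population = rng.uniform(low=[-2, -4], high=[3, 1], size=(10, 2))
    for generation in range(15):
        offspring = population + rng.normal(scale=0.3, size=population.shape)
        pool = np.vstack([population, offspring])
        scores = np.array([fitness(c, g) for c, g in pool])
        population = pool[np.argsort(scores)[-10:]]      # survival of the fittest

    best_c, best_gamma = population[-1]
    print(f"best SVM: C=10^{best_c:.2f}, gamma=10^{best_gamma:.2f}, "
          f"CV accuracy={fitness(best_c, best_gamma):.3f}")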

  2. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Database classification suffers from two well-known difficulties, i.e., the high dimensionality and non-stationary variations within the large historic data. This paper presents a hybrid classification model by integrating a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historic database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data being classified, using the inductions made by these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of the proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of the proposed model is the highest among the compared approaches.

  3. A thyroid nodule classification method based on TI-RADS

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yang, Yang; Peng, Bo; Chen, Qin

    2017-07-01

    The Thyroid Imaging Reporting and Data System (TI-RADS) is a valuable tool for differentiating benign and malignant thyroid nodules. In clinic, doctors can determine the extent to which a nodule is benign or malignant in terms of different classes by using TI-RADS. The classification represents the degree of malignancy of thyroid nodules. As a classification standard, TI-RADS can guide the ultrasound doctor to examine thyroid nodules more accurately and reliably. In this paper, we aim to classify thyroid nodules with the help of TI-RADS. To this end, four ultrasound signs, i.e., cystic and solid, echo pattern, boundary feature and calcification of thyroid nodules, are extracted and converted into feature vectors. Then a semi-supervised fuzzy C-means ensemble (SS-FCME) model is applied to obtain the classification results. The experimental results demonstrate that the proposed method can help doctors diagnose thyroid nodules effectively.
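    For illustration, a plain unsupervised fuzzy C-means routine (not the semi-supervised SS-FCME ensemble used in the paper) is sketched below, applied to synthetic feature vectors standing in for the four ultrasound signs.

    import numpy as np

    def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Minimal fuzzy C-means: returns cluster centers and the membership matrix U."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, n_clusters))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
        for _ in range(max_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U_new = 1.0 / (dist ** (2 / (m - 1)))
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return centers, U

    # Toy "ultrasound sign" feature vectors: two loose groups in 4-D feature space.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (30, 4)), rng.normal(1, 0.3, (30, 4))])
    centers, U = fuzzy_c_means(X, n_clusters=2)
    print("hard labels:", U.argmax(axis=1))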

  4. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on datasets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.

  5. Robust tissue classification for reproducible wound assessment in telemedicine environments

    NASA Astrophysics Data System (ADS)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting-condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from the image labeling by a college of experts.

  6. Highly efficient classification and identification of human pathogenic bacteria by MALDI-TOF MS.

    PubMed

    Hsieh, Sen-Yung; Tseng, Chiao-Li; Lee, Yun-Shien; Kuo, An-Jing; Sun, Chien-Feng; Lin, Yen-Hsiu; Chen, Jen-Kun

    2008-02-01

    Accurate and rapid identification of pathogenic microorganisms is of critical importance in disease treatment and public health. Conventional workflows are time-consuming, and the procedures are multifaceted. MS can be an alternative but is limited by low efficiency for amino acid sequencing as well as low reproducibility for spectrum fingerprinting. We systematically analyzed the feasibility of applying MS for rapid and accurate bacterial identification. Directly applying bacterial colonies, without further protein extraction, to MALDI-TOF MS analysis revealed rich peak contents and high reproducibility. The MS spectra derived from 57 isolates comprising six human pathogenic bacterial species were analyzed using both unsupervised hierarchical clustering and supervised model construction via a genetic algorithm. Hierarchical clustering analysis categorized the spectra into six groups precisely corresponding to the six bacterial species. Precise classification was also maintained in an independently prepared set of bacteria even when the number of m/z values was reduced to six. In parallel, classification models were constructed via genetic algorithm analysis. A model containing 18 m/z values accurately classified independently prepared bacteria and identified species originally not used for model construction. Moreover, samples with fewer than 10^4 bacterial cells and different species in bacterial mixtures were identified using the classification model approach. In conclusion, the application of MALDI-TOF MS in combination with suitable model construction provides a highly accurate method for bacterial classification and identification. The approach can identify bacteria with low abundance even in mixed flora, suggesting that rapid and accurate bacterial identification using MS techniques, even before culture, can be attained in the near future.
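    A small sketch of the unsupervised hierarchical clustering step, using synthetic peak-intensity vectors in place of real MALDI-TOF spectra; the linkage method and distance metric below are assumptions, not necessarily those used by the authors.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)

    # Synthetic "spectra": 3 species x 10 isolates, each a 50-bin peak-intensity vector.
    species_profiles = rng.random((3, 50))
    spectra = np.vstack([profile + 0.05 * rng.standard_normal((10, 50))
                         for profile in species_profiles])

    # Unsupervised hierarchical clustering (average linkage, correlation distance).
    Z = linkage(spectra, method="average", metric="correlation")
    labels = fcluster(Z, t=3, criterion="maxclust")
    print("cluster assignments:", labels)   # isolates of each species group together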

  7. A Swarm Optimization approach for clinical knowledge mining.

    PubMed

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization (PSO). Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and the classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Evolving rule-based systems in two medical domains using genetic programming.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan; Axer, Hubertus; Bjerregaard, Beth; von Keyserlingk, Diedrich Graf

    2004-11-01

    To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia subtypes and the classification of pap-smear examinations. Past data represented (a) successful diagnoses of aphasia subtypes, obtained from collaborating medical experts through a free interview per patient, and (b) smears (images of cells) correctly classified by cyto-technologists, previously stained using the Papanicolaou method. Initially a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule-based systems is attempted. Finally, another hybrid intelligent model is composed of a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. Results demonstrate the effectiveness of the proposed systems, which are also compared, in terms of efficiency, accuracy and comprehensibility, to an inductive machine learning approach and to a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts, performing competitively with other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of a higher classification score.

  9. Numeric pathologic lymph node classification shows prognostic superiority to topographic pN classification in esophageal squamous cell carcinoma.

    PubMed

    Sugawara, Kotaro; Yamashita, Hiroharu; Uemura, Yukari; Mitsui, Takashi; Yagi, Koichi; Nishida, Masato; Aikou, Susumu; Mori, Kazuhiko; Nomura, Sachiyo; Seto, Yasuyuki

    2017-10-01

    The current eighth edition tumor node metastasis pathologic lymph node (pN) staging system for esophageal squamous cell carcinoma is based solely on the number of metastatic nodes and does not consider anatomic distribution. We aimed to assess the prognostic capability of the eighth edition tumor node metastasis pathologic lymph node staging system (numeric-based) compared with the 11th Japan Esophageal Society (topography-based) pathologic lymph node staging system in patients with esophageal squamous cell carcinoma. We retrospectively reviewed the clinical records of 289 patients with esophageal squamous cell carcinoma who underwent esophagectomy with extended lymph node dissection from January 2006 through June 2016. We compared the discrimination abilities of these 2 staging systems for overall survival, recurrence-free survival, and cancer-specific survival using C-statistics. The median numbers of dissected and metastatic nodes were 61 (25% to 75% quartile range, 45 to 79) and 1 (25% to 75% quartile range, 0 to 3), respectively. The eighth edition tumor node metastasis pathologic lymph node staging system had a greater ability to accurately determine overall survival (C-statistics: tumor node metastasis classification, 0.69, 95% confidence interval, 0.62-0.76; Japan Esophageal Society classification, 0.65, 95% confidence interval, 0.58-0.71; P = .014) and cancer-specific survival (C-statistics: tumor node metastasis classification, 0.78, 95% confidence interval, 0.70-0.87; Japan Esophageal Society classification, 0.72, 95% confidence interval, 0.64-0.80; P = .018). Rates of total recurrence rose as the eighth edition tumor node metastasis pathologic lymph node stage increased, while stratification of patients according to the topography-based node classification system was not feasible. Numeric nodal staging is an essential tool for stratifying the oncologic outcomes of patients with esophageal squamous cell carcinoma, even in a cohort in which adequate numbers of lymph nodes were harvested. Copyright © 2017 Elsevier Inc. All rights reserved.
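    For readers unfamiliar with the C-statistic, a simplified pairwise implementation of Harrell's concordance index on toy right-censored survival data is sketched below; the data values and the tie-handling convention are illustrative assumptions.

    import numpy as np

    def harrell_c_index(time, event, risk):
        """Harrell's C: fraction of comparable pairs in which the higher-risk
        subject experiences the event earlier. Ties in risk count as 0.5."""
        time, event, risk = map(np.asarray, (time, event, risk))
        concordant, comparable = 0.0, 0
        n = len(time)
        for i in range(n):
            if not event[i]:
                continue                   # censored subjects cannot anchor a pair
            for j in range(n):
                if time[j] > time[i]:      # i failed before j was censored or failed
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    # Toy example: higher nodal burden (risk) should mean shorter survival.
    time  = [60, 34, 12, 48, 7, 25]
    event = [0,  1,  1,  0,  1, 1]
    risk  = [1,  3,  6,  2,  8, 4]          # e.g. number of metastatic nodes
    print(f"C-index = {harrell_c_index(time, event, risk):.2f}")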

  10. [Research progress in molecular classification of gastric cancer].

    PubMed

    Zhou, Menglong; Li, Guichao; Zhang, Zhen

    2016-09-25

    Gastric cancer (GC) is a highly heterogeneous malignancy. The widely used histopathological classifications have gradually failed to meet the needs of individualized diagnosis and treatment. The development of technologies such as microarrays and next-generation sequencing (NGS) has allowed GC to be studied at the molecular level. Mechanisms of tumorigenesis and progression of GC can be elucidated in terms of gene mutations, chromosomal alterations, and transcriptional and epigenetic changes, on the basis of which GC can be divided into several subtypes. The Tan, Lei, TCGA and ACRG classifications are relatively comprehensive. In particular, the TCGA and ACRG classifications have large sample sizes and abundant molecular profiling data; thus, the genomic characteristics of GC can be depicted more accurately. However, significant differences between the two classifications still exist, so that they cannot be substituted for each other. So far there is no widely accepted molecular classification of GC. Compared with the TCGA classification, the ACRG system may have more clinical significance for Chinese GC patients, since its samples are mostly from an Asian population and show better association with prognosis. The molecular classification of GC may provide the theoretical and experimental basis for early diagnosis, therapeutic efficacy prediction and treatment stratification, although its clinical application is still limited. Future work should involve the application of molecular classifications in clinical settings to improve the medical management of GC.

  11. Towards a ternary NIRS-BCI: single-trial classification of verbal fluency task, Stroop task and unconstrained rest

    NASA Astrophysics Data System (ADS)

    Schudlo, Larissa C.; Chau, Tom

    2015-12-01

    Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracy are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10-dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance the communication rates offered by NIRS-BCIs, improving the practicality of this technology.
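    The information transfer rate figures above can be related to accuracy and class count through the standard Wolpaw formula; the sketch below assumes that formula and a hypothetical trial duration, which may differ from the paper's exact computation.

    import math

    def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
        """Information transfer rate (bits/min) via the Wolpaw formula."""
        if accuracy <= 1.0 / n_classes:
            return 0.0                     # clamped at chance level by convention
        bits = math.log2(n_classes)
        if accuracy < 1.0:
            bits += (accuracy * math.log2(accuracy)
                     + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
        return bits * 60.0 / trial_seconds

    # Example: a ternary classifier at 71.7% accuracy with a hypothetical 60 s trial.
    print(f"{wolpaw_itr(3, 0.717, 60.0):.2f} bits/min")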

  12. Reliability of intracerebral hemorrhage classification systems: A systematic review.

    PubMed

    Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm

    2016-08-01

    Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.

  13. Acuity systems dialogue and patient classification system essentials.

    PubMed

    Harper, Kelle; McCully, Crystal

    2007-01-01

    Obtaining resources for quality patient care is a major responsibility of nurse leaders and requires accurate information in the political world of budgeting. Patient classification systems (PCS) assist nurse managers in controlling cost and improving patient care while using financial resources appropriately. This paper discusses acuity system development, background, flaws, and components, along with a few tools currently available. It also describes the development of a new acuity tool, the Patient Classification System. The PCS tool, developed in a small rural hospital, uses 5 broad concepts: (1) medications, (2) complicated procedures, (3) education, (4) psychosocial issues, and (5) complicated intravenous medications. These concepts are rated on a 4-tiered scale that differentiates significant patient characteristics, supports equitable patient staffing, and helps improve quality of care and performance. Data obtained through use of the PCS can be used by nurse leaders to lobby effectively and objectively for appropriate patient care resources. Two questionnaires distributed to registered nurses on a medical-surgical unit evaluated the nurses' opinions of the 5 concepts and their importance for establishing patient acuity for in-patient care. Interrater reliability among nurses was 87% with the authors' acuity tool.

  14. On the integrity of functional brain networks in schizophrenia, Parkinson's disease, and advanced age: Evidence from connectivity-based single-subject classification.

    PubMed

    Pläschke, Rachel N; Cieslik, Edna C; Müller, Veronika I; Hoffstaedter, Felix; Plachti, Anna; Varikuti, Deepthi P; Goosses, Mareike; Latz, Anne; Caspers, Svenja; Jockwitz, Christiane; Moebus, Susanne; Gruber, Oliver; Eickhoff, Claudia R; Reetz, Kathrin; Heller, Julia; Südmeyer, Martin; Mathys, Christian; Caspers, Julian; Grefkes, Christian; Kalenscher, Tobias; Langner, Robert; Eickhoff, Simon B

    2017-12-01

    Previous whole-brain functional connectivity studies achieved successful classifications of patients and healthy controls but only offered limited specificity as to affected brain systems. Here, we examined whether the connectivity patterns of functional systems affected in schizophrenia (SCZ), Parkinson's disease (PD), or normal aging equally translate into high classification accuracies for these conditions. We compared classification performance between pre-defined networks for each group and, for any given network, between groups. Separate support vector machine classifications of 86 SCZ patients, 80 PD patients, and 95 older adults relative to their matched healthy/young controls, respectively, were performed on functional connectivity in 12 task-based, meta-analytically defined networks using 25 replications of a nested 10-fold cross-validation scheme. Classification performance of the various networks clearly differed between conditions, as those networks that best classified one disease were usually non-informative for the other. For SCZ, but not PD, emotion-processing, empathy, and cognitive action control networks distinguished patients most accurately from controls. For PD, but not SCZ, networks subserving autobiographical or semantic memory, motor execution, and theory-of-mind cognition yielded the best classifications. In contrast, young-old classification was excellent based on all networks and outperformed both clinical classifications. Our pattern-classification approach captured associations between clinical and developmental conditions and functional network integrity with a higher level of specificity than did previous whole-brain analyses. Taken together, our results support resting-state connectivity as a marker of functional dysregulation in specific networks known to be affected by SCZ and PD, while suggesting that aging affects network integrity in a more global way. Hum Brain Mapp 38:5845-5858, 2017. © 2017 Wiley Periodicals, Inc.
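    A compact sketch of nested cross-validation with a linear SVM on synthetic stand-in features; the fold counts and hyperparameter grid are assumptions, and the study's 25-repetition scheme is reduced to a single run for brevity.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for patient-vs-control connectivity features of one network.
    X, y = make_classification(n_samples=160, n_features=66, n_informative=10,
                               random_state=0)

    # Inner loop tunes C; outer loop estimates an unbiased classification accuracy.
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)
    model = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="linear")),
        param_grid={"svc__C": [0.01, 0.1, 1, 10]},
        cv=inner,
    )
    scores = cross_val_score(model, X, y, cv=outer)
    print(f"nested-CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")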

  15. Mapping ecological systems with a random forest model: tradeoffs between errors and bias

    Treesearch

    Emilie Grossmann; Janet Ohmann; James Kagan; Heather May; Matthew Gregory

    2010-01-01

    New methods for predictive vegetation mapping allow improved estimations of plant community composition across large regions. Random Forest (RF) models limit over-fitting problems of other methods, and are known for making accurate classification predictions from noisy, nonnormal data, but can be biased when plot samples are unbalanced. We developed two contrasting...

  16. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    PubMed

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. In order to improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  17. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections

    PubMed Central

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. In order to improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved. PMID:27893761

  18. Auto-simultaneous laser treatment and Ohshiro's classification of laser treatment

    NASA Astrophysics Data System (ADS)

    Ohshiro, Toshio

    2005-07-01

    When the laser was first applied in medicine and surgery in the late 1960s and early 1970s, early adopters reported better wound healing and less postoperative pain with laser procedures compared with the same procedures performed with the cold scalpel or with electrothermy, and multiple surgical effects such as incision, vaporization and hemocoagulation could be achieved with the same laser beam. There was thus an added beneficial component which was associated only with laser surgery. This was first recognized as the '?-effect', was then classified by the author as simultaneous laser therapy, but is now more accurately classified by the author as part of the auto-simultaneous aspect of laser treatment. Indeed, with the dramatic increase in the applications of the laser in surgery and medicine over the last two decades there has been a parallel increase in the need for a standardized classification of laser treatment. Some classifications have been machine-based, and thus inaccurate, because at appropriate parameters a 'low-power laser' can produce a surgical effect and a 'high-power laser' a therapeutic one. A more accurate classification based on the tissue reaction, developed by the author, is presented. In addition, the author has devised a graphical representation of laser surgical and therapeutic beams whereby the laser type, parameters, penetration depth, and tissue reaction can all be shown in a single illustration, which the author has termed the 'Laser Apple', due to the typical pattern generated when a laser beam is incident on tissue. Laser/tissue reactions fall into three broad groups. If the photoreaction in the tissue is irreversible, then it is classified as high reactive-level laser treatment (HLLT). If some irreversible damage occurs together with reversible photodamage, as in tissue welding, the author refers to this as mid reactive-level laser treatment (MLLT). If the level of reaction in the target tissue is lower than the cells' survival threshold, then this is low reactive-level laser therapy (LLLT). All three of these classifications can occur simultaneously in the one target, and fall under the umbrella of laser treatment (LT). LT is further subdivided into three main types: mono-type LT (Mo-LT), treatment with a single laser system; multi-type LT (Mu-LT), treatment with multiple laser systems; and concomitant LT (Cc-LT), laser treatment in combination. Each of these is further subdivided by tissue reaction to give an accurate, treatment-based categorization of laser treatment. When this effect-based classification is combined with and illustrated by the appropriate Laser Apple pattern, an accurate and simple method of classifying laser/tissue reactions by the reaction, rather than by the laser used to produce the reaction, is achieved. Examples will be given to illustrate the author's new approach to this important concept.

  19. Counter unmanned aerial system testing and evaluation methodology

    NASA Astrophysics Data System (ADS)

    Kouhestani, C.; Woo, B.; Birch, G.

    2017-05-01

    Unmanned aerial systems (UAS) are increasing in flight times, ease of use, and payload sizes. Detection, classification, tracking, and neutralization of UAS is a necessary capability for infrastructure and facility protection. We discuss test and evaluation methodology developed at Sandia National Laboratories to establish a consistent, defendable, and unbiased means for evaluating counter unmanned aerial system (CUAS) technologies. The test approach described identifies test strategies, performance metrics, UAS types tested, key variables, and the necessary data analysis to accurately quantify the capabilities of CUAS technologies. The tests conducted, as defined by this approach, will allow for the determination of quantifiable limitations, strengths, and weaknesses in terms of detection, tracking, classification, and neutralization. Communicating the results of this testing in such a manner informs decisions by government sponsors and stakeholders that can be used to guide future investments and inform procurement, deployment, and advancement of such systems into their specific venues.

  20. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, ×) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional NNs, autoassociative morphological memories (AMMs) are a construct similar to Hopfield autoassociative memories, defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns have been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM-based classification techniques to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and simulated camera optical distortions. In particular, we examine two critical cases: (1) classification of multiple closely spaced signatures that are difficult to separate using distance measures, and (2) classification of materials in simulated hyperspectral images of spaceborne satellites. In each case, test data are derived from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to classical NN-based techniques.
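    A minimal numpy demonstration of the lattice-algebra autoassociative morphological memory W_XX and its perfect recall of noiseless stored patterns; it does not implement the paper's dendritic endmember procedure, and the random "signatures" are synthetic.

    import numpy as np

    def build_wxx(patterns: np.ndarray) -> np.ndarray:
        """W_XX memory over the lattice algebra:
        w_ij = min over stored patterns k of (x_i^k - x_j^k)."""
        diffs = patterns[:, :, None] - patterns[:, None, :]   # shape (k, n, n)
        return diffs.min(axis=0)

    def recall(W: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Max-plus recall: y_i = max_j (w_ij + x_j)."""
        return (W + x[None, :]).max(axis=1)

    rng = np.random.default_rng(0)
    X = rng.random((4, 6))                 # four real-valued 6-D "signatures"
    W = build_wxx(X)

    # Perfect recall of every noiseless stored pattern.
    for x in X:
        assert np.allclose(recall(W, x), x)
    print("all stored signatures recalled exactly")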

  1. Classifying coastal resources by integrating optical and radar imagery and color infrared photography

    USGS Publications Warehouse

    Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan

    1998-01-01

    A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photography, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy when compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of the classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Each individual sensor's contribution to the improved classification was compared to that of the six reflective TM bands alone. Individually, the green reflective CIR band and the SAR data identified broad categories of water, marsh, and forest. In combination with TM, the SAR data and the green CIR band improved overall accuracy by about 3% and 15%, respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies, by 2% to 70%, with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.

  2. Comparative Performance Analysis of Support Vector Machine, Random Forest, Logistic Regression and k-Nearest Neighbours in Rainbow Trout (Oncorhynchus Mykiss) Classification Using Image-Based Features

    PubMed Central

    Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry

    2018-01-01

    The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets, including Random forest (RF), Support vector machine (SVM), Logistic regression (LR) and k-Nearest neighbours (k-NN). The SVM with radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras could be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between image-based features and the fish diet received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating the diet's effects on fish skin. PMID:29596375

  3. Comparative Performance Analysis of Support Vector Machine, Random Forest, Logistic Regression and k-Nearest Neighbours in Rainbow Trout (Oncorhynchus Mykiss) Classification Using Image-Based Features.

    PubMed

    Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry

    2018-03-29

    The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets, including Random forest (RF), Support vector machine (SVM), Logistic regression (LR) and k-Nearest neighbours (k-NN). The SVM with radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras could be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between image-based features and the fish diet received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating the diet's effects on fish skin.
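    A sketch reproducing the comparison workflow with scikit-learn on synthetic stand-in features (the image-derived colour and texture features are not recreated here); preprocessing choices and model settings are assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # 27 image-based features (23 colour + 4 texture), two diet classes, 160 fish.
    X, y = make_classification(n_samples=160, n_features=27, n_informative=12,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                              random_state=0)

    models = {
        "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "Random forest": RandomForestClassifier(random_state=0),
        "Logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=1000)),
        "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name:20s} CCR={accuracy_score(y_te, pred):.2f} "
              f"kappa={cohen_kappa_score(y_te, pred):.2f}")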

  4. Global classification and coding of hypersensitivity diseases - An EAACI - WAO survey, strategic paper and review.

    PubMed

    Demoly, P; Tanno, L K; Akdis, C A; Lau, S; Calderon, M A; Santos, A F; Sanchez-Borges, M; Rosenwasser, L J; Pawankar, R; Papadopoulos, N G

    2014-05-01

    Hypersensitivity diseases are not adequately coded in the International Classification of Diseases (ICD)-10, resulting in misclassification, low visibility of these conditions and reduced accuracy of official statistics. To call attention to the inadequacy of the ICD-10 in relation to allergic and hypersensitivity diseases, and to contribute to improvements to be made in the forthcoming revision of the ICD, a web-based global survey of healthcare professionals' attitudes toward allergic disorders classification was proposed to the members of the European Academy of Allergy and Clinical Immunology (EAACI) (as individuals) and the World Allergy Organization (WAO) (with a representative responding on behalf of each national society), launched via the internet and circulated for 6 weeks. In total, 612 members from 144 countries across all six World Health Organization (WHO) global regions answered the survey. ICD-10 is the most used classification worldwide, but it was not considered appropriate for clinical practice by the majority of participants. The majority indicated the EAACI-WAO classification as being easier and more accurate in daily practice. They saw the need for a diagnostic system useful for non-allergists and endorsed the possibility of a global, cross-culturally applicable classification system of allergic disorders. This first and most broadly international survey ever conducted of health professionals' attitudes toward allergic disorders classification supports the need to update the current classifications of allergic diseases and can be useful to the WHO in improving the clinical utility of the classification and its global acceptability for the revised ICD-11. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Accurate mobile malware detection and classification in the cloud.

    PubMed

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominator of the smartphone operating system market, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service.

  6. Genome-Wide Comparative Gene Family Classification

    PubMed Central

    Frech, Christian; Chen, Nansheng

    2010-01-01

    Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
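    A compact sketch of the basic Markov clustering (MCL) expansion/inflation loop on a toy similarity graph; TRIBE-MCL adds protein-similarity preprocessing and parameter choices not shown here.

    import numpy as np

    def mcl(adjacency: np.ndarray, expansion=2, inflation=2.0, iters=50):
        """Basic Markov clustering: alternate matrix expansion and inflation
        on a column-stochastic transition matrix until it settles."""
        M = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
        M /= M.sum(axis=0, keepdims=True)                      # column-stochastic
        for _ in range(iters):
            M = np.linalg.matrix_power(M, expansion)           # expansion
            M = M ** inflation                                 # inflation
            M /= M.sum(axis=0, keepdims=True)
        # Rows retaining mass act as cluster "attractors"; group columns by them.
        clusters = {}
        for attractor in np.where(M.sum(axis=1) > 1e-9)[0]:
            members = tuple(np.where(M[attractor] > 1e-9)[0])
            clusters.setdefault(members, attractor)
        return list(clusters.keys())

    # Toy gene-similarity graph: two cliques {0,1,2} and {3,4} joined by one edge.
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    print(mcl(A))          # typically yields clusters covering {0,1,2} and {3,4}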

  7. A review of the automated detection and classification of acute leukaemia: Coherent taxonomy, datasets, validation and performance measurements, motivation, open challenges and recommendations.

    PubMed

    Alsalem, M A; Zaidan, A A; Zaidan, B B; Hashim, M; Madhloom, H T; Azeez, N D; Alsyisuf, S

    2018-05-01

    Acute leukaemia diagnosis is a field requiring automated solutions, tools and methods and the ability to facilitate early detection and even prediction. Many studies have focused on the automatic detection and classification of acute leukaemia and its subtypes to enable highly accurate diagnosis. This study aimed to review and analyse the literature related to the detection and classification of acute leukaemia. The factors considered to improve understanding of the field's various contextual aspects were the characteristics of published studies, their motivation, the open challenges that confronted researchers, and the recommendations presented to researchers to enhance this vital research area. We systematically searched all articles about the classification and detection of acute leukaemia, as well as their evaluation and benchmarking, in three main databases: ScienceDirect, Web of Science and IEEE Xplore, from 2007 to 2017. These indices were considered to be sufficiently extensive to encompass our field of literature. Based on our inclusion and exclusion criteria, 89 articles were selected. Most studies (58/89) focused on the methods or algorithms of acute leukaemia classification, a number of papers (22/89) covered developed systems for the detection or diagnosis of acute leukaemia, and a few papers (5/89) presented evaluation and comparative studies. The smallest portion (4/89) of articles comprised reviews and surveys. Acute leukaemia diagnosis, which is a field requiring automated solutions, tools and methods, entails the ability to facilitate early detection or even prediction. Many studies have been performed on the automatic detection and classification of acute leukaemia and its subtypes to promote accurate diagnosis. Research areas in medical-image classification vary, but they are all equally vital. We expect this systematic review to help emphasise current research opportunities and thus extend and create additional research fields. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. SLATE: scanning laser automatic threat extraction

    NASA Astrophysics Data System (ADS)

    Clark, David J.; Prickett, Shaun L.; Napier, Ashley A.; Mellor, Matthew P.

    2016-10-01

    SLATE is an Autonomous Sensor Module (ASM) designed to work with the SAPIENT system, providing accurate location tracking and classification of targets that pass through its field of view. The concept behind the SLATE ASM is to produce a sensor module that provides a view of the world complementary to the camera-based systems usually used for wide-area surveillance. Cameras provide a high-fidelity, human-understandable view of the world with which tracking and identification algorithms can be used. Unfortunately, positioning and tracking in a 3D environment are difficult to implement robustly, making location-based threat assessment challenging. SLATE uses a Scanning Laser Rangefinder (SLR) that provides precise (<1 cm) positions, sizes, shapes and velocities of targets within its field of view (FoV). In this paper we discuss the development of the SLATE ASM, including the techniques used to track and classify detections that move through the field of view of the sensor, providing accurate tracking information to the SAPIENT system. SLATE's ability to locate targets precisely allows subtle boundary-crossing judgements, e.g. on which side of a chain-link fence a target is. SLATE's ability to track targets in 3D throughout its FoV enables behavior classification, such as running and walking, which can provide an indication of intent and help reduce false alarm rates.

  9. Classification of Instructional Programs: 2000 Edition.

    ERIC Educational Resources Information Center

    Morgan, Robert L.; Hunt, E. Stephen

    This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, providing a taxonomic scheme that supports the accurate tracking, assessment, and reporting of field of study and program completions activity. This edition has also been adopted as the standard field of study taxonomy…

  10. Two-tier tissue decomposition for histopathological image representation and classification.

    PubMed

    Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2015-01-01

    In digital pathology, devising effective image representations is crucial to design robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise typically at the pixel-level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way of representing the image, which in turn affects classification accuracies. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different than the previous studies, these objects are defined combining texture, shape, and size information and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides distinguishing representation of normal and cancerous histopathological images, which is effective to obtain more accurate classification results compared to its counterparts.

  11. Pancreatic abnormalities detected by endoscopic ultrasound (EUS) in patients without clinical signs of pancreatic disease: any difference between standard and Rosemont classification scoring?

    PubMed

    Petrone, Maria Chiara; Terracciano, Fulvia; Perri, Francesco; Carrara, Silvia; Cavestro, Giulia Martina; Mariani, Alberto; Testoni, Pier Alberto; Arcidiacono, Paolo Giorgio

    2014-01-01

    The prevalence of nine EUS features of chronic pancreatitis (CP) according to the standard Wiersema classification has been investigated in 489 patients undergoing EUS for an indication not related to pancreatico-biliary disease. We showed that 82 subjects (16.8%) had at least one ductular or parenchymal abnormality. Among them, 18 (3.7% of the study population) had ≥3 Wiersema criteria suggestive of CP. Recently, a new classification (Rosemont) of EUS findings consistent with, suggestive of or indeterminate for CP has been proposed. The aims of this study were to stratify healthy subjects into different subgroups on the basis of EUS features of CP according to the Wiersema and Rosemont classifications and to evaluate the agreement in the diagnosis of CP between the two scoring systems. A weighted kappa statistic was computed to evaluate the strength of agreement between the two scoring systems. Univariate and multivariate analyses between any EUS abnormality and habits were performed. Eighty-two EUS videos were reviewed. Using the Wiersema classification, 18 subjects showed ≥3 EUS features suggestive of CP. The EUS diagnosis of CP in these 18 subjects was considered consistent in only one patient according to the Rosemont classification. The weighted kappa statistic was 0.34, showing that the strength of agreement was 'fair'. Alcohol use and smoking were identified as risk factors for having pancreatic abnormalities on EUS. The prevalence of EUS features consistent with or suggestive of CP in healthy subjects according to the Rosemont classification is lower than that assessed by the Wiersema criteria. In that regard the Rosemont classification seems to be more accurate in excluding clinically relevant CP. Overall agreement between the two classifications is fair. Copyright © 2014 IAP and EPC. Published by Elsevier B.V. All rights reserved.
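
    The agreement analysis above rests on a weighted kappa statistic. As an illustration only (not the authors' code), the following sketch shows how such an agreement value could be computed for two raters assigning ordinal CP grades to the same subjects, using scikit-learn's cohen_kappa_score; the grade labels and example data are hypothetical.

      # Hedged sketch: weighted kappa agreement between two EUS scoring systems.
      # The labels and data below are illustrative, not the study's actual ratings.
      from sklearn.metrics import cohen_kappa_score

      # Ordinal categories assigned to the same 10 subjects by two scoring systems
      # (0 = normal, 1 = indeterminate, 2 = suggestive, 3 = consistent with CP).
      wiersema_grades = [0, 0, 1, 2, 2, 3, 1, 0, 2, 3]
      rosemont_grades = [0, 1, 1, 1, 2, 2, 0, 0, 1, 3]

      # Linear weighting penalises disagreements in proportion to their distance
      # on the ordinal scale; 'quadratic' would penalise large disagreements more.
      kappa = cohen_kappa_score(wiersema_grades, rosemont_grades, weights="linear")
      print(f"weighted kappa = {kappa:.2f}")   # values around 0.2-0.4 are usually read as 'fair'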

  12. A proposal for a CT driven classification of left colon acute diverticulitis.

    PubMed

    Sartelli, Massimo; Moore, Frederick A; Ansaloni, Luca; Di Saverio, Salomone; Coccolini, Federico; Griffiths, Ewen A; Coimbra, Raul; Agresta, Ferdinando; Sakakushev, Boris; Ordoñez, Carlos A; Abu-Zidan, Fikri M; Karamarkovic, Aleksandar; Augustin, Goran; Costa Navarro, David; Ulrych, Jan; Demetrashvili, Zaza; Melo, Renato B; Marwah, Sanjay; Zachariah, Sanoop K; Wani, Imtiaz; Shelat, Vishal G; Kim, Jae Il; McFarlane, Michael; Pintar, Tadaja; Rems, Miran; Bala, Miklosh; Ben-Ishay, Offir; Gomes, Carlos Augusto; Faro, Mario Paulo; Pereira, Gerson Alves; Catani, Marco; Baiocchi, Gianluca; Bini, Roberto; Anania, Gabriele; Negoi, Ionut; Kecbaja, Zurabs; Omari, Abdelkarim H; Cui, Yunfeng; Kenig, Jakub; Sato, Norio; Vereczkei, Andras; Skrovina, Matej; Das, Koray; Bellanova, Giovanni; Di Carlo, Isidoro; Segovia Lohse, Helmut A; Kong, Victor; Kok, Kenneth Y; Massalou, Damien; Smirnov, Dmitry; Gachabayov, Mahir; Gkiokas, Georgios; Marinis, Athanasios; Spyropoulos, Charalampos; Nikolopoulos, Ioannis; Bouliaris, Konstantinos; Tepp, Jaan; Lohsiriwat, Varut; Çolak, Elif; Isik, Arda; Rios-Cruz, Daniel; Soto, Rodolfo; Abbas, Ashraf; Tranà, Cristian; Caproli, Emanuele; Soldatenkova, Darija; Corcione, Francesco; Piazza, Diego; Catena, Fausto

    2015-01-01

    Computed tomography (CT) imaging is the most appropriate diagnostic tool to confirm suspected left colonic diverticulitis. However, the utility of CT imaging goes beyond accurate diagnosis of diverticulitis; the grade of severity on CT imaging may drive treatment planning for patients presenting with acute diverticulitis. The appropriate management of left colon acute diverticulitis remains debated because of the vast spectrum of clinical presentations and the different treatment approaches proposed. The authors present a new, simple classification system based on CT scan findings that can drive decision making in the management of acute diverticulitis and that may be universally accepted for day-to-day practice.

  13. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.

  14. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Höynck, Michael

    2004-12-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
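
    The note-extraction step above rests on decomposing a time-frequency representation into independent subspaces. The sketch below is only a loose illustration of that idea, not the authors' algorithm: it computes a magnitude spectrogram with SciPy and applies FastICA from scikit-learn to separate it into a few components whose spectral profiles could then be mapped to candidate notes; the file name, component count and STFT settings are assumptions.

      # Hedged sketch of Independent Subspace Analysis-style note extraction.
      # 'segment.wav', n_components and the STFT parameters are illustrative choices.
      import numpy as np
      from scipy.io import wavfile
      from scipy.signal import stft
      from sklearn.decomposition import FastICA

      rate, audio = wavfile.read("segment.wav")           # one already-segmented audio chunk
      if audio.ndim > 1:
          audio = audio.mean(axis=1)                      # mix down to mono

      # Magnitude spectrogram: rows = time frames, columns = frequency bins.
      _, _, Z = stft(audio, fs=rate, nperseg=2048, noverlap=1536)
      S = np.abs(Z).T

      # Decompose the spectrogram into a small number of statistically independent
      # components; each component pairs a spectral profile with a time activation.
      ica = FastICA(n_components=6, random_state=0, max_iter=1000)
      activations = ica.fit_transform(S)                  # (frames, components)
      spectral_profiles = ica.components_                 # (components, freq bins)

      # A crude note hypothesis per component: the dominant frequency bin of its profile.
      freqs = np.fft.rfftfreq(2048, d=1.0 / rate)
      for k, profile in enumerate(spectral_profiles):
          f0 = freqs[np.argmax(np.abs(profile))]
          print(f"component {k}: dominant frequency ~ {f0:.1f} Hz")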

  15. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  16. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  17. Multiclass cancer diagnosis using tumor gene expression signatures

    DOE PAGES

    Ramaswamy, S.; Tamayo, P.; Rifkin, R.; ...

    2001-12-11

    The optimal treatment of patients with cancer depends on establishing accurate diagnoses by using a complex combination of clinical and histopathological data. In some instances, this task is difficult or impossible because of atypical clinical presentation or histopathology. To determine whether the diagnosis of multiple common adult malignancies could be achieved purely by molecular classification, we subjected 218 tumor samples, spanning 14 common tumor types, and 90 normal tissue samples to oligonucleotide microarray gene expression analysis. The expression levels of 16,063 genes and expressed sequence tags were used to evaluate the accuracy of a multiclass classifier based on a support vector machine algorithm. Overall classification accuracy was 78%, far exceeding the accuracy of random classification (9%). Poorly differentiated cancers resulted in low-confidence predictions and could not be accurately classified according to their tissue of origin, indicating that they are molecularly distinct entities with dramatically different gene expression patterns compared with their well differentiated counterparts. Taken together, these results demonstrate the feasibility of accurate, multiclass molecular cancer classification and suggest a strategy for future clinical implementation of molecular cancer diagnostics.
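
    The study above frames tumor typing as multiclass classification with a support vector machine over high-dimensional expression profiles. As a hedged, self-contained sketch of that setup (synthetic data standing in for the expression matrix, not the authors' 16,063-gene dataset), a one-vs-rest linear SVM can be trained and compared against the chance level implied by the class count:

      # Hedged sketch: one-vs-rest SVM for multiclass "tumor type" prediction.
      # The synthetic expression matrix below is a stand-in for real microarray data.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      n_classes = 14
      X, y = make_classification(n_samples=280, n_features=2000, n_informative=60,
                                 n_classes=n_classes, n_clusters_per_class=1,
                                 random_state=0)

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, stratify=y, random_state=0)

      clf = OneVsRestClassifier(
          make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000)))
      clf.fit(X_train, y_train)

      accuracy = clf.score(X_test, y_test)
      chance = 1.0 / n_classes   # about 7% for 14 balanced classes in this toy setup
      print(f"multiclass accuracy: {accuracy:.2f} (chance level ~ {chance:.2f})")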

  18. Neuromuscular disease classification system

    NASA Astrophysics Data System (ADS)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases and 58 structural features that the human eye cannot see, based on the assumption that the biopsy is considered as a graph, where the nodes are represented by each fiber, and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by the human visual inspection improves the categorization of atrophic patterns.

  19. The ESHRE/ESGE consensus on the classification of female genital tract congenital anomalies.

    PubMed

    Grimbizis, Grigoris F; Gordts, Stephan; Di Spiezio Sardo, Attilio; Brucker, Sara; De Angelis, Carlo; Gergolet, Marco; Li, Tin-Chiu; Tanos, Vasilios; Brölmann, Hans; Gianaroli, Luca; Campo, Rudi

    2013-08-01

    Which classification system is most suitable for an accurate, clear, simple and clinically oriented categorization of female genital anomalies? The new ESHRE/ESGE classification system of female genital anomalies is presented. Congenital malformations of the female genital tract are common miscellaneous deviations from normal anatomy with health and reproductive consequences. Until now, three systems have been proposed for their categorization, but all of them are associated with serious limitations. The European Society of Human Reproduction and Embryology (ESHRE) and the European Society for Gynaecological Endoscopy (ESGE) have established a common Working Group, under the name CONUTA (CONgenital UTerine Anomalies), with the goal of developing a new updated classification system. A scientific committee (SC) has been appointed to run the project, looking also for consensus within the scientists working in the field. The new system is designed and developed based on (i) scientific research through critical review of current proposals and preparation of an initial proposal for discussion among the experts, (ii) consensus measurement among the experts through the use of the DELPHI procedure and (iii) consensus development by the SC, taking into account the results of the DELPHI procedure and the comments of the experts. Almost 90 participants took part in the process of development of the ESHRE/ESGE classification system, contributing with their structured answers and comments. The ESHRE/ESGE classification system is based on anatomy. Anomalies are classified into the following main classes, expressing uterine anatomical deviations deriving from the same embryological origin: U0, normal uterus; U1, dysmorphic uterus; U2, septate uterus; U3, bicorporeal uterus; U4, hemi-uterus; U5, aplastic uterus; U6, for still unclassified cases. Main classes have been divided into sub-classes expressing anatomical varieties with clinical significance. Cervical and vaginal anomalies are classified independently into sub-classes having clinical significance. The ESHRE/ESGE classification of female genital anomalies seems to fulfill the expectations and the needs of the experts in the field, but its clinical value needs to be proved in everyday practice. The ESHRE/ESGE classification system of female genital anomalies could be used as a starting point for the development of guidelines for their diagnosis and treatment.

  20. Automatic ICD-10 multi-class classification of cause of death from plaintext autopsy reports through expert-driven feature selection.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2017-01-01

    Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90%) for most metrics using a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-coded cause of death from autopsy reports. The proposed system assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
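
    As an illustrative and deliberately simplified sketch of the pipeline described above (unigram features, a small selected feature subset, and a random forest evaluated with macro-averaged metrics), the snippet below uses scikit-learn. The example report texts and labels are invented placeholders, and a generic chi-squared filter stands in for the paper's expert-driven feature selection.

      # Hedged sketch of the autopsy-report pipeline: unigram features, a reduced
      # feature subset, a random forest, and macro-averaged metrics.
      # The reports/labels are invented; chi-squared selection stands in for the
      # paper's expert-driven feature selection (whose best subset size was 30).
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.metrics import precision_recall_fscore_support
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import Pipeline

      reports = ["deceased found in river, lungs waterlogged",
                 "multiple blunt force injuries after road traffic collision",
                 "full thickness burns over torso and limbs",
                 "fall from height with skull fracture"] * 10   # toy corpus
      labels = ["drowning", "traffic accident", "burns", "fall"] * 10

      X_train, X_test, y_train, y_test = train_test_split(
          reports, labels, test_size=0.25, stratify=labels, random_state=0)

      pipeline = Pipeline([
          ("unigrams", CountVectorizer(ngram_range=(1, 1), lowercase=True)),
          ("select", SelectKBest(chi2, k=20)),   # small subset; toy vocabulary is tiny
          ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
      ])
      pipeline.fit(X_train, y_train)

      y_pred = pipeline.predict(X_test)
      p, r, f, _ = precision_recall_fscore_support(y_test, y_pred,
                                                   average="macro", zero_division=0)
      print(f"precisionM={p:.2f} recallM={r:.2f} F-measureM={f:.2f}")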

  1. Automatic ICD-10 multi-class classification of cause of death from plaintext autopsy reports through expert-driven feature selection

    PubMed Central

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2017-01-01

    Objectives Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90%) for most metrics using a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and four baselines. Conclusion The proposed system is feasible and practical to use for automatic classification of ICD-10-coded cause of death from autopsy reports. The proposed system assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports. PMID:28166263

  2. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
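
    The localization step above estimates a target position by steepest-descent search over an energy-decay model. The following sketch illustrates that idea only in outline (it is not the paper's algorithm): it assumes an inverse-square energy decay with a known source energy, synthesizes noisy sensor readings, and descends the least-squares cost with a simple gradient-norm stopping rule in place of the paper's adaptive termination; all constants are illustrative.

      # Hedged sketch: energy-based target localization by steepest descent.
      # Assumes an inverse-square decay model with known source energy S; the sensor
      # layout, noise level, step size and tolerance are illustrative choices.
      import numpy as np

      rng = np.random.default_rng(0)
      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      true_target = np.array([6.0, 3.0])
      S = 100.0   # source energy at unit distance

      def predicted_energy(x):
          d2 = np.sum((sensors - x) ** 2, axis=1)
          return S / d2

      # Simulated noisy measurements at each sensor.
      measured = predicted_energy(true_target) + rng.normal(0.0, 0.05, size=len(sensors))

      def gradient(x):
          diff = x - sensors                       # (n_sensors, 2)
          d2 = np.sum(diff ** 2, axis=1)
          residual = measured - S / d2             # r_i = y_i - S / d_i^2
          # grad of sum r_i^2 is sum_i 4*S*r_i*(x - p_i)/d_i^4
          return (4.0 * S * residual / d2 ** 2) @ diff

      x = np.array([5.0, 5.0])                     # initial guess (e.g., sensor centroid)
      step, tol = 2e-3, 1e-6
      for _ in range(20000):
          g = gradient(x)
          if np.linalg.norm(g) < tol:              # termination on gradient norm
              break
          x -= step * g

      print("estimated target position:", np.round(x, 2))   # approaches [6, 3]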

  3. DeepPap: Deep Convolutional Networks for Cervical Cell Classification.

    PubMed

    Zhang, Ling; Le Lu; Nogues, Isabella; Summers, Ronald M; Liu, Shaoxiong; Yao, Jianhua

    2017-11-01

    Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, where cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are only built upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99) values, and especially specificity (98.3%), when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross validation. Similar superior performances are also achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
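
    A hedged sketch of the two ingredients described above (fine-tuning a pretrained ConvNet on nucleus-centred patches and averaging prediction scores over patches at test time) is given below. The ResNet-18 backbone, patch size and training loop are assumptions rather than the paper's exact architecture, and the pretrained-weights argument assumes torchvision 0.13 or newer.

      # Hedged sketch: fine-tune an ImageNet-pretrained ConvNet on nucleus-centred
      # patches, then average the softmax scores of several patches per cell.
      # Backbone, patch size and optimiser settings are illustrative assumptions.
      import torch
      import torch.nn as nn
      from torchvision import models

      num_classes = 2   # "normal" vs "abnormal"

      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      model.fc = nn.Linear(model.fc.in_features, num_classes)   # replace the head

      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
      criterion = nn.CrossEntropyLoss()

      def train_step(patches, labels):
          """One fine-tuning step on a batch of nucleus-centred patches (N, 3, 224, 224)."""
          model.train()
          optimizer.zero_grad()
          loss = criterion(model(patches), labels)
          loss.backward()
          optimizer.step()
          return loss.item()

      @torch.no_grad()
      def predict_cell(cell_patches):
          """Aggregate by averaging softmax scores over augmented patches of one cell."""
          model.eval()
          probs = torch.softmax(model(cell_patches), dim=1)   # (n_patches, num_classes)
          return probs.mean(dim=0)                            # averaged class scores

      # Example with random tensors standing in for real image patches:
      dummy_patches = torch.randn(8, 3, 224, 224)
      dummy_labels = torch.randint(0, num_classes, (8,))
      print("loss:", train_step(dummy_patches, dummy_labels))
      print("aggregated scores:", predict_cell(torch.randn(5, 3, 224, 224)))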

  4. Effects of stress typicality during speeded grammatical classification.

    PubMed

    Arciuli, Joanne; Cupples, Linda

    2003-01-01

    The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.

  5. Regional Climate Modeling over the Marmara Region, Turkey, with Improved Land Cover Data

    NASA Astrophysics Data System (ADS)

    Sertel, E.; Robock, A.

    2007-12-01

    Land surface controls the partitioning of available energy at the surface between sensible and latent heat, and controls partitioning of available water between evaporation and runoff. Current land cover data available within the regional climate models such as the Regional Atmospheric Modeling System (RAMS), the Fifth-Generation NCAR/Penn State Mesoscale Model (MM5) and Weather Research and Forecasting (WRF) were obtained from 1-km Advanced Very High Resolution Radiometer satellite images spanning April 1992 through March 1993 with an unsupervised classification technique. These data are not up-to-date and are not accurate for all regions and some land cover types such as urban areas. Here we introduce new, up-to-date and accurate land cover data for the Marmara Region, Turkey, derived from Landsat Enhanced Thematic Mapper images into the WRF regional climate model. We used several image processing techniques to create accurate land cover data from Landsat images obtained between 2001 and 2005. First, all images were atmospherically and radiometrically corrected to minimize contamination effects of atmospheric particles and systematic errors. Then, geometric correction was performed for each image to eliminate geometric distortions and define images in a common coordinate system. Finally, unsupervised and supervised classification techniques were utilized to form the most accurate land cover data yet for the study area. Accuracy assessments of the classifications were performed using error matrix and kappa statistics to find the best classification results. The maximum likelihood classification method gave the most accurate results over the study area. We compared the new land cover data with the default WRF land cover data. WRF land cover data cannot represent urban areas in the cities of Istanbul, Izmit, and Bursa. As an example, both original satellite images and new land cover data showed the expansion of urban areas into the Istanbul metropolitan area, but in the WRF land cover data only a limited area along the Bosporus is shown as urban. In addition, the new land cover data indicate that the northern part of Istanbul is covered by evergreen and deciduous forest (verified by ground truth data), but the WRF data indicate that most of this region is croplands. In the northern part of the Marmara Region, there is bare ground as a result of open mining activities and this class can be identified in our land cover data, whereas the WRF data indicated this region as woodland. We then used this new data set to conduct WRF simulations for one main and two nested domains, where the inner-most domain represents the Marmara Region with 3 km horizontal resolution. The vertical domain of both main and nested domains extends over 28 vertical levels. Initial and boundary conditions were obtained from National Centers for Environmental Prediction-Department of Energy Reanalysis II and the Noah model was selected as the land surface model. Two model simulations were conducted: one with available land cover data and one with the newly created land cover data. Using detailed meteorological station data within the study area, we find that the simulation with the new land cover data set produces better temperature and precipitation simulations for the region, showing the value of accurate land cover data and that changing land cover data can be an important influence on local climate change.

  6. LONGITUDINAL COHORT METHODS STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Exposure classification for occupational studies is relatively easy compared to predicting residential childhood exposures. Recent NHEXAS (Maryland) study articl...

  7. Pathological brain detection based on wavelet entropy and Hu moment invariants.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sun, Ping; Phillips, Preetha

    2015-01-01

    With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) to detect pathological brains from normal brains obtained by magnetic resonance imaging (MRI) scanning. The problem still remained a challenge for technicians and clinicians, since MR imaging generated an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The 10 runs of k-fold stratified cross validation result showed that the proposed "WE + HMI + GEPSVM + RBF" method was superior to existing methods w.r.t. classification accuracy. It obtained the average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
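
    As a rough, hedged illustration of the two feature families named above (not the authors' implementation, and with a plain RBF-kernel SVM substituted for GEPSVM), wavelet entropy can be computed from the subband energies of a 2-D wavelet decomposition and combined with Hu moment invariants:

      # Hedged sketch: wavelet-entropy + Hu-moment features for a brain MR slice,
      # classified with an RBF SVM (standing in for the paper's GEPSVM).
      # The random images below are placeholders for real MR slices.
      import numpy as np
      import pywt
      from skimage.measure import moments_central, moments_hu, moments_normalized
      from sklearn.svm import SVC

      def wavelet_entropy(image, wavelet="db4", level=2):
          """Shannon entropy of the relative subband energies of a 2-D DWT."""
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          energies = [np.sum(coeffs[0] ** 2)]
          for detail_level in coeffs[1:]:
              energies.extend(np.sum(band ** 2) for band in detail_level)
          p = np.array(energies) / np.sum(energies)
          return -np.sum(p * np.log2(p + 1e-12))

      def hu_features(image):
          """Seven Hu moment invariants of the image intensity distribution."""
          mu = moments_central(image)
          return moments_hu(moments_normalized(mu))

      def feature_vector(image):
          return np.concatenate([[wavelet_entropy(image)], hu_features(image)])

      rng = np.random.default_rng(0)
      images = [rng.random((64, 64)) for _ in range(10)] + \
               [rng.random((64, 64)) ** 3 for _ in range(10)]   # two toy "classes"
      labels = [0] * 10 + [1] * 10

      X = np.array([feature_vector(im) for im in images])
      clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
      print("training accuracy:", clf.score(X, labels))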

  8. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  9. Comparison of Neural Networks and Tabular Nearest Neighbor Encoding for Hyperspectral Signature Classification in Unresolved Object Detection

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.; Key, R.

    Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred. In this study, we compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to Bayesian techniques based on classical neural networks. [1] Winter, M.E. "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference On Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] N. Keshava, "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter. "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR", in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key. "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding" in Proceedings of the 2007 AMOS Conference, Maui HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz. "Autonomous single-pass endmember approximation using lattice auto-associative memories", Neurocomputing (Elsevier), accepted (June 2008).

  10. Military personnel recognition system using texture, colour, and SURF features

    NASA Astrophysics Data System (ADS)

    Irhebhude, Martins E.; Edirisinghe, Eran A.

    2014-06-01

    This paper presents an automatic, machine-vision-based, military personnel identification and classification system. Classification is done using a Support Vector Machine (SVM) on sets of Army, Air Force and Navy camouflage uniform personnel datasets. In the proposed system, the arm of service of personnel is recognised by the camouflage of a person's uniform, the type of cap and the type of badge/logo. The detailed analyses carried out include: camouflage cap and plain cap differentiation using gray level co-occurrence matrix (GLCM) texture features; classification of Army, Air Force and Navy camouflaged uniforms using GLCM texture and colour histogram bin features; and plain cap badge classification into Army, Air Force and Navy using Speeded Up Robust Features (SURF). The proposed method recognised the camouflage personnel arm of service on sets of data retrieved from Google Images and selected military websites. Correlation-based Feature Selection (CFS) was used to improve recognition and reduce dimensionality, thereby speeding up the classification process. With this method, success rates recorded during the analysis include 93.8% for the camouflage appearance category, and 100%, 90% and 100% for the plain cap and camouflage cap categories for Army, Air Force and Navy, respectively. Accurate recognition was recorded using SURF for the plain cap badge category. Substantial analysis has been carried out and the results show that the proposed method can correctly classify military personnel into various arms of service. We show that the proposed method can be integrated into a face recognition system, which would recognise personnel in addition to determining the arm of service to which the personnel belong. Such a system can be used to enhance the security of a military base or facility.
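
    The texture and colour cues described above can be reproduced in outline with standard tools. The sketch below is illustrative only, with synthetic images standing in for uniform patches: it computes GLCM texture properties and a coarse colour histogram with scikit-image and NumPy and feeds the concatenated features to an SVM. The SURF badge-matching step is omitted because SURF is not available in these libraries.

      # Hedged sketch: GLCM texture + colour-histogram features for an SVM classifier.
      # The random "uniform patches" are placeholders for real camouflage images.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def patch_features(rgb_patch):
          """GLCM texture properties of the grey image plus an 8-bin-per-channel histogram."""
          grey = rgb_patch.mean(axis=2).astype(np.uint8)
          glcm = graycomatrix(grey, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          texture = [graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")]
          hist = [np.histogram(rgb_patch[..., c], bins=8, range=(0, 255))[0]
                  for c in range(3)]
          return np.concatenate([texture, np.concatenate(hist)])

      rng = np.random.default_rng(0)
      # Toy dataset: two "arms of service" with different texture/colour statistics.
      patches = [rng.integers(0, 128, (64, 64, 3)) for _ in range(20)] + \
                [rng.integers(128, 256, (64, 64, 3)) for _ in range(20)]
      labels = ["army"] * 20 + ["navy"] * 20

      X = np.array([patch_features(p.astype(np.uint8)) for p in patches])
      clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
      print("training accuracy:", clf.score(X, labels))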

  11. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  12. Refining estimates of public health spending as measured in national health expenditures accounts: the United States experience.

    PubMed

    Sensenig, Arthur L

    2007-01-01

    Providing for the delivery of public health services and understanding the funding mechanisms for these services are topics of great currency in the United States. In 2002, the Department of Homeland Security was created and the responsibility for providing public health services was realigned among federal agencies. State and local public health agencies are under increased financial pressures even as they shoulder more responsibilities as the vital first link in the provision of public health services. Recent events, such as hurricanes Katrina and Rita, served to highlight the need to accurately assess the public health delivery system at all levels of government. The National Health Expenditure Accounts (NHEA), prepared by the National Health Statistics Group, measure expenditures on healthcare goods and services in the United States. Government public health activity constitutes an important service category in the NHEA. In the most recent set of estimates, Government Public Health Activity expenditures totaled $56.1 billion in 2004, or 3.0 percent of total US health spending. Accurately measuring expenditures for public health services in the United States presents many challenges. Among these challenges is the difficult task of defining what types of government activity constitute public health services. There is no clear-cut, universally accepted definition of government public health care services, and the definitions in the proposed International Classification for Health Accounts are difficult to apply to an individual country's unique delivery systems. Other challenges include the definitional issues associated with the boundaries of healthcare as well as the requirement that census and survey data collected from government(s) be compliant with the Classification of Functions of Government (COFOG), an internationally recognized classification system developed by the United Nations.

  13. Validation of the Japanese disease severity classification and the GAP model in Japanese patients with idiopathic pulmonary fibrosis.

    PubMed

    Kondoh, Shun; Chiba, Hirofumi; Nishikiori, Hirotaka; Umeda, Yasuaki; Kuronuma, Koji; Otsuka, Mitsuo; Yamada, Gen; Ohnishi, Hirofumi; Mori, Mitsuru; Kondoh, Yasuhiro; Taniguchi, Hiroyuki; Homma, Sakae; Takahashi, Hiroki

    2016-09-01

    The clinical course of idiopathic pulmonary fibrosis (IPF) shows great inter-individual differences. It is important to standardize the severity classification to accurately evaluate each patient's prognosis. In Japan, an original severity classification (the Japanese disease severity classification, JSC) is used. In the United States, the new multidimensional index and staging system (the GAP model) has been proposed. The objective of this study was to evaluate the model performance for the prediction of mortality risk of the JSC and GAP models using a large cohort of Japanese patients with IPF. This is a retrospective cohort study including 326 patients with IPF in the Hokkaido prefecture from 2003 to 2007. We obtained the survival curves of each stage of the GAP and JSC models to perform a comparison. In the GAP model, the prognostic value for mortality risk of Japanese patients was also evaluated. In the JSC, patient prognoses were roughly divided into two groups, mild cases (Stages I and II) and severe cases (Stages III and IV). In the GAP model, there was no significant difference in survival between Stages II and III, and the mortality rates in the patients classified into the GAP Stages I and II were underestimated. It is difficult to predict accurate prognosis of IPF using the JSC and the GAP models. A re-examination of the variables from the two models is required, as well as an evaluation of the prognostic value to revise the severity classification for Japanese patients with IPF. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  14. [What changes for rheumatologists in the G-DRG system 2006?].

    PubMed

    Liedtke-Dyong, A; Fiori, W; Lakomek, H-J; Hülsemann, J L; Köneke, N; Liman, W; Roeder, N

    2006-07-01

    Once more, the revision of the German DRG catalogue 2006 provides for more accurate reimbursement, particularly for specialised medical services. The newly established DRG I97Z (Rheumatologische Komplexbehandlung bei Krankheiten und Störungen an Muskel-Skelett-System und Bindegewebe) for the complex and multimodal treatment of rheumatic diseases allows an accurate picture of clinical practice in specialized rheumatologic departments and hospitals. Using this specific DRG-description, it will be possible to reduce the financial pressure which results from the redistribution of budgets in the second year of the period of convergence. A precondition for the affected hospitals is to deal with budget planning and calculation of G-DRGs without calculated cost weights for 2006. In addition, this article discusses the relevance of other modifications to the G-DRG system, additional payments, the conditions for payment, the coding standards, and the classification systems for diagnosis and procedures.

  15. Scalable metagenomic taxonomy classification using a reference genome database

    PubMed Central

    Ames, Sasha K.; Hysom, David A.; Gardner, Shea N.; Lloyd, G. Scott; Gokhale, Maya B.; Allen, Jonathan E.

    2013-01-01

    Motivation: Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. Results: A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take <20 h on a single node 40 core large memory machine and provide new insights on the metagenomic contents of the sample. Availability: Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat Contact: allen99@llnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23828782
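
    The scalability argument above rests on precomputing a taxonomy/genome index that maps sequence content to taxa, so that classification becomes fast lookups. The toy sketch below illustrates only that general idea (it is not LMAT): it builds a k-mer-to-taxon index from a few made-up reference sequences and classifies a read by voting over its k-mers.

      # Hedged toy sketch of an offline k-mer index used for fast read classification.
      # Reference sequences and taxa are invented; real indexes map k-mers to nodes
      # of a taxonomy and are built offline from thousands of genomes.
      from collections import Counter, defaultdict

      K = 8

      references = {
          "Escherichia": "ATGGCTAGCTAGGCTTACGATCGATCGGATCCGATCGTAGCTAGGCT",
          "Staphylococcus": "TTGACCGGTTAACCGGTTAAGGCCTTAAGGCCAATTGGCCAATTGG",
          "Saccharomyces": "ATGCCCGGGAAATTTCCCGGGATATATCGCGCGCTAGCTAGCTAACG",
      }

      def kmers(seq, k=K):
          return (seq[i:i + k] for i in range(len(seq) - k + 1))

      # Offline step: build the k-mer -> set-of-taxa index once.
      index = defaultdict(set)
      for taxon, genome in references.items():
          for kmer in kmers(genome):
              index[kmer].add(taxon)

      def classify(read):
          """Online step: vote over the taxa associated with the read's k-mers."""
          votes = Counter()
          for kmer in kmers(read):
              for taxon in index.get(kmer, ()):
                  votes[taxon] += 1
          return votes.most_common(1)[0] if votes else ("unclassified", 0)

      print(classify("GCTAGGCTTACGATCGATCGGATCC"))   # should vote for "Escherichia"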

  16. An automatic device for detection and classification of malaria parasite species in thick blood film.

    PubMed

    Kaewkamnerd, Saowaluck; Uthaipibull, Chairat; Intarapanich, Apichart; Pannarut, Montri; Chaotheing, Sastra; Tongsima, Sissades

    2012-01-01

    Current malaria diagnosis relies primarily on microscopic examination of Giemsa-stained thick and thin blood films. This method requires rigorously trained technicians to efficiently detect and classify the malaria parasite species such as Plasmodium falciparum (Pf) and Plasmodium vivax (Pv) for an appropriate drug administration. However, accurate classification of parasite species is difficult to achieve because of inherent technical limitations and human inconsistency. To improve performance of malaria parasite classification, many researchers have proposed automated malaria detection devices using digital image analysis. These image processing tools, however, focus on detection of parasites on thin blood films, which may not detect the existence of parasites due to the parasite scarcity on the thin blood film. The problem is aggravated under low parasitemia conditions. Automated detection and classification of parasites on thick blood films, which contain larger numbers of parasites per detection area, would address the previous limitation. The prototype of an automatic malaria parasite identification system is equipped with mountable motorized units for controlling the movements of objective lens and microscope stage. This unit was tested for its precision to move objective lens (vertical movement, z-axis) and microscope stage (in x- and y-horizontal movements). The average precision of x-, y- and z-axes movements were 71.481 ± 7.266 μm, 40.009 ± 0.000 μm, and 7.540 ± 0.889 nm, respectively. Classification of parasites on 60 Giemsa-stained thick blood films (40 blood films containing infected red blood cells and 20 control blood films of normal red blood cells) was tested using the image analysis module. By comparing our results with the ones verified by trained malaria microscopists, the prototype detected parasite-positive and parasite-negative blood films with 95% and 68.5% accuracy, respectively. For classification performance, thick blood films with Pv parasites were correctly classified with a success rate of 75% while the accuracy of Pf classification was 90%. This work presents an automatic device for both detection and classification of malaria parasite species on thick blood film. The system is based on digital image analysis and featuring motorized stage units, designed to be easily mounted on most conventional light microscopes used in the endemic areas. The constructed motorized module could control the movements of objective lens and microscope stage at high precision for effective acquisition of quality images for analysis. The analysis program could accurately classify parasite species, into Pf or Pv, based on distribution of chromatin size.

  17. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  18. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  19. Data fusion with artificial neural networks (ANN) for classification of earth surface from microwave satellite measurements

    NASA Technical Reports Server (NTRS)

    Lure, Y. M. Fleming; Grody, Norman C.; Chiou, Y. S. Peter; Yeh, H. Y. Michael

    1993-01-01

    A data fusion system with artificial neural networks (ANN) is used for fast and accurate classification of five earth surface conditions and surface changes, based on seven SSMI multichannel microwave satellite measurements. The measurements include brightness temperatures at 19, 22, 37, and 85 GHz at both H and V polarizations (only V at 22 GHz). The seven channel measurements are processed through a convolution computation such that all measurements are located on the same grid. Five surface classes (non-scattering surface, precipitation over land, precipitation over ocean, snow, and desert) are identified from ground-truth observations. The system processes sensory data in three consecutive phases: (1) pre-processing to extract feature vectors and enhance separability among detected classes; (2) preliminary classification of Earth surface patterns using two separate and parallel-acting classifiers: back-propagation neural network and binary decision tree classifiers; and (3) data fusion of results from preliminary classifiers to obtain the optimal performance in overall classification. Both the binary decision tree classifier and the fusion processing centers are implemented by neural network architectures. The fusion system configuration is a hierarchical neural network architecture, in which each functional neural net will handle different processing phases in a pipelined fashion. There is a total of around 13,500 samples for this analysis, of which 4 percent are used as the training set and 96 percent as the testing set. After training, this classification system is able to raise the detection accuracy to 94 percent, compared with 88 percent for back-propagation artificial neural networks and 80 percent for binary decision tree classifiers. The neural network data fusion classification is currently being integrated into an image processing system at NOAA and implemented in a prototype of a massively parallel and dynamically reconfigurable Modular Neural Ring (MNR).
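
    A minimal, hedged sketch of the fusion idea above (not the NOAA system) is to train a back-propagation neural network and a decision tree on the same features and combine their class-probability outputs; here the fusion is a simple average, whereas the paper implements the fusion stage itself as a neural network.

      # Hedged sketch: fuse a back-propagation neural network with a decision tree by
      # averaging their predicted class probabilities. Synthetic data stands in for
      # the seven-channel SSM/I brightness-temperature features.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier

      # 7 features ~ the seven microwave channels; 5 classes ~ the five surface types.
      X, y = make_classification(n_samples=2000, n_features=7, n_informative=6,
                                 n_redundant=0, n_classes=5, n_clusters_per_class=1,
                                 random_state=1)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

      nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
      tree = DecisionTreeClassifier(max_depth=8, random_state=1).fit(X_tr, y_tr)

      fused = (nn.predict_proba(X_te) + tree.predict_proba(X_te)) / 2.0
      y_fused = fused.argmax(axis=1)

      print("neural net accuracy:", nn.score(X_te, y_te))
      print("decision tree accuracy:", tree.score(X_te, y_te))
      print("fused accuracy:", (y_fused == y_te).mean())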

  20. Comparison of two Classification methods (MLC and SVM) to extract land use and land cover in Johor Malaysia

    NASA Astrophysics Data System (ADS)

    Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.

    2014-06-01

    Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. When creating land cover maps, it is important to minimize error, because errors propagate into later analyses based on these maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we analyzed multispectral data using two different classifiers: the Maximum Likelihood Classifier (MLC) and the Support Vector Machine (SVM). To this end, Landsat Thematic Mapper data and identical field-based training sample datasets for Johor, Malaysia, were used with each classification method, yielding five land cover classes: forest, oil palm, urban area, water, and rubber. Classification results indicate that SVM was more accurate than MLC. With its demonstrated capability to produce reliable results, the SVM method should be especially useful for land cover classification.
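
    As a rough illustration of the comparison described here, the sketch below trains a Gaussian maximum-likelihood classifier (quadratic discriminant analysis, which is equivalent under per-class Gaussian assumptions) and an SVM on the same synthetic training samples; the band values, class labels and parameters are assumptions, not the Landsat data used in the study.

    ```python
    # Minimal sketch: MLC (per-class Gaussian likelihoods, i.e. QDA) vs. SVM
    # trained on identical, synthetic training samples.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    n_classes, n_bands = 5, 6                  # forest, oil palm, urban, water, rubber
    X = np.vstack([rng.normal(loc=c, size=(200, n_bands)) for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), 200)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
    svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
    print("MLC accuracy:", accuracy_score(y_te, mlc.predict(X_te)))
    print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
    ```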

  1. Chemometric and multivariate statistical analysis of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulfides.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2012-03-20

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra of mineral samples are complex, comprised of large mass ranges and many peaks. Consequently, characterization and classification analysis of these systems is challenging. In this study, different chemometric and statistical data evaluation methods, based on monolayer sensitive TOF-SIMS data, have been tested for the characterization and classification of copper-iron sulfide minerals (chalcopyrite, chalcocite, bornite, and pyrite) at different flotation pulp conditions (feed, conditioned feed, and Eh modified). The complex mass spectral data sets were analyzed using the following chemometric and statistical techniques: principal component analysis (PCA); principal component-discriminant functional analysis (PC-DFA); soft independent modeling of class analogy (SIMCA); and k-Nearest Neighbor (k-NN) classification. PCA was found to be an important first step in multivariate analysis, providing insight into both the relative grouping of samples and the elemental/molecular basis for those groupings. For samples exposed to oxidative conditions (at Eh ~430 mV), each technique (PCA, PC-DFA, SIMCA, and k-NN) was found to produce excellent classification. For samples at reductive conditions (at Eh ~ -200 mV SHE), k-NN and SIMCA produced the most accurate classification. Phase identification of particles that contain the same elements but a different crystal structure in a mixed multimetal mineral system has been achieved.

  2. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673

  3. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation.

  4. Fusion with Language Models Improves Spelling Accuracy for ERP-based Brain Computer Interface Spellers

    PubMed Central

    Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie

    2013-01-01

    Event-related potentials (ERP) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter that depends only on EEG responses will not, in general, be sufficiently accurate for single-trial operation, and existing systems use many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporation of a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as numbers of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of words and single-trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI-based typing system. PMID:22255652
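
    The Bayesian fusion step can be summarized as multiplying the ERP detector's likelihoods by the language-model prior over the next letter and renormalizing. The sketch below uses made-up probabilities, not the regularized discriminant analysis scores or a trained n-gram model.

    ```python
    # Minimal sketch of Bayesian fusion of a language-model prior with ERP evidence.
    import numpy as np

    letters = np.array(list("abcdefghijklmnopqrstuvwxyz"))

    # P(next letter | typed history), e.g. from an n-gram model (placeholder values).
    lm_prior = np.full(len(letters), 1.0 / len(letters))
    lm_prior[letters == "e"] = 0.2             # assume the history strongly favours 'e'
    lm_prior /= lm_prior.sum()

    # P(EEG evidence | letter) from the ERP detector (placeholder values).
    erp_likelihood = np.random.default_rng(2).random(len(letters))

    posterior = erp_likelihood * lm_prior      # Bayes rule, up to normalization
    posterior /= posterior.sum()
    print("selected letter:", letters[posterior.argmax()])
    ```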

  5. Construction of an Yucatec Maya soil classification and comparison with the WRB framework

    PubMed Central

    2010-01-01

    Background Mayas living in southeast Mexico have used soils for millennia and provide thus a good example for understanding soil-culture relationships and for exploring the ways indigenous people name and classify the soils of their territory. This paper shows an attempt to organize the Maya soil knowledge into a soil classification scheme and compares the latter with the World Reference Base for Soil Resources (WRB). Methods Several participative soil surveys were carried out in the period 2000-2009 with the help of bilingual Maya-Spanish-speaking farmers. A multilingual soil database was built with 315 soil profile descriptions. Results On the basis of the diagnostic soil properties and the soil nomenclature used by Maya farmers, a soil classification scheme with a hierarchic, dichotomous and open structure was constructed, organized in groups and qualifiers in a fashion similar to that of the WRB system. Maya soil properties were used at the same categorical levels as similar diagnostic properties are used in the WRB system. Conclusions The Maya soil classification (MSC) is a natural system based on key properties, such as relief position, rock types, size and quantity of stones, color of topsoil and subsoil, depth, water dynamics, and plant-supporting processes. The MSC addresses the soil properties of surficial and subsurficial horizons, and uses plant communities as qualifier in some cases. The MSC is more accurate than the WRB for classifying Leptosols. PMID:20152047

  6. Construction of an Yucatec Maya soil classification and comparison with the WRB framework.

    PubMed

    Bautista, Francisco; Zinck, J Alfred

    2010-02-13

    Mayas living in southeast Mexico have used soils for millennia and provide thus a good example for understanding soil-culture relationships and for exploring the ways indigenous people name and classify the soils of their territory. This paper shows an attempt to organize the Maya soil knowledge into a soil classification scheme and compares the latter with the World Reference Base for Soil Resources (WRB). Several participative soil surveys were carried out in the period 2000-2009 with the help of bilingual Maya-Spanish-speaking farmers. A multilingual soil database was built with 315 soil profile descriptions. On the basis of the diagnostic soil properties and the soil nomenclature used by Maya farmers, a soil classification scheme with a hierarchic, dichotomous and open structure was constructed, organized in groups and qualifiers in a fashion similar to that of the WRB system. Maya soil properties were used at the same categorical levels as similar diagnostic properties are used in the WRB system. The Maya soil classification (MSC) is a natural system based on key properties, such as relief position, rock types, size and quantity of stones, color of topsoil and subsoil, depth, water dynamics, and plant-supporting processes. The MSC addresses the soil properties of surficial and subsurficial horizons, and uses plant communities as qualifier in some cases. The MSC is more accurate than the WRB for classifying Leptosols.

  7. Tissue classification and segmentation of pressure injuries using convolutional neural networks.

    PubMed

    Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel

    2018-06-01

    This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin injuries that require frequent diagnosis and treatment. Therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes the flash light and creates a set of 5x5 sub-images that are used as input to the CNN. The network output classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been proven to make recognition of complicated structures in biomedical images feasible. Copyright © 2018 Elsevier B.V. All rights reserved.
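
    A patch-wise CNN of the kind sketched in this record can be written in a few lines. The block below is a toy PyTorch version assuming 5x5 RGB sub-images and the three tissue classes; the layer sizes, optimizer settings and random data are illustrative, not the authors' architecture.

    ```python
    # Minimal sketch: CNN classifying 5x5 RGB patches into three tissue classes.
    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3),   # 5x5 -> 3x3 feature maps
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * 3 * 3, n_classes),  # granulation / slough / necrotic
            )

        def forward(self, x):
            return self.net(x)

    model = PatchCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    patches = torch.randn(64, 3, 5, 5)             # a batch of 5x5 RGB sub-images
    labels = torch.randint(0, 3, (64,))
    for _ in range(10):                            # toy training loop
        opt.zero_grad()
        loss = loss_fn(model(patches), labels)
        loss.backward()
        opt.step()
    print(model(patches).argmax(dim=1)[:8])
    ```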

  8. The Traumatized TFCC: An Illustrated Review of the Anatomy and Injury Patterns of the Triangular Fibrocartilage Complex.

    PubMed

    Skalski, Matthew R; White, Eric A; Patel, Dakshesh B; Schein, Aaron J; RiveraMelo, Hector; Matcuk, George R

    2016-01-01

    The triangular fibrocartilage complex (TFCC) plays an important role in wrist biomechanics and is prone to traumatic and degenerative injury, making it a common source of ulnar-sided wrist pain. Because of this, the TFCC is frequently imaged, and a detailed understanding of its anatomy and injury patterns is critical in generating an accurate report to help guide treatment. In this review, we provide a detailed overview of TFCC anatomy, its normal appearance on magnetic resonance imaging, the spectrum of TFCC injuries based on the Palmer classification system, and pitfalls in accurate assessment. Copyright © 2015 Mosby, Inc. All rights reserved.

  9. Accurate positioning based on acoustic and optical sensors

    NASA Astrophysics Data System (ADS)

    Cai, Kerong; Deng, Jiahao; Guo, Hualing

    2009-11-01

    An unattended laser target designator (ULTD) was designed to partly take the place of conventional LTDs for accurate positioning and laser marking. After analyzing the precision, accuracy, and error sources of the acoustic sensor array, the requirements of the laser generator, and the image analysis and tracking technology, the major system modules were determined. The target's classification, velocity, and position can be measured by the sensors, and a coded laser beam is then emitted intelligently to mark the optimal position at the optimal time. The conclusion shows that the ULTD can not only avoid security threats, be deployed massively, and accomplish battle damage assessment (BDA), but is also well suited to information-based warfare.

  10. Electro-optical seasonal weather and gender data collection

    NASA Astrophysics Data System (ADS)

    McCoppin, Ryan; Koester, Nathan; Rude, Howard N.; Rizki, Mateen; Tamburino, Louis; Freeman, Andrew; Mendoza-Schrock, Olga

    2013-05-01

    This paper describes the process used to collect the Seasonal Weather And Gender (SWAG) dataset, an electro-optical dataset of human subjects that can be used to develop advanced gender classification algorithms. Several novel features characterize this ongoing effort: (1) the human subjects self-label their gender by performing a specific action during the data collection, and (2) the data collection will span months and even years, resulting in a dataset containing realistic levels and types of clothing corresponding to the various seasons and weather conditions. It is envisioned that this type of data will support the development and evaluation of more robust gender classification systems that are capable of accurate gender recognition under extended operating conditions.

  11. Identification of agricultural crops by computer processing of ERTS MSS data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Cipra, J. E.

    1973-01-01

    Quantitative evaluation of computer-processed ERTS MSS data classifications has shown that major crop species (corn and soybeans) can be accurately identified. The classifications of satellite data over a 2000 square mile area not only covered more than 100 times the area previously covered using aircraft, but also yielded improved results through the use of temporal and spatial data in addition to the spectral information. Furthermore, training sets could be extended over far larger areas than was ever possible with aircraft scanner data. And, preliminary comparisons of acreage estimates from ERTS data and ground-based systems agreed well. The results demonstrate the potential utility of this technology for obtaining crop production information.

  12. Classification of forest land attributes using multi-source remotely sensed data

    NASA Astrophysics Data System (ADS)

    Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri

    2016-02-01

    The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and (2) identify the best performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agricultural Organization of the United Nations) classes, main type, site type, peat land type and drainage status. A particular interest was to test different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and the remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of the LU/LC classes was highly accurate (kappa values 0.90 and 0.91), and the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. In the best classification models of forest site type, both spectral metrics from satellite data and point cloud metrics from ALS were used. In turn, in the classification of peat land types, ALS point cloud metrics played the most important role. The results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.
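
    For readers unfamiliar with the classifier used here, the sketch below fits a multinomial logistic regression to synthetic plot-level predictors and reports a kappa value; the feature matrix, class labels and kappa computation are illustrative stand-ins for the NFI plots and ALS/satellite metrics.

    ```python
    # Minimal sketch: multinomial logistic regression on plot-level predictors.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(828, 10))             # e.g. ALS surface + spectral metrics per plot
    y = rng.integers(0, 4, size=828)           # e.g. four site-type classes

    model = LogisticRegression(max_iter=1000)  # multinomial softmax for multiclass targets
    model.fit(X, y)
    print("kappa:", cohen_kappa_score(y, model.predict(X)))
    ```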

  13. Halitosis: a new definition and classification.

    PubMed

    Aydin, M; Harvey-Woodworth, C N

    2014-07-11

    There is no universally accepted, precise definition of halitosis, nor standardisation in its terminology and classification. The aim is to propose a new definition that is free from subjective descriptions (faecal, fish odour, etc), one-time sulphide detector readings and organoleptic estimation of odour levels, and that excludes temporary exogenous odours (for example, from dietary sources). Some terms previously used in the literature are revised. A new aetiologic classification is proposed, dividing pathologic halitosis into Type 1 (oral), Type 2 (airway), Type 3 (gastroesophageal), Type 4 (blood-borne) and Type 5 (subjective). In reality, any halitosis complaint is potentially the sum of these types in any combination, superimposed on the Type 0 (physiologic odour) present in health. This system allows for multiple diagnoses in the same patient, reflecting the multifactorial nature of the complaint. It represents the most accurate model for understanding halitosis and forms an efficient and logical basis for the clinical management of the complaint.

  14. Sea ice type maps from Alaska synthetic aperture radar facility imagery: An assessment

    NASA Technical Reports Server (NTRS)

    Fetterer, Florence M.; Gineris, Denise; Kwok, Ronald

    1994-01-01

    Synthetic aperture radar (SAR) imagery received at the Alaskan SAR Facility is routinely and automatically classified on the Geophysical Processor System (GPS) to create ice type maps. We evaluated the wintertime performance of the GPS classification algorithm by comparing ice type percentages from supervised classification with percentages from the algorithm. The root mean square (RMS) difference for multiyear ice is about 6%, while the inconsistency in supervised classification is about 3%. The algorithm separates first-year from multiyear ice well, although it sometimes fails to correctly classify new ice and open water owing to the wide distribution of backscatter for these classes. Our results imply a high degree of accuracy and consistency in the growing archive of multiyear and first-year ice distribution maps. These results have implications for heat and mass balance studies which are furthered by the ability to accurately characterize ice type distributions over a large part of the Arctic.

  15. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
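
    A classifier of the kind described here, with features from the time signal and its spectrum feeding a feed-forward network, can be prototyped quickly. The sketch below uses synthetic features and four aircraft classes as assumptions; it is not the project's trained system.

    ```python
    # Minimal sketch: feed-forward network on acoustic time/frequency features.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(600, 24))             # time-domain + spectral features per event
    y = rng.integers(0, 4, size=600)           # jet, helicopter, single-, multi-engine prop

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X_tr, y_tr)
    print("test accuracy:", net.score(X_te, y_te))
    ```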

  16. Object-based forest classification to facilitate landscape-scale conservation in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Mitchell, Michael; Wilson, R. Randy; Twedt, Daniel J.; Mini, Anne E.; James, J. Dale

    2016-01-01

    The Mississippi Alluvial Valley is a floodplain along the southern extent of the Mississippi River extending from southern Missouri to the Gulf of Mexico. This area once encompassed nearly 10 million ha of floodplain forests, most of which has been converted to agriculture over the past two centuries. Conservation programs in this region revolve around protection of existing forest and reforestation of converted lands. Therefore, an accurate and up to date classification of forest cover is essential for conservation planning, including efforts that prioritize areas for conservation activities. We used object-based image analysis with Random Forest classification to quickly and accurately classify forest cover. We used Landsat band, band ratio, and band index statistics to identify and define similar objects as our training sets instead of selecting individual training points. This provided a single rule-set that was used to classify each of the 11 Landsat 5 Thematic Mapper scenes that encompassed the Mississippi Alluvial Valley. We classified 3,307,910±85,344 ha (32% of this region) as forest. Our overall classification accuracy was 96.9% with Kappa statistic of 0.96. Because this method of forest classification is rapid and accurate, assessment of forest cover can be regularly updated and progress toward forest habitat goals identified in conservation plans can be periodically evaluated.
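
    The object-based Random Forest step can be illustrated with a toy per-object feature table; the features (band statistics, ratios, indices) and forest/non-forest labels below are assumptions, not the rule-set derived for the Landsat scenes.

    ```python
    # Minimal sketch: Random Forest classification of image objects as forest / non-forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    object_features = rng.normal(size=(2000, 12))   # per-object band / ratio / index statistics
    labels = rng.integers(0, 2, size=2000)          # 1 = forest, 0 = non-forest

    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("CV accuracy:", cross_val_score(rf, object_features, labels, cv=5).mean())
    ```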

  17. Delineation and geometric modeling of road networks

    NASA Astrophysics Data System (ADS)

    Poullis, Charalambos; You, Suya

    In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, thereby removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping and results in segmentations with better-defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them into their polygonal representations.

  18. Methods for Real-Time Prediction of the Mode of Travel Using Smartphone-Based GPS and Accelerometer Data

    PubMed Central

    Martin, Bryan D.; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling

    2017-01-01

    We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data that required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy. PMID:28885550
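
    The accuracy-versus-dimensionality idea can be prototyped by sweeping the number of retained components and scoring each setting with a simple penalized accuracy; the PCA/random-forest pipeline, the penalty weight and the synthetic features below are assumptions rather than the paper's exact procedure.

    ```python
    # Minimal sketch: dimension reduction + classification with a dimensionality penalty.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    X = rng.normal(size=(1500, 30))            # smartphone GPS + accelerometer features
    y = rng.integers(0, 5, size=1500)          # walking, biking, car, bus, rail

    for k in (2, 5, 10, 20):
        pipe = make_pipeline(PCA(n_components=k), RandomForestClassifier(random_state=0))
        acc = cross_val_score(pipe, X, y, cv=3).mean()
        score = acc - 0.002 * k                # illustrative penalty on dimensionality
        print(f"k={k:2d}  accuracy={acc:.3f}  combined score={score:.3f}")
    ```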

  19. Methods for Real-Time Prediction of the Mode of Travel Using Smartphone-Based GPS and Accelerometer Data.

    PubMed

    Martin, Bryan D; Addona, Vittorio; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling

    2017-09-08

    We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data that required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy.

  20. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations.

    PubMed

    Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto

    2018-01-01

    Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries in surgical time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of the brain affected by glioblastoma tumor, from five different patients, have been used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, providing an accurate delineation of the tumor area.
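
    The supervised/unsupervised fusion can be mimicked in a few lines: a pixel-wise SVM map is combined with a k-means segmentation by giving every pixel in a cluster that cluster's majority SVM label. The spectra, class names and cluster count below are synthetic assumptions; the spatial homogenization and hierarchical clustering of the actual pipeline are omitted.

    ```python
    # Minimal sketch: SVM pixel classification fused with k-means clusters by majority vote.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    pixels = rng.normal(size=(5000, 50))           # hyperspectral pixel spectra
    train_idx = rng.choice(5000, 500, replace=False)
    train_labels = rng.integers(0, 4, size=500)    # e.g. tumor, normal, blood, background

    svm_map = SVC().fit(pixels[train_idx], train_labels).predict(pixels)
    clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(pixels)

    # Majority voting: each cluster inherits its dominant SVM label.
    fused = np.empty_like(svm_map)
    for c in np.unique(clusters):
        members = clusters == c
        fused[members] = np.bincount(svm_map[members]).argmax()
    print(fused[:20])
    ```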

  1. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations

    PubMed Central

    Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O’Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto

    2018-01-01

    Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries in surgical time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of the brain affected by glioblastoma tumor, from five different patients, have been used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, providing an accurate delineation of the tumor area. PMID:29554126

  2. Overweight and obesity prevalence among Cree youth of Eeyou Istchee according to three body mass index classification systems.

    PubMed

    St-Jean, Audray; Meziou, Salma; Ayotte, Pierre; Lucas, Michel

    2017-11-22

    Little is known about the suitability of three commonly used body mass index (BMI) classification systems for Indigenous youth. We estimated overweight and obesity prevalence among Cree youth of Eeyou Istchee according to three BMI classification systems, assessed the level of agreement between them, and evaluated their accuracy through body fat and cardiometabolic risk factors. Data on 288 youth (aged 8-17 years) were collected. Overweight and obesity prevalence were estimated with Centers for Disease Control and Prevention (CDC), International Obesity Task Force (IOTF) and World Health Organization (WHO) criteria. Agreement was measured with weighted kappa (κw). Associations with body fat and cardiometabolic risk factors were evaluated by analysis of variance. Obesity prevalence was 42.7% with IOTF, 47.2% with CDC, and 49.3% with WHO criteria. Agreement was almost perfect between IOTF and CDC (κw = 0.93), IOTF and WHO (κw = 0.91), and WHO and CDC (κw = 0.94). Means of body fat and cardiometabolic risk factors were significantly higher (P-trend < 0.001) from normal weight to obesity, regardless of the system used. Youth considered overweight by IOTF but obese by CDC or WHO exhibited less severe clinical obesity. IOTF seems to be more accurate in identifying obesity in Cree youth.

  3. Fall Detection System for the Elderly Based on the Classification of Shimmer Sensor Prototype Data

    PubMed Central

    Ahmed, Moiz; Mehmood, Nadeem; Mehmood, Amir; Rizwan, Kashif

    2017-01-01

    Objectives Falling in the elderly is considered a major cause of death. In recent years, ambient and wireless sensor platforms have been extensively used in developed countries for the detection of falls in the elderly. However, we believe extra efforts are required to address this issue in developing countries, such as Pakistan, where most deaths due to falls are not even reported. Considering this, in this paper, we propose a fall detection system prototype that is based on the classification of real-time Shimmer sensor data. Methods We first developed a data set, ‘SMotion’, of certain postures that could lead to falls in the elderly, using a body area network of Shimmer sensors, and categorized the items in this data set into age and weight groups. We developed a feature selection and classification system using three classifiers, namely, support vector machine (SVM), K-nearest neighbor (KNN), and neural network (NN). Finally, a prototype was fabricated to generate alerts to caregivers, health experts, or emergency services in case of a fall. Results To evaluate the proposed system, SVM, KNN, and NN were used. The results of this study identified KNN as the most accurate classifier, with a maximum accuracy of 96% for age groups and 93% for weight groups. Conclusions In this paper, a classification-based fall detection system is proposed. For this purpose, the SMotion data set was developed and categorized into two groups (age and weight groups). The proposed fall detection system for the elderly is implemented through a body area sensor network using third-generation sensors. The evaluation results demonstrate the reasonable performance of the proposed fall detection prototype system in the tested scenarios. PMID:28875049
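
    Since KNN came out as the most accurate classifier here, the sketch below shows the bare mechanics of KNN on windowed body-sensor features with a placeholder alert step; the feature set, labels and alert logic are illustrative assumptions only.

    ```python
    # Minimal sketch: KNN classification of sensor windows as fall-like vs. normal.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    X = rng.normal(size=(1000, 9))             # e.g. accelerometer/gyroscope statistics per window
    y = rng.integers(0, 2, size=1000)          # 1 = fall-like posture, 0 = normal activity

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("accuracy:", knn.score(X_te, y_te))

    if knn.predict(X_te[:1])[0] == 1:          # placeholder for alerting caregivers
        print("ALERT: possible fall detected")
    ```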

  4. Physiological sensor signals classification for healthcare using sensor data fusion and case-based reasoning.

    PubMed

    Begum, Shahina; Barua, Shaibal; Ahmed, Mobyen Uddin

    2014-07-03

    Today, clinicians often diagnose and classify diseases based on information collected from several physiological sensor signals. However, sensor signals can easily be corrupted by noise or interference, and due to large individual variations the sensitivity of different physiological sensors can also vary. Therefore, fusing multiple sensor signals is valuable for providing more robust and reliable decisions. This paper demonstrates a physiological sensor signal classification approach using sensor signal fusion and case-based reasoning. The proposed approach has been evaluated for classifying Stressed or Relaxed individuals using sensor data fusion. Physiological sensor signals, i.e., Heart Rate (HR), Finger Temperature (FT), Respiration Rate (RR), Carbon dioxide (CO2) and Oxygen Saturation (SpO2), are collected during the data collection phase. Here, sensor fusion has been done in two different ways: (i) decision-level fusion using features extracted through traditional approaches; and (ii) data-level fusion using features extracted by means of Multivariate Multiscale Entropy (MMSE). Case-Based Reasoning (CBR) is applied for the classification of the signals. The experimental results show that the proposed system classifies Stressed or Relaxed individuals with 87.5% accuracy compared to an expert in the domain. It thus shows promising results in the psychophysiological domain, and it should be possible to adapt this approach to other relevant healthcare systems.

  5. What can we learn in drug allergy management from World Health Organization's international classifications?

    PubMed

    Tanno, L K; Torres, M J; Castells, M; Demoly, P

    2018-05-01

    Drug hypersensitivity reactions (DHRs) affect more than 7% of the general population and represent a growing public health problem worldwide. However, epidemiological data on DHR morbidity and mortality are still not optimal, and internationally comparable standards remain poorly accessible. Institutional databases worldwide increasingly use the WHO International Classification of Diseases (ICD) system to classify diagnoses, health services utilization, and death data. The misclassification of disorders in the ICD system contributes to a lack of ascertainment and recognition of their importance for healthcare planning and resource allocation. It also hampers clinical practice and prevention actions. To further inform the allergy community and to ensure that the revision process is transparent, as advised in the WHO ICD-11 revision agenda, we report the advances and use of the pioneering "Drug hypersensitivity" subsection of ICD-11 and its implementation in the WHO International Classification of Health Interventions (ICHI). The new classification dedicated to DHRs will enable the collection of more accurate epidemiological data to support quality management of patients with drug allergies and to better facilitate healthcare planning, decision-making and public health measures to prevent and reduce the morbidity and mortality attributable to DHRs. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  6. Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes

    PubMed Central

    Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.

    2012-01-01

    Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
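
    A naive Bayesian sequence classifier of this general kind can be emulated with k-mer count features and multinomial naive Bayes. The sketch below uses three tiny synthetic "sequences" and placeholder phylum labels; it is not the RDP classifier or its training set.

    ```python
    # Minimal sketch: naive Bayes taxonomic classification from 8-mer counts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_seqs = ["ACGTGCATTGCA" * 8, "TTGACCGTAGGA" * 8, "GGCATCCGTTAA" * 8]
    train_taxa = ["Ascomycota", "Basidiomycota", "Mucoromycota"]   # placeholder labels

    kmers = CountVectorizer(analyzer="char", ngram_range=(8, 8), lowercase=False)
    clf = make_pipeline(kmers, MultinomialNB()).fit(train_seqs, train_taxa)

    query = "ACGTGCATTGCA" * 4                 # a short read-like fragment
    print(clf.predict([query])[0])
    ```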

  7. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth

    PubMed Central

    Just, Marcel Adam; Pan, Lisa; Cherkassky, Vladimir L.; McMakin, Dana; Cha, Christine; Nock, Matthew K.; Brent, David

    2017-01-01

    The clinical assessment of suicidal risk would be significantly complemented by a biologically-based measure that assesses alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. This study used machine-learning algorithms (Gaussian Naïve Bayes) to identify such individuals (17 suicidal ideators vs 17 controls) with high (91%) accuracy, based on their altered fMRI neural signatures of death and life-related concepts. The most discriminating concepts were death, cruelty, trouble, carefree, good, and praise. A similar classification accurately (94%) discriminated 9 suicidal ideators who had made a suicide attempt from 8 who had not. Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification. The study establishes a biological, neurocognitive basis for altered concept representations in participants with suicidal ideation, which enables highly accurate group membership classification. PMID:29367952

  8. Classification of Dynamical Diffusion States in Single Molecule Tracking Microscopy

    PubMed Central

    Bosch, Peter J.; Kanger, Johannes S.; Subramaniam, Vinod

    2014-01-01

    Single molecule tracking of membrane proteins by fluorescence microscopy is a promising method to investigate dynamic processes in live cells. Translating the trajectories of proteins to biological implications, such as protein interactions, requires the classification of protein motion within the trajectories. Spatial information of protein motion may reveal where the protein interacts with cellular structures, because binding of proteins to such structures often alters their diffusion speed. For dynamic diffusion systems, we provide an analytical framework to determine in which diffusion state a molecule is residing during the course of its trajectory. We compare different methods for the quantification of motion to utilize this framework for the classification of two diffusion states (two populations with different diffusion speed). We found that a gyration quantification method and a Bayesian statistics-based method are the most accurate in diffusion-state classification for realistic experimentally obtained datasets, of which the gyration method is much less computationally demanding. After classification of the diffusion, the lifetime of the states can be determined, and images of the diffusion states can be reconstructed at high resolution. Simulations validate these applications. We apply the classification and its applications to experimental data to demonstrate the potential of this approach to obtain further insights into the dynamics of cell membrane proteins. PMID:25099798
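
    The gyration-based idea can be illustrated directly: compute the radius of gyration over a sliding window of the trajectory and threshold it into slow and fast diffusion states. The trajectory, window length and threshold below are illustrative assumptions, not the paper's calibrated values.

    ```python
    # Minimal sketch: sliding-window radius of gyration for two-state classification.
    import numpy as np

    rng = np.random.default_rng(9)
    # Synthetic 2D trajectory: 200 slow-diffusion steps followed by 200 fast ones.
    steps = np.vstack([rng.normal(0, 0.05, (200, 2)), rng.normal(0, 0.3, (200, 2))])
    traj = np.cumsum(steps, axis=0)

    def gyration_radius(window):
        center = window.mean(axis=0)
        return np.sqrt(((window - center) ** 2).sum(axis=1).mean())

    W, THRESH = 20, 0.3                        # window length and state threshold
    rg = np.array([gyration_radius(traj[i:i + W]) for i in range(len(traj) - W)])
    state = np.where(rg > THRESH, "fast", "slow")
    print(state[:5], state[-5:])
    ```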

  9. Computer-Aided Diagnosis of Acute Lymphoblastic Leukaemia

    PubMed Central

    2018-01-01

    Leukaemia is a form of blood cancer that affects the white blood cells and damages the bone marrow. Usually, a complete blood count (CBC) and bone marrow aspiration are used to diagnose acute lymphoblastic leukaemia. It can be a fatal disease if not diagnosed at an early stage. In practice, manual microscopic evaluation of stained sample slides is used for the diagnosis of leukaemia. However, manual diagnostic methods are time-consuming, less accurate, and prone to errors due to various human factors such as stress and fatigue. Therefore, different automated systems have been proposed to overcome the shortcomings of the manual diagnostic methods. In the recent past, several computer-aided leukaemia diagnosis methods have been presented. These automated systems are fast, reliable, and accurate compared to manual diagnosis methods. This paper presents a review of computer-aided diagnosis systems with respect to their methodologies, which include enhancement, segmentation, feature extraction, classification, and accuracy. PMID:29681996

  10. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  11. An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli

    2015-10-01

    Changes in vegetation cover, building construction, road network and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increase in use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results. It makes predictions in a way that addresses the uncertainty of the real world. In this paper, we attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from RGB images. For classification, we use the Laplace approximation for GP binary classification on the new combined feature space. The multiclass classification has been implemented using a one-vs-all binary classification strategy. The results of applying support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are used in combination instead of each sensor separately. We also found that the GP approach offers the advantage of handling uncertainty in the classification result without compromising accuracy compared to SVM, which is considered a state-of-the-art classification method.
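
    The probabilistic one-vs-all GP classification on a stacked LiDAR + color feature vector can be sketched with scikit-learn, whose GP classifier also relies on a Laplace approximation internally; the features, class names and kernel below are assumptions, not the study's configuration.

    ```python
    # Minimal sketch: one-vs-rest Gaussian process classification of combined features.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(10)
    lidar_feats = rng.normal(size=(300, 4))    # e.g. height, intensity, roughness, return count
    color_feats = rng.normal(size=(300, 3))    # e.g. RGB statistics per point/pixel
    X = np.hstack([lidar_feats, color_feats])
    y = rng.integers(0, 4, size=300)           # e.g. building, road, tree, grass

    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), multi_class="one_vs_rest")
    gpc.fit(X, y)
    print(gpc.predict_proba(X[:3]))            # probabilistic class predictions
    ```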

  12. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without inputting any parameter values has long been an unattainable dream for remote sensing experts. Experts usually spend hours and hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible online, shareable and interoperable. Based on these recent improvements, this paper presents the idea of parameterless automatic classification, which only requires an image and automatically outputs a labeled vector. No parameters or operations are needed from endpoint consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experiences of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functionality blocks to carry out the basic classification steps. Workflow technology is involved to turn the overall image classification into a fully automatic process. A Web-based prototypical system named PACS (Parameterless Automatic Classification System) is implemented. A number of images were fed into the system for evaluation purposes. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy. The classified results will be more accurate if the two databases are of higher quality. Once the databases accumulate as many experiences and samples as an expert has, the approach should be able to obtain results of similar quality to those a human expert can achieve. Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers of the heavy and time-consuming parameter tuning work, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image classification activities. Currently, the approach is used only on high-resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  13. Accurate vehicle classification including motorcycles using piezoelectric sensors.

    DOT National Transportation Integrated Search

    2013-03-01

    State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...

  14. The ESHRE/ESGE consensus on the classification of female genital tract congenital anomalies†,‡

    PubMed Central

    Grimbizis, Grigoris F.; Gordts, Stephan; Di Spiezio Sardo, Attilio; Brucker, Sara; De Angelis, Carlo; Gergolet, Marco; Li, Tin-Chiu; Tanos, Vasilios; Brölmann, Hans; Gianaroli, Luca; Campo, Rudi

    2013-01-01

    STUDY QUESTION What classification system is more suitable for the accurate, clear, simple and related to the clinical management categorization of female genital anomalies? SUMMARY ANSWER The new ESHRE/ESGE classification system of female genital anomalies is presented. WHAT IS KNOWN ALREADY Congenital malformations of the female genital tract are common miscellaneous deviations from normal anatomy with health and reproductive consequences. Until now, three systems have been proposed for their categorization but all of them are associated with serious limitations. STUDY DESIGN, SIZE AND DURATION The European Society of Human Reproduction and Embryology (ESHRE) and the European Society for Gynaecological Endoscopy (ESGE) have established a common Working Group, under the name CONUTA (CONgenital UTerine Anomalies), with the goal of developing a new updated classification system. A scientific committee (SC) has been appointed to run the project, looking also for consensus within the scientists working in the field. PARTICIPANTS/MATERIALS, SETTING, METHODS The new system is designed and developed based on (i) scientific research through critical review of current proposals and preparation of an initial proposal for discussion between the experts, (ii) consensus measurement among the experts through the use of the DELPHI procedure and (iii) consensus development by the SC, taking into account the results of the DELPHI procedure and the comments of the experts. Almost 90 participants took part in the process of development of the ESHRE/ESGE classification system, contributing with their structured answers and comments. MAIN RESULTS AND THE ROLE OF CHANCE The ESHRE/ESGE classification system is based on anatomy. Anomalies are classified into the following main classes, expressing uterine anatomical deviations deriving from the same embryological origin: U0, normal uterus; U1, dysmorphic uterus; U2, septate uterus; U3, bicorporeal uterus; U4, hemi-uterus; U5, aplastic uterus; U6, for still unclassified cases. Main classes have been divided into sub-classes expressing anatomical varieties with clinical significance. Cervical and vaginal anomalies are classified independently into sub-classes having clinical significance. LIMITATIONS, REASONS FOR CAUTION The ESHRE/ESGE classification of female genital anomalies seems to fulfill the expectations and the needs of the experts in the field, but its clinical value needs to be proved in everyday practice. WIDER IMPLICATIONS OF THE FINDINGS The ESHRE/ESGE classification system of female genital anomalies could be used as a starting point for the development of guidelines for their diagnosis and treatment. STUDY FUNDING/COMPETING INTEREST(S) None. PMID:23771171

  15. Mayo clinic NLP system for patient smoking status identification.

    PubMed

    Savova, Guergana K; Ogren, Philip V; Duffy, Patrick H; Buntrock, James D; Chute, Christopher G

    2008-01-01

    This article describes our system entry for the 2006 I2B2 contest "Challenges in Natural Language Processing for Clinical Data" for the task of identifying the smoking status of patients. Our system makes the simplifying assumption that patient-level smoking status determination can be achieved by accurately classifying individual sentences from a patient's record. We created our system with reusable text analysis components built on the Unstructured Information Management Architecture and Weka. This reuse of code minimized the development effort related specifically to our smoking status classifier. We report precision, recall, F-score, and 95% exact confidence intervals for each metric. Recasting the classification task for the sentence level and reusing code from other text analysis projects allowed us to quickly build a classification system that performs with a system F-score of 92.64 based on held-out data tests and of 85.57 on the formal evaluation data. Our general medical natural language engine is easily adaptable to a real-world medical informatics application. Some of the limitations as applied to the use-case are negation detection and temporal resolution.
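
    A minimal sketch of the sentence-level strategy described in this record: individual sentences are classified and the sentence labels are then rolled up to a patient-level smoking status. The tiny training set, the scikit-learn pipeline standing in for the UIMA/Weka components, and the priority-based aggregation rule are illustrative assumptions.

```python
# Classify sentences, then aggregate sentence labels to a patient-level status.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_sentences = [
    "patient smokes one pack per day", "he is a current smoker",
    "quit smoking ten years ago", "former smoker, quit in 1998",
    "denies any tobacco use", "no history of smoking",
]
train_labels = ["CURRENT", "CURRENT", "PAST", "PAST", "NON", "NON"]

sent_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
sent_clf.fit(train_sentences, train_labels)

PRIORITY = ["CURRENT", "PAST", "NON"]  # hypothetical rule: most informative label wins

def patient_status(record_sentences):
    labels = set(sent_clf.predict(record_sentences))
    return next((s for s in PRIORITY if s in labels), "UNKNOWN")

print(patient_status(["vitals stable", "he continues to smoke daily"]))
```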

  16. Probabilistic multiple sclerosis lesion classification based on modeling regional intensity variability and local neighborhood information.

    PubMed

    Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal

    2015-05-01

    In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements in Dice values, in voxel-based and lesion-based sensitivity values, and in positive predictive rates are shown when the proposed method is compared to the method without regional information, and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.

  17. Sexing adult black-legged kittiwakes by DNA, behavior, and morphology

    USGS Publications Warehouse

    Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.

    2000-01-01

    We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior and morphology and compared results from these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% and sex-specific behaviors, 96.5% of this sub-sample. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
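
    A small sketch of a single-variable discriminant function analogous to the head-plus-bill rule reported above, fitted to synthetic measurements; the means and standard deviations are invented for illustration and are not the kittiwake data.

```python
# One-variable linear discriminant function for sexing from a single measurement.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
head_bill_f = rng.normal(88.0, 2.0, 300)   # hypothetical female head+bill lengths (mm)
head_bill_m = rng.normal(93.0, 2.0, 300)   # hypothetical male head+bill lengths (mm)
X = np.concatenate([head_bill_f, head_bill_m]).reshape(-1, 1)
y = np.array(["F"] * 300 + ["M"] * 300)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))           # analogous to the ~88% figure
print("predicted sex for 90.5 mm:", lda.predict([[90.5]])[0])
```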

  18. Multiclass classification of microarray data samples with a reduced number of genes

    PubMed Central

    2011-01-01

    Background Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples. PMID:21342522

  19. Object-Based Paddy Rice Mapping Using HJ-1A/B Data and Temporal Features Extracted from Time Series MODIS NDVI Data

    PubMed Central

    Singha, Mrinal; Wu, Bingfang; Zhang, Miao

    2016-01-01

    Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification. PMID:28025525

  20. Object-Based Paddy Rice Mapping Using HJ-1A/B Data and Temporal Features Extracted from Time Series MODIS NDVI Data.

    PubMed

    Singha, Mrinal; Wu, Bingfang; Zhang, Miao

    2016-12-22

    Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification.
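
    The sketch below illustrates the feature-stacking idea shared by these two records: per-object spectral statistics are augmented with temporal (NDVI time-series) summaries before classification, and the gain is measured by cross-validation. The synthetic object features and the random-forest classifier are illustrative assumptions, not the HJ-1A/B plus MODIS processing chain.

```python
# Compare classification accuracy with spectral features only vs spectral + temporal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
is_paddy = rng.integers(0, 2, n)                       # 1 = paddy rice object
spectral = rng.normal(0.3, 0.1, (n, 4))                # 4 spectral band means per object
# Temporal features: paddy shows a flooding dip followed by a strong NDVI peak.
ndvi_peak = np.where(is_paddy, rng.normal(0.8, 0.05, n), rng.normal(0.6, 0.1, n))
flood_dip = np.where(is_paddy, rng.normal(0.1, 0.05, n), rng.normal(0.3, 0.1, n))
temporal = np.column_stack([ndvi_peak, flood_dip])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_spec = cross_val_score(rf, spectral, is_paddy, cv=5).mean()
acc_both = cross_val_score(rf, np.hstack([spectral, temporal]), is_paddy, cv=5).mean()
print(f"spectral only: {acc_spec:.2f}, spectral + temporal: {acc_both:.2f}")
```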

  1. Forested land cover classification on the Cumberland Plateau, Jackson County, Alabama: a comparison of Landsat ETM+ and SPOT5 images

    Treesearch

    Yong Wang; Shanta Parajuli; Callie Schweitzer; Glendon Smalley; Dawn Lemke; Wubishet Tadesse; Xiongwen Chen

    2010-01-01

    Forest cover classifications focus on the overall growth form (physiognomy) of the community, dominant vegetation, and species composition of the existing forest. Accurately classifying the forest cover type is important for forest inventory and silviculture. We compared classification accuracy based on Landsat Enhanced Thematic Mapper Plus (Landsat ETM+) and Satellite...

  2. Mycofier: a new machine learning-based classifier for fungal ITS sequences.

    PubMed

    Delgado-Serrano, Luisa; Restrepo, Silvia; Bustos, Jose Ricardo; Zambrano, Maria Mercedes; Anzola, Juan Manuel

    2016-08-11

    The taxonomic and phylogenetic classification based on sequence analysis of the ITS1 genomic region has become a crucial component of fungal ecology and diversity studies. Nowadays, there is no accurate alignment-free classification tool for fungal ITS1 sequences for large environmental surveys. This study describes the development of a machine learning-based classifier for the taxonomical assignment of fungal ITS1 sequences at the genus level. A fungal ITS1 sequence database was built using curated data. Training and test sets were generated from it. A Naïve Bayesian classifier was built using features from the primary sequence with an accuracy of 87 % in the classification at the genus level. The final model was based on a Naïve Bayes algorithm using ITS1 sequences from 510 fungal genera. This classifier, denoted as Mycofier, provides similar classification accuracy compared to BLASTN, but the database used for the classification contains curated data and the tool, independent of alignment, is more efficient and contributes to the field, given the lack of an accurate classification tool for large data from fungal ITS1 sequences. The software and source code for Mycofier are freely available at https://github.com/ldelgado-serrano/mycofier.git .
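
    A minimal sketch of an alignment-free Naive Bayes classifier over k-mer counts, in the spirit of the genus-level classifier described above; the two hypothetical genera, the toy sequences and k = 4 are illustrative assumptions rather than Mycofier's actual training corpus or feature set.

```python
# Alignment-free genus assignment: k-mer counts fed to a multinomial Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def kmers(seq, k=4):
    """Turn a sequence into a whitespace-separated string of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

train_seqs = ["ACGTACGTGGCCAACGT", "ACGTACGTGGCCTACGA",   # hypothetical genus A
              "TTGGCCAATTGGCAATT", "TTGGCCAATTGGTTAAT"]   # hypothetical genus B
train_genera = ["GenusA", "GenusA", "GenusB", "GenusB"]

clf = make_pipeline(CountVectorizer(analyzer="word"), MultinomialNB())
clf.fit([kmers(s) for s in train_seqs], train_genera)
print(clf.predict([kmers("ACGTACGTGGCC")]))
```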

  3. Comparing Features for Classification of MEG Responses to Motor Imagery.

    PubMed

    Halme, Hanna-Leena; Parkkonen, Lauri

    2016-01-01

    Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
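
    A compact sketch of a CSP plus log-variance pipeline of the kind compared in this record, applied to synthetic two-class epochs; the simulated data, number of filters and LDA back end are illustrative assumptions, not the authors' evaluation code.

```python
# Common spatial patterns via a generalized eigendecomposition, then log-variance + LDA.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_ch, n_t = 8, 256

def epochs(n, boost_ch):
    """Simulate n epochs with class-specific extra variance on one channel."""
    e = rng.normal(size=(n, n_ch, n_t))
    e[:, boost_ch, :] *= 3.0
    return e

Xa, Xb = epochs(40, boost_ch=1), epochs(40, boost_ch=6)    # e.g. left- vs right-hand MI
Ca = np.mean([x @ x.T / n_t for x in Xa], axis=0)          # class covariance matrices
Cb = np.mean([x @ x.T / n_t for x in Xb], axis=0)
_, W = eigh(Ca, Ca + Cb)                   # generalized eigenvectors = CSP filters
W = np.hstack([W[:, :2], W[:, -2:]])       # keep two filters from each end

def log_var_features(X):
    proj = np.array([W.T @ x for x in X])                 # (epochs, filters, time)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))   # normalized log-variance

F = np.vstack([log_var_features(Xa), log_var_features(Xb)])
y = np.array([0] * 40 + [1] * 40)
print("CSP + LDA training accuracy:", LinearDiscriminantAnalysis().fit(F, y).score(F, y))
```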

  4. EEG-based driver fatigue detection using hybrid deep generic model.

    PubMed

    Phyo Phyo San; Sai Ho Ling; Rifai Chai; Tran, Yvonne; Craig, Ashley; Hung Nguyen

    2016-08-01

    Classification of electroencephalography (EEG)-based applications is an important process in biomedical engineering. Driver fatigue is a major cause of traffic accidents worldwide and has been considered a significant problem in recent decades. In this paper, a hybrid deep generic model (DGM)-based support vector machine is proposed for accurate detection of driver fatigue. Traditionally, a probabilistic DGM with a deep architecture is quite good at learning invariant features, but it is not always optimal for classification because its trainable parameters lie in the middle layers. Alternatively, the Support Vector Machine (SVM) itself is unable to learn complicated invariances, but it produces a good decision surface when applied to well-behaved features. Consolidating unsupervised high-level feature extraction (DGM) with SVM classification makes the integrated framework stronger, with the two components mutually enhancing feature extraction and classification. The experimental results showed that the proposed DBN-based driver fatigue monitoring system achieves a testing accuracy of 73.29 % with 91.10 % sensitivity and 55.48 % specificity. In short, the proposed hybrid DGM-based SVM is an effective method for the detection of driver fatigue from EEG.

  5. Molecular Phylogeny of the Widely Distributed Marine Protists, Phaeodaria (Rhizaria, Cercozoa).

    PubMed

    Nakamura, Yasuhide; Imai, Ichiro; Yamaguchi, Atsushi; Tuji, Akihiro; Not, Fabrice; Suzuki, Noritoshi

    2015-07-01

    Phaeodarians are a group of widely distributed marine cercozoans. These planktonic organisms can exhibit a large biomass in the environment and are thought to play an important role in marine ecosystems and in material cycles in the ocean. Accurate knowledge of phaeodarian classification is thus necessary to better understand marine biology; however, phylogenetic information on Phaeodaria is limited. The present study analyzed 18S rDNA sequences encompassing all existing phaeodarian orders, to clarify their phylogenetic relationships and improve their taxonomic classification. The monophyly of Phaeodaria was confirmed and strongly supported by phylogenetic analysis with a larger data set than in previous studies. The phaeodarian clade contained 11 subclades which generally did not correspond to the families and orders of the current classification system. Two families (Challengeriidae and Aulosphaeridae) and two orders (Phaeogromida and Phaeocalpida) are possibly polyphyletic or paraphyletic, and consequently the classification needs to be revised at both the family and order levels by integrative taxonomy approaches. Two morphological criteria, 1) the scleracoma type and 2) its surface structure, could be useful markers at the family level. Copyright © 2015 Elsevier GmbH. All rights reserved.

  6. Classification and grading of muscle injuries: a narrative review

    PubMed Central

    Hamilton, Bruce; Valle, Xavier; Rodas, Gil; Til, Luis; Grive, Ricard Pruna; Rincon, Josep Antoni Gutierrez; Tol, Johannes L

    2015-01-01

    A limitation to the accurate study of muscle injuries and their management has been the lack of a uniform approach to the categorisation and grading of muscle injuries. The goal of this narrative review was to provide a framework from which to understand the historical progression of the classification and grading of muscle injuries. We reviewed the classification and grading of muscle injuries in the literature to critically illustrate the strengths, weaknesses, contradictions or controversies. A retrospective, citation-based methodology was applied to search for English language literature which evaluated or utilised a novel muscle classification or grading system. While there is an abundance of literature classifying and grading muscle injuries, it is predominantly expert opinion, and there remains little evidence relating any of the clinical or radiological features to an established pathology or clinical outcome. While the categorical grading of injury severity may have been a reasonable solution to a clinical challenge identified in the middle of the 20th century, it is time to recognise the complexity of the injury, cease trying to oversimplify it and to develop appropriately powered research projects to answer important questions. PMID:25394420

  7. IDENTIFICATION OF TIME-INTEGRATED SAMPLING AND MEASUREMENT TECHNIQUES TO SUPPORT HUMAN EXPOSURE STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Long-term, time-integrated exposure measures would be desirable to address the problem of developing appropriate residential childhood exposure classifications. ...

  8. Branch classification: A new mechanism for improving branch predictor performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, P.Y.; Hao, E.; Patt, Y.

    There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. For it to be effective, speculative execution requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows an individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.
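
    A toy simulation of the branch-classification idea: each static branch is assigned to the component predictor best suited to it, here a 2-bit counter for a heavily biased branch and a gshare-style table for a history-correlated one. The classification rule, predictor sizes and branch stream are illustrative assumptions, not the scheme evaluated in the paper.

```python
# Per-branch assignment of component predictors in a tiny hybrid-predictor simulation.
import random

class TwoBit:
    def __init__(self): self.state = 2                  # start weakly taken
    def predict(self): return self.state >= 2
    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

class GShare:
    def __init__(self, bits=6):
        self.mask = (1 << bits) - 1
        self.table = [TwoBit() for _ in range(1 << bits)]
        self.history = 0
    def _idx(self, pc): return (pc ^ self.history) & self.mask
    def predict(self, pc): return self.table[self._idx(pc)].predict()
    def update(self, pc, taken):
        self.table[self._idx(pc)].update(taken)
        self.history = ((self.history << 1) | int(taken)) & self.mask

# Branch "classes": PC 0 is ~95% taken (biased), PC 1 alternates (history-correlated).
bimodal = {0: TwoBit(), 1: TwoBit()}
gshare = GShare()
assign = {0: "bimodal", 1: "gshare"}                    # the classification step

random.seed(0)
correct = total = 0
last = False
for _ in range(10000):
    pc = random.randint(0, 1)
    taken = (random.random() < 0.95) if pc == 0 else (not last)
    if pc == 1:
        last = taken
    pred = bimodal[pc].predict() if assign[pc] == "bimodal" else gshare.predict(pc)
    correct += int(pred == taken)
    total += 1
    bimodal[pc].update(taken)
    gshare.update(pc, taken)
print(f"hybrid accuracy: {correct / total:.3f}")
```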

  9. Compressed learning and its applications to subcellular localization.

    PubMed

    Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie

    2011-09-01

    One of the main challenges faced by biological applications is to accurately predict protein subcellular localization in an automatic fashion. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take the simplification of the complexity of biological systems into account. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (Compressed Sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.

  10. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien

    2012-12-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
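
    A short sketch of the calibration step emphasized in this record: raw classifier scores are mapped to calibrated class probabilities (here via Platt/sigmoid scaling in scikit-learn) and compared with the uncalibrated model using the Brier score. The synthetic features stand in for the ASAS light-curve features.

```python
# Calibrate class probabilities and compare against the raw classifier.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
cal = CalibratedClassifierCV(RandomForestClassifier(n_estimators=100, random_state=0),
                             method="sigmoid", cv=5).fit(X_tr, y_tr)

print("Brier score (raw):       ", brier_score_loss(y_te, raw.predict_proba(X_te)[:, 1]))
print("Brier score (calibrated):", brier_score_loss(y_te, cal.predict_proba(X_te)[:, 1]))
```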

  11. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  12. Classification and definition of misuse, abuse, and related events in clinical trials: ACTTION systematic review and recommendations.

    PubMed

    Smith, Shannon M; Dart, Richard C; Katz, Nathaniel P; Paillard, Florence; Adams, Edgar H; Comer, Sandra D; Degroot, Aldemar; Edwards, Robert R; Haddox, J David; Jaffe, Jerome H; Jones, Christopher M; Kleber, Herbert D; Kopecky, Ernest A; Markman, John D; Montoya, Ivan D; O'Brien, Charles; Roland, Carl L; Stanton, Marsha; Strain, Eric C; Vorsanger, Gary; Wasan, Ajay D; Weiss, Roger D; Turk, Dennis C; Dworkin, Robert H

    2013-11-01

    As the nontherapeutic use of prescription medications escalates, serious associated consequences have also increased. This makes it essential to estimate misuse, abuse, and related events (MAREs) in the development and postmarketing adverse event surveillance and monitoring of prescription drugs accurately. However, classifications and definitions to describe prescription drug MAREs differ depending on the purpose of the classification system, may apply to single events or ongoing patterns of inappropriate use, and are not standardized or systematically employed, thereby complicating the ability to assess MARE occurrence adequately. In a systematic review of existing prescription drug MARE terminology and definitions from consensus efforts, review articles, and major institutions and agencies, MARE terms were often defined inconsistently or idiosyncratically, or had definitions that overlapped with other MARE terms. The Analgesic, Anesthetic, and Addiction Clinical Trials, Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership convened an expert panel to develop mutually exclusive and exhaustive consensus classifications and definitions of MAREs occurring in clinical trials of analgesic medications to increase accuracy and consistency in characterizing their occurrence and prevalence in clinical trials. The proposed ACTTION classifications and definitions are designed as a first step in a system to adjudicate MAREs that occur in analgesic clinical trials and postmarketing adverse event surveillance and monitoring, which can be used in conjunction with other methods of assessing a treatment's abuse potential. Copyright © 2013 International Association for the Study of Pain. All rights reserved.

  13. Neuropsychological Test Selection for Cognitive Impairment Classification: A Machine Learning Approach

    PubMed Central

    Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.

    2016-01-01

    Introduction Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the smallest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods Two variable selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis, clinical dementia rating; CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results No significant difference was observed between naive Bayes, decision tree, and logistic regression models for classification of both clinical diagnosis and CDR datasets. Participant classification (70.0 – 99.1%), geometric mean (60.9 – 98.1%), sensitivity (44.2 – 100%), and specificity (52.7 – 100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection only 2 – 9 variables were required for classification and varied between datasets in a clinically meaningful way. Conclusions The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171
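
    A brief sketch of the test-reduction idea: candidate measures are ranked and cross-validated accuracy is tracked as the number of retained variables shrinks. The synthetic 27-variable dataset and the univariate ranking are illustrative assumptions, not the study's clinical variables or its exact selection procedure.

```python
# Track cross-validated accuracy as fewer variables are retained before classification.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=310, n_features=27, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

for k in (2, 5, 9, 27):
    pipe = make_pipeline(SelectKBest(f_classif, k=k), GaussianNB())
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{k:2d} variables -> mean CV accuracy {acc:.2f}")
```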

  14. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  15. Using a geographic information system and scanning technology to create high-resolution land-use data sets

    USGS Publications Warehouse

    Harvey, Craig A.; Kolpin, Dana W.; Battaglin, William A.

    1996-01-01

    A geographic information system (GIS) procedure was developed to compile low-altitude aerial photography, digitized data, and land-use data from U.S. Department of Agriculture Consolidated Farm Service Agency (CFSA) offices into a high-resolution (approximately 5 meters) land-use GIS data set. The aerial photography consisted of 35-mm slides which were scanned into tagged image file format (TIFF) images. These TIFF images were then imported into the GIS where they were registered into a geographically referenced coordinate system. Boundaries between land uses were delineated from these GIS data sets using on-screen digitizing techniques. Crop types were determined using information obtained from the U.S. Department of Agriculture CFSA offices. Crop information not supplied by the CFSA was attributed by manual classification procedures. Automated methods to provide delineation of the field boundaries and land-use classification were investigated. It was determined that, using these data sources, automated methods were less efficient and accurate than manual methods of delineating field boundaries and classifying land use.

  16. DNA methylation-based classification of central nervous system tumours.

    PubMed

    Capper, David; Jones, David T W; Sill, Martin; Hovestadt, Volker; Schrimpf, Daniel; Sturm, Dominik; Koelsche, Christian; Sahm, Felix; Chavez, Lukas; Reuss, David E; Kratz, Annekathrin; Wefers, Annika K; Huang, Kristin; Pajtler, Kristian W; Schweizer, Leonille; Stichel, Damian; Olar, Adriana; Engel, Nils W; Lindenberg, Kerstin; Harter, Patrick N; Braczynski, Anne K; Plate, Karl H; Dohmen, Hildegard; Garvalov, Boyan K; Coras, Roland; Hölsken, Annett; Hewer, Ekkehard; Bewerunge-Hudler, Melanie; Schick, Matthias; Fischer, Roger; Beschorner, Rudi; Schittenhelm, Jens; Staszewski, Ori; Wani, Khalida; Varlet, Pascale; Pages, Melanie; Temming, Petra; Lohmann, Dietmar; Selt, Florian; Witt, Hendrik; Milde, Till; Witt, Olaf; Aronica, Eleonora; Giangaspero, Felice; Rushing, Elisabeth; Scheurlen, Wolfram; Geisenberger, Christoph; Rodriguez, Fausto J; Becker, Albert; Preusser, Matthias; Haberler, Christine; Bjerkvig, Rolf; Cryan, Jane; Farrell, Michael; Deckert, Martina; Hench, Jürgen; Frank, Stephan; Serrano, Jonathan; Kannan, Kasthuri; Tsirigos, Aristotelis; Brück, Wolfgang; Hofer, Silvia; Brehmer, Stefanie; Seiz-Rosenhagen, Marcel; Hänggi, Daniel; Hans, Volkmar; Rozsnoki, Stephanie; Hansford, Jordan R; Kohlhof, Patricia; Kristensen, Bjarne W; Lechner, Matt; Lopes, Beatriz; Mawrin, Christian; Ketter, Ralf; Kulozik, Andreas; Khatib, Ziad; Heppner, Frank; Koch, Arend; Jouvet, Anne; Keohane, Catherine; Mühleisen, Helmut; Mueller, Wolf; Pohl, Ute; Prinz, Marco; Benner, Axel; Zapatka, Marc; Gottardo, Nicholas G; Driever, Pablo Hernáiz; Kramm, Christof M; Müller, Hermann L; Rutkowski, Stefan; von Hoff, Katja; Frühwald, Michael C; Gnekow, Astrid; Fleischhack, Gudrun; Tippelt, Stephan; Calaminus, Gabriele; Monoranu, Camelia-Maria; Perry, Arie; Jones, Chris; Jacques, Thomas S; Radlwimmer, Bernhard; Gessi, Marco; Pietsch, Torsten; Schramm, Johannes; Schackert, Gabriele; Westphal, Manfred; Reifenberger, Guido; Wesseling, Pieter; Weller, Michael; Collins, Vincent Peter; Blümcke, Ingmar; Bendszus, Martin; Debus, Jürgen; Huang, Annie; Jabado, Nada; Northcott, Paul A; Paulus, Werner; Gajjar, Amar; Robinson, Giles W; Taylor, Michael D; Jaunmuktane, Zane; Ryzhova, Marina; Platten, Michael; Unterberg, Andreas; Wick, Wolfgang; Karajannis, Matthias A; Mittelbronn, Michel; Acker, Till; Hartmann, Christian; Aldape, Kenneth; Schüller, Ulrich; Buslei, Rolf; Lichter, Peter; Kool, Marcel; Herold-Mende, Christel; Ellison, David W; Hasselblatt, Martin; Snuderl, Matija; Brandner, Sebastian; Korshunov, Andrey; von Deimling, Andreas; Pfister, Stefan M

    2018-03-22

    Accurate pathological diagnosis is crucial for optimal management of patients with cancer. For the approximately 100 known tumour types of the central nervous system, standardization of the diagnostic process has been shown to be particularly challenging-with substantial inter-observer variability in the histopathological diagnosis of many tumour types. Here we present a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and demonstrate its application in a routine diagnostic setting. We show that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods, resulting in a change of diagnosis in up to 12% of prospective cases. For broader accessibility, we have designed a free online classifier tool, the use of which does not require any additional onsite data processing. Our results provide a blueprint for the generation of machine-learning-based tumour classifiers across other cancer entities, with the potential to fundamentally transform tumour pathology.

  17. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
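
    A minimal sketch of a polynomial classifier of the kind used above: inputs are expanded into second-order polynomial terms and per-class weights are obtained in a single least-squares solve, with no iterative training. The synthetic feature vectors stand in for the ArSL gesture features.

```python
# Polynomial expansion followed by a closed-form least-squares multi-class solve.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import PolynomialFeatures

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
P = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)
Y = np.eye(3)[y]                               # one-hot targets, one output per class
W, *_ = np.linalg.lstsq(P, Y, rcond=None)      # single least-squares solve, no iterations
pred = np.argmax(P @ W, axis=1)
print("training accuracy:", (pred == y).mean())
```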

  18. Melanoma recognition framework based on expert definition of ABCD for dermoscopic images.

    PubMed

    Abbas, Qaisar; Emre Celebi, M; Garcia, Irene Fondón; Ahmad, Waqar

    2013-02-01

    Melanoma recognition based on the clinical ABCD rule is widely used for diagnosis of pigmented skin lesions in dermoscopy images. However, current computer-aided diagnostic (CAD) systems for classification between malignant and nevus lesions using the ABCD criteria are imperfect due to the use of ineffective computerized techniques. In this study, a novel melanoma recognition system (MRS) is presented that focuses on extracting features from the lesions using the ABCD criteria. The complete MRS consists of the following six major steps: transformation to the CIEL*a*b* color space, preprocessing to enhance the tumor region, black-frame and hair artifact removal, tumor-area segmentation, quantification and normalization of features using the ABCD criteria, and finally feature selection and classification. The MRS for melanoma-nevus lesions is tested on a total of 120 dermoscopic images. To test the performance of the MRS diagnostic classifier, the area under the receiver operating characteristic curve (AUC) is utilized. The proposed classifier achieved a sensitivity of 88.2%, specificity of 91.3%, and AUC of 0.880. The experimental results show that the proposed MRS can accurately distinguish between malignant and benign lesions. The MRS technique is fully automatic and can easily be integrated into an existing CAD system. To increase the classification accuracy of the MRS, the CASH pattern recognition technique, visual inspection by dermatologists, contextual information from patients, and histopathological tests could be included and their impact on the system investigated. © 2012 John Wiley & Sons A/S.

  19. Classification of male lower torso for underwear design

    NASA Astrophysics Data System (ADS)

    Cheng, Z.; Kuzmichev, V. E.

    2017-10-01

    By means of scanning technology we have obtained new information about the morphology of male bodies and have revised the classification of men's underwear, adapting it to consumer demands. To build the new classification in accordance with the characteristic factors of the male lower torso, we developed a method of underwear design that yields accurate and convenient products for consumers.

  20. Object-based land cover classification and change analysis in the Baltimore metropolitan area using multitemporal high resolution remote sensing data

    Treesearch

    Weiqi Zhou; Austin Troy; Morgan Grove

    2008-01-01

    Accurate and timely information about land cover pattern and change in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial resolution Emerge aerial imagery in the...

  1. California desert resource inventory using multispectral classification of digitally mosaicked Landsat frames

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Mcleod, R. G.; Zobrist, A. L.; Johnson, H. B.

    1979-01-01

    Procedures for adjustment of brightness values between frames and the digital mosaicking of Landsat frames to standard map projections are developed for providing a continuous data base for multispectral thematic classification. A combination of local terrain variations in the Californian deserts and a global sampling strategy based on transects provided the framework for accurate classification throughout the entire geographic region.

  2. TIME-INTEGRATED EXPOSURE MEASURES TO IMPROVE THE PREDICTIVE POWER OF EXPOSURE CLASSIFICATION FOR EPIDEMIOLOGIC STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Although long-term integrated exposure measurements are a critical component of exposure assessment, the ability to include these measurements into epidemiologic...

  3. Combined texture feature analysis of segmentation and classification of benign and malignant tumour CT slices.

    PubMed

    Padma, A; Sukanesh, R

    2013-01-01

    A computer software system is designed for the segmentation and classification of benign versus malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by Fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). The SVM-based classifier is constructed with the selected features, and the segmentation results are compared with ground truth (target) labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system is able to achieve high segmentation and classification accuracy, as measured by the Jaccard index, sensitivity and specificity.
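
    A compact sketch of the reduction-and-classification stage described in this record: 17 texture features per slice are reduced to six principal components and fed to an SVM evaluated with 10-fold cross-validation. The synthetic feature matrix is an illustrative assumption; the run-length/co-occurrence extraction from CT slices is not shown.

```python
# PCA(6) on 17 texture features, then an RBF SVM scored with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=206, n_features=17, n_informative=6,
                           random_state=0)                 # benign (0) vs malignant (1)
clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```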

  4. Reliability of Radiographic Assessments of Adolescent Midshaft Clavicle Fractures by the FACTS Multicenter Study Group.

    PubMed

    Li, Ying; Donohue, Kyna S; Robbins, Christopher B; Pennock, Andrew T; Ellis, Henry B; Nepple, Jeffrey J; Pandya, Nirav; Spence, David D; Willimon, Samuel Clifton; Heyworth, Benton E

    2017-09-01

    There is a recent trend toward increased surgical treatment of displaced midshaft clavicle fractures in adolescents. The primary purpose of this study was to evaluate the intrarater and interrater reliability of clavicle fracture classification systems and measurements of displacement, shortening, and angulation in adolescents. The secondary purpose was to compare 2 different measurement methods for fracture shortening. This study was performed by a multicenter study group conducting a prospective, comparative, observational cohort study of adolescent clavicle fractures. Eight raters evaluated 24 deidentified anteroposterior clavicle radiographs selected from patients 10-18 years of age with midshaft clavicle fractures. Two clavicle fracture classification systems were used, and 2 measurements for shortening, 1 measurement for superior-inferior displacement, and 2 measurements for fracture angulation were performed. A minimum of 2 weeks after the first round, the process was repeated. Intraclass correlation coefficients were calculated. Good to excellent intrarater and interrater agreement was achieved for the descriptive classification system of fracture displacement, direction of angulation, presence of comminution, and all continuous variables, including both measurements of shortening, superior-inferior displacement, and degrees of angulation. Moderate agreement was achieved for the Arbeitsgemeinschaft für Osteosynthesefragen classification system overall. Mean shortening by 2 different methods were significantly different from each other (P < 0.0001). Most radiographic measurements performed by investigators in a multicenter, prospective cohort study of adolescent clavicle fractures demonstrated good-to-excellent intrarater and interrater reliability. Future consensus on the most accurate and clinically appropriate measurement method for fracture shortening is critical.

  5. Analysis of the changes in the tarcrete layer on the desert surface of Kuwait using satellite imagery and cell-based modeling

    NASA Astrophysics Data System (ADS)

    Al-Doasari, Ahmad E.

    The 1991 Gulf War caused massive environmental damage in Kuwait. Deposition of oil and soot droplets from hundreds of burning oil-wells created a layer of tarcrete on the desert surface covering over 900 km2. This research investigates the spatial change in the tarcrete extent from 1991 to 1998 using Landsat Thematic Mapper (TM) imagery and statistical modeling techniques. The pixel structure of TM data allows the spatial analysis of the change in tarcrete extent to be conducted at the pixel (cell) level within a geographical information system (GIS). There are two components to this research. The first is a comparison of three remote sensing classification techniques used to map the tarcrete layer. The second is a spatial-temporal analysis and simulation of tarcrete changes through time. The analysis focuses on an area of 389 km2 located south of the Al-Burgan oil field. Five TM images acquired in 1991, 1993, 1994, 1995, and 1998 were geometrically and atmospherically corrected. These images were classified into six classes: oil lakes; heavy, intermediate, light, and traces of tarcrete; and sand. The classification methods tested were unsupervised, supervised, and neural network supervised (fuzzy ARTMAP). Field data of tarcrete characteristics were collected to support the classification process and to evaluate the classification accuracies. Overall, the neural network method is more accurate (60 percent) than the other two methods; both the unsupervised and the supervised classification accuracy assessments resulted in 46 percent accuracy. The five classifications were used in a lagged autologistic model to analyze the spatial changes of the tarcrete through time. The autologistic model correctly identified overall tarcrete contraction between 1991--1993 and 1995--1998. However, tarcrete contraction between 1993--1994 and 1994--1995 was less well marked, in part because of classification errors in the maps from these time periods. Initial simulations of tarcrete contraction with a cellular automaton model were not very successful. However, more accurate classifications could improve the simulations. This study illustrates how an empirical investigation using satellite images, field data, GIS, and spatial statistics can simulate dynamic land-cover change through the use of a discrete statistical and cellular automaton model.

  6. The Optimization of Trained and Untrained Image Classification Algorithms for Use on Large Spatial Datasets

    NASA Technical Reports Server (NTRS)

    Kocurek, Michael J.

    2005-01-01

    The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.
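
    A small sketch of the quick-then-accurate cascade behind harvistSVM as described above: a fast linear SVM labels every sample, and only samples whose confidence (distance to the decision boundary) falls below a threshold are re-classified by a slower RBF SVM. The threshold value and the synthetic data are illustrative assumptions, not the tuned harvistSVM parameters.

```python
# Fast classifier first; re-classify only low-confidence samples with a slower model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

fast = LinearSVC(dual=False).fit(X_tr, y_tr)
slow = SVC(kernel="rbf").fit(X_tr, y_tr)

margin = np.abs(fast.decision_function(X_te))
pred = fast.predict(X_te)
low_conf = margin < 0.5                       # only these samples go to the slow model
pred[low_conf] = slow.predict(X_te[low_conf])
print(f"re-classified {low_conf.mean():.0%} of samples; "
      f"accuracy {np.mean(pred == y_te):.3f}")
```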

  7. Nomenclature and the National Wetland Plant List

    DTIC Science & Technology

    2009-05-01

    ...older or previously used scientific names that are no longer viewed as acceptable or accurate based on current standards and ideology). All synonyms...name) classification system (one name to indicate the genus, one to indicate the species). Scientific names, consisting of genus and species...mostly of either Greek or Latin origin, make up the binomial. An example is Acer rubrum L., where Acer is the genus, rubrum is the species, and L. is the...

  8. Study on bayes discriminant analysis of EEG data.

    PubMed

    Shi, Yuan; He, DanDan; Qin, Fang

    2014-01-01

    In this paper, we applied Bayes discriminant analysis to objectively recorded EEG data from experimental subjects and arrived at a relatively accurate method for feature extraction and classification decisions. According to the strength of the α wave, the head electrodes are divided into four categories. Using EEG data from 21 electrodes recorded for 63 people, we performed Bayes discriminant analysis on the EEG data of six subjects. Results: using part of the EEG data of the 63 people, Bayes discriminant analysis yielded an electrode classification accuracy rate of 64.4%. Bayes discriminant analysis has higher prediction accuracy, and the EEG features (mainly the α wave) are extracted more accurately. Bayes discriminant analysis is therefore well suited to feature extraction and classification decisions for EEG data.

  9. Early postoperative repair status after rotator cuff repair cannot be accurately classified using questionnaires of patient function and isokinetic strength evaluation.

    PubMed

    Colliver, Jessica; Wang, Allan; Joss, Brendan; Ebert, Jay; Koh, Eamon; Breidahl, William; Ackland, Timothy

    2016-04-01

    This study investigated if patients with an intact tendon repair or partial-thickness retear early after rotator cuff repair display differences in clinical evaluations and whether early tendon healing can be predicted using these assessments. We prospectively evaluated 60 patients at 16 weeks after arthroscopic supraspinatus repair. Evaluation included the Oxford Shoulder Score, 11-item version of the Disabilities of the Arm, Shoulder and Hand, visual analog scale for pain, 12-item Short Form Health Survey, isokinetic strength, and magnetic resonance imaging (MRI). Independent t tests investigated clinical differences in patients based on the Sugaya MRI rotator cuff classification system (grades 1, 2, or 3). Discriminant analysis determined whether intact repairs (Sugaya grade 1) and partial-thickness retears (Sugaya grades 2 and 3) could be predicted. No differences (P < .05) existed in the clinical or strength measures. Although discriminant analysis revealed the 11-item version of the Disabilities of the Arm, Shoulder and Hand produced a 97% true-positive rate for predicting partial thickness retears, it also produced a 90% false-positive rate whereby it incorrectly predicted a retear in 90% of patients whose repair was intact. The ability to discriminate between groups was enhanced with up to 5 variables entered; however, only 87% of the partial-retear group and 36% of the intact-repair group were correctly classified. No differences in clinical scores existed between patients stratified by the Sugaya MRI classification system at 16 weeks. An intact repair or partial-thickness retear could not be accurately predicted. Our results suggest that correct classification of healing in the early postoperative stages should involve imaging. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  10. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    NASA Astrophysics Data System (ADS)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

    Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias, human error and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. The automated approach presented here for the classification of ARD potential offers rapid, repeatable and accurate outcomes comparable to manually derived classifications. Methods for automated ARD classifications from digital drill core data represent a step-change for geoenvironmental management practices in the mining industry.
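
    A minimal sketch of the indicator-combination step: per-sample scores for the five ARDI indicators (A-E) are summed and bucketed into an ARD-potential class. The score scale, equal weighting and class thresholds below are hypothetical placeholders, not the published ARD Index values.

```python
# Combine the five indicator scores into a single ARD-potential class (toy thresholds).
def ard_potential(a_sulphide, b_alteration, c_morphology, d_neutraliser, e_association):
    """Each indicator is assumed pre-scored on a 0-10 scale from image analysis."""
    total = a_sulphide + b_alteration + c_morphology + d_neutraliser + e_association
    if total >= 35:
        return total, "extreme ARD potential"
    if total >= 25:
        return total, "high ARD potential"
    if total >= 15:
        return total, "intermediate ARD potential"
    return total, "low ARD potential"

# Example: a sulphide-rich, weakly neutralised drill-core interval.
print(ard_potential(9, 7, 6, 2, 8))
```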

  11. Non-linear dynamical classification of short time series of the rössler system in high noise regimes.

    PubMed

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to -30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distance d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.
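
    The following sketch is only an illustration of the general workflow under assumed parameter values, not the authors' DDE implementation: it integrates the Rössler system for two values of the bifurcation parameter b, adds measurement noise, builds simple delay-product features from short segments, and separates the two classes with a linear hyperplane whose signed distance is scored by the area under the ROC curve.

        # Illustrative sketch: Rossler time series, crude delay features, linear separation, ROC area.
        import numpy as np
        from scipy.integrate import solve_ivp
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        def rossler(t, state, a, b, c):
            x, y, z = state
            return [-y - z, x + a * y, b + z * (x - c)]

        def segment_features(b, n_seg, seg_len=200, tau=5, noise=1.0, seed=0):
            rng = np.random.default_rng(seed)
            sol = solve_ivp(rossler, (0, 500), [1.0, 1.0, 1.0], args=(0.2, b, 5.7),
                            t_eval=np.linspace(100, 500, n_seg * seg_len))
            x = sol.y[0] + noise * rng.standard_normal(sol.y[0].size)   # additive measurement noise
            feats = []
            for i in range(n_seg):
                s = x[i * seg_len:(i + 1) * seg_len]
                feats.append([np.mean(s[tau:] * s[:-tau]),              # simple delay-product features
                              np.mean(s[2 * tau:] * s[:-2 * tau]),
                              np.mean(s ** 2)])
            return np.array(feats)

        X = np.vstack([segment_features(0.2, 50, seed=1), segment_features(0.3, 50, seed=2)])
        y = np.array([0] * 50 + [1] * 50)
        d = LinearDiscriminantAnalysis().fit(X, y).decision_function(X)  # distance from the hyperplane
        print("ROC area A':", roc_auc_score(y, d))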

  12. Non-Linear Dynamical Classification of Short Time Series of the Rössler System in High Noise Regimes

    PubMed Central

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E.; Poizner, Howard; Sejnowski, Terrence J.

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson’s disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to −30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distance d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A′ under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data. PMID:24379798

  13. Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.

    PubMed

    Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude

    2016-03-01

    Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations.

  14. Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis

    PubMed Central

    Myburgh, Hermanus C.; van Zijl, Willemien H.; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude

    2016-01-01

    Background Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. Methods A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. Findings An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. Interpretation The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations. PMID:27077122

  15. Performance analysis of mineral mapping method to delineate mineralization zones under tropical region

    NASA Astrophysics Data System (ADS)

    Wakila, M. H.; Saepuloh, A.; Heriawan, M. N.; Susanto, A.

    2016-09-01

    Geothermal exploration and production are currently being conducted intensively in certain areas of Indonesia, such as the Wayang Windu Geothermal Field (WWGF) in West Java. The WWGF covers a wide area of about 40 km². An accurate method to map the heterogeneous distribution of minerals is necessary for wide areas such as the WWGF. Mineral mapping is an important method in geothermal exploration to determine the distribution of minerals that indicate the surface manifestations of a geothermal system. This study aimed to determine the most precise and accurate method for mineral mapping at a geothermal field. Field measurements were performed to assess the accuracy of three proposed methods: 1) Minimum Noise Fraction (MNF), a linear transformation that removes the correlation among spectral bands and reduces noise in the data; 2) Pixel Purity Index (PPI), a method designed to find the most spectrally extreme pixels and their characteristics due to end-member mixing; and 3) Spectral Angle Mapper (SAM), an image classification technique that measures the spectral similarity between an unknown spectrum and a reference spectrum in n dimensions. The output of these methods was a map of mineral occurrence. The performance of each mapping method was analyzed based on the ground truth data. Among the three proposed methods, the SAM classification method is the most appropriate and accurate for mapping the spatial distribution of alteration minerals.
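
    For reference, the core SAM calculation is small enough to sketch directly; the array shapes, spectra, and angle threshold below are assumed for illustration and are not taken from the study's data.

        # Minimal spectral angle mapper (SAM) sketch; spectra and threshold are illustrative.
        import numpy as np

        def spectral_angle(pixels, reference):
            """pixels: (n_pixels, n_bands); reference: (n_bands,). Returns angles in radians."""
            dot = pixels @ reference
            norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
            return np.arccos(np.clip(dot / norms, -1.0, 1.0))

        pixels = np.random.rand(1000, 50)          # hypothetical image spectra
        reference = np.random.rand(50)             # hypothetical alteration-mineral reference spectrum
        mineral_mask = spectral_angle(pixels, reference) < 0.1   # arbitrary angle threshold (radians)
        print("pixels mapped as the mineral:", int(mineral_mask.sum()))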

  16. Lava Morphology Classification of a Fast-Spreading Ridge Using Deep-Towed Sonar Data: East Pacific Rise

    NASA Astrophysics Data System (ADS)

    Meyer, J.; White, S.

    2005-05-01

    Classification of lava morphology on a regional scale contributes to the understanding of the distribution and extent of lava flows at a mid-ocean ridge. Seafloor classification is essential to understand the regional undersea environment at mid-ocean ridges. In this study, a classification scheme was developed to identify and extract textural patterns of different lava morphologies along the East Pacific Rise using DSL-120 side-scan and ARGO camera imagery. Application of an accurate image classification technique to side-scan sonar allows us to expand upon the locally available visual ground reference data to make the first comprehensive regional maps of small-scale lava morphology present at a mid-ocean ridge. The submarine lava morphologies focused on in this study (sheet flows, lobate flows, and pillow flows) have unique textures. Several algorithms were applied to the sonar backscatter intensity images to produce multiple textural image layers useful in distinguishing the different lava morphologies. The intensity and spatially enhanced images were then combined and used in a hybrid classification technique. The hybrid classification involves two integrated classifiers, a rule-based expert system classifier and a machine learning classifier. The complementary capabilities of the two integrated classifiers provided a higher accuracy of regional seafloor classification compared to using either classifier alone. Once trained, the hybrid classifier can then be applied to classify neighboring images with relative ease. This classification technique has been used to map the lava morphology distribution and infer spatial variability of lava effusion rates along two segments of the East Pacific Rise, 17 deg S and 9 deg N. This technique may also be useful in the future for obtaining temporal information. Repeated documentation of morphology classification in this dynamic environment can be compared to detect regional seafloor change.

  17. Assessment of Cropping System Diversity in the Fergana Valley Through Image Fusion of Landsat 8 and SENTINEL-1

    NASA Astrophysics Data System (ADS)

    Dimov, D.; Kuhn, J.; Conrad, C.

    2016-06-01

    In the transitioning agricultural societies of the world, food security is an essential element of livelihood and economic development with the agricultural sector very often being the major employment factor and income source. Rapid population growth, urbanization, pollution, desertification, soil degradation and climate change pose a variety of threats to a sustainable agricultural development and can be expressed as agricultural vulnerability components. Diverse cropping patterns may help to adapt the agricultural systems to those hazards in terms of increasing the potential yield and resilience to water scarcity. Thus, the quantification of crop diversity using indices like the Simpson Index of Diversity (SID) e.g. through freely available remote sensing data becomes a very important issue. This however requires accurate land use classifications. In this study, the focus is set on the cropping system diversity of garden plots, summer crop fields and orchard plots which are the prevalent agricultural systems in the test area of the Fergana Valley in Uzbekistan. In order to improve the accuracy of land use classification algorithms with low or medium resolution data, a novel processing chain through the hitherto unique fusion of optical and SAR data from the Landsat 8 and Sentinel-1 platforms is proposed. The combination of both sensors is intended to enhance the object's textural and spectral signature rather than just to enhance the spatial context through pansharpening. It could be concluded that the Ehlers fusion algorithm gave the most suitable results. Based on the derived image fusion different object-based image classification algorithms such as SVM, Naïve Bayesian and Random Forest were evaluated whereby the latter one achieved the highest classification accuracy. Subsequently, the SID was applied to measure the diversification of the three main cropping systems.
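
    As a small illustration of the final diversity step (this is not the study's code; the class labels and map below are hypothetical), the Simpson Index of Diversity can be computed directly from a classified crop map as SID = 1 - sum(p_i^2), where p_i is the areal proportion of crop class i.

        # Simpson Index of Diversity from a hypothetical classified crop map.
        import numpy as np

        crop_map = np.random.choice(["garden", "summer_crop", "orchard"], size=(500, 500),
                                    p=[0.5, 0.3, 0.2])                 # toy classification result
        _, counts = np.unique(crop_map, return_counts=True)
        proportions = counts / counts.sum()                            # areal proportion per class
        sid = 1.0 - np.sum(proportions ** 2)                           # Simpson Index of Diversity
        print(f"SID = {sid:.3f}")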

  18. Lung tumor diagnosis and subtype discovery by gene expression profiling.

    PubMed

    Wang, Lu-yong; Tu, Zhuowen

    2006-01-01

    The optimal treatment of patients with complex diseases, such as cancers, depends on accurate diagnosis using a combination of clinical and histopathological data. In many scenarios, this becomes tremendously difficult because of limitations in clinical presentation and histopathology. To accurately diagnose complex diseases, molecular classification based on gene or protein expression profiles is indispensable for modern medicine. Moreover, many heterogeneous diseases consist of various potential subtypes at the molecular level and differ remarkably in their response to therapies. It is therefore critical to accurately predict subgroups from disease gene expression profiles. More fundamental knowledge of the molecular basis and classification of disease could aid in the prediction of patient outcome, the informed selection of therapies, and the identification of novel molecular targets for therapy. In this paper, we propose a new disease diagnostic method, the probabilistic boosting tree (PB tree) method, applied to gene expression profiles of lung tumors. It enables accurate disease classification and subtype discovery. It automatically constructs a tree in which each node combines a number of weak classifiers into a strong classifier. Also, subtype discovery is naturally embedded in the learning process. Our algorithm achieves excellent diagnostic performance and is also capable of detecting disease subtypes based on gene expression profiles.
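
    To illustrate the underlying idea of combining weak classifiers into a strong one in generic terms, the sketch below boosts decision stumps on synthetic expression-like data with scikit-learn. This is only a simplified stand-in for the boosting step; it is not the probabilistic boosting tree of the paper, and the data are artificial.

        # Boosting weak classifiers (decision stumps) on synthetic expression-like data.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        # 120 hypothetical samples with 500 expression-like features, 20 of them informative.
        X, y = make_classification(n_samples=120, n_features=500, n_informative=20, random_state=0)
        model = AdaBoostClassifier(n_estimators=100, random_state=0)   # default weak learner: depth-1 stumps
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())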

  19. Intra- and interobserver agreement in the classification and treatment of distal third clavicle fractures.

    PubMed

    Bishop, Julie Y; Jones, Grant L; Lewis, Brian; Pedroza, Angela

    2015-04-01

    In treatment of distal third clavicle fractures, the Neer classification system, based on the location of the fracture in relation to the coracoclavicular ligaments, has traditionally been used to determine fracture pattern stability. To determine the intra- and interobserver reliability in the classification of distal third clavicle fractures via standard plain radiographs and the intra- and interobserver agreement in the preferred treatment of these fractures. Cohort study (Diagnosis); Level of evidence, 3. Thirty radiographs of distal clavicle fractures were randomly selected from patients treated for distal clavicle fractures between 2006 and 2011. The radiographs were distributed to 22 shoulder/sports medicine fellowship-trained orthopaedic surgeons. Fourteen surgeons responded and took part in the study. The evaluators were asked to measure the size of the distal fragment, classify the fracture pattern as stable or unstable, assign the Neer classification, and recommend operative versus nonoperative treatment. The radiographs were reordered and redistributed 3 months later. Inter- and intrarater agreement was determined for the distal fragment size, stability of the fracture, Neer classification, and decision to operate. Single variable logistic regression was performed to determine what factors could most accurately predict the decision for surgery. Interrater agreement was fair for distal fragment size, moderate for stability, fair for Neer classification, slight for type IIB and III fractures, and moderate for treatment approach. Intrarater agreement was moderate for distal fragment size categories (κ = 0.50, P < .001) and Neer classification (κ = 0.42, P < .001) and substantial for stable fracture (κ = 0.65, P < .001) and decision to operate (κ = 0.65, P < .001). Fracture stability was the best predictor of treatment, with 89% accuracy (P < .001). Fracture stability determination and the decision to operate had the highest interobserver agreement. Fracture stability was the key determinant of treatment, rather than the Neer classification system or the size of the distal fragment. © 2015 The Author(s).
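
    The inter- and intrarater agreement values reported above are kappa statistics; for readers unfamiliar with the computation, a minimal example with hypothetical ratings is shown below (not the study's data).

        # Cohen's kappa for agreement between two raters on the same fractures (hypothetical labels).
        from sklearn.metrics import cohen_kappa_score

        rater_1 = ["stable", "unstable", "stable", "stable", "unstable", "stable"]
        rater_2 = ["stable", "unstable", "unstable", "stable", "unstable", "stable"]
        print("kappa =", cohen_kappa_score(rater_1, rater_2))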

  20. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machine (SVM). The goal is to benefit from the advantages of both, the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm, to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, which are generated based on the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors provide a high-quality source of discriminative knowledge that can substantially improve the predictive power of SVM and associative classification techniques. They also provide users with greater understandability and interpretability. We have used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. Then, we describe how this combination will provide biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated into the learning process of the SVM algorithm. In the context of applicability, according to the results obtained from gene expression analysis, we can conclude that the CARSVM system can be utilized in a variety of real-world applications with some adjustments.
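
    A minimal sketch of the general idea described above, under stated assumptions (the rules, data, and labels are hypothetical placeholders, not the authors' mined class association rules): each sample is re-encoded as a binary vector indicating which rules it satisfies, and that rule-based representation is passed to an SVM.

        # Rule-based feature vectors fed to an SVM; rules and data are illustrative placeholders.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_raw = rng.random((100, 20))                    # original attribute values
        y = rng.integers(0, 2, size=100)                 # class labels

        # Hypothetical class association rules expressed as predicates over the raw attributes.
        rules = [lambda x: x[0] > 0.5 and x[3] < 0.2,
                 lambda x: x[7] > 0.8,
                 lambda x: x[2] < 0.3 and x[9] > 0.6]

        X_rules = np.array([[int(rule(x)) for rule in rules] for x in X_raw])   # binary rule features
        clf = SVC(kernel="linear").fit(X_rules, y)
        print("training accuracy:", clf.score(X_rules, y))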

  1. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  2. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods. PMID:29401681

  3. Utilization of the NSQIP-Pediatric Database in Development and Validation of a New Predictive Model of Pediatric Postoperative Wound Complications.

    PubMed

    Maizlin, Ilan I; Redden, David T; Beierle, Elizabeth A; Chen, Mike K; Russell, Robert T

    2017-04-01

    Surgical wound classification, introduced in 1964, stratifies the risk of surgical site infection (SSI) based on a clinical estimate of the inoculum of bacteria encountered during the procedure. Recent literature has questioned the accuracy of predicting SSI risk based on wound classification. We hypothesized that a more specific model founded on specific patient and perioperative factors would more accurately predict the risk of SSI. Using all observations from the 2012 to 2014 National Surgical Quality Improvement Program-Pediatric (NSQIP-P) Participant Use File, patients were randomized into model creation and model validation datasets. Potential perioperative predictive factors were assessed with univariate analysis for each of 4 outcomes: wound dehiscence, superficial wound infection, deep wound infection, and organ space infection. A multiple logistic regression model with step-wise backward elimination was then constructed. A receiver operating characteristic curve with c-statistic was generated to assess the model discrimination for each outcome. A total of 183,233 patients were included. All perioperative NSQIP factors were evaluated for clinical pertinence. Of the original 43 perioperative predictive factors selected, 6 to 9 predictors for each outcome were significantly associated with postoperative SSI. The predictive accuracy of our model compared favorably with that of the traditional wound classification for each outcome of interest. The proposed model from NSQIP-P demonstrated a significantly improved predictive ability for postoperative SSIs compared with the current wound classification system. This model will allow providers to more effectively counsel families and patients about these risks, and to more accurately convey true risks for individual surgical patients to hospitals and payers. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
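
    The modeling step can be sketched generically as follows; the predictors, coefficients, and data are synthetic placeholders rather than the NSQIP-P variables, and the elimination rule (drop the least significant predictor until all p-values are below 0.05) is only one simple variant of step-wise backward elimination.

        # Backward-elimination logistic regression with a c-statistic, on synthetic data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = pd.DataFrame(rng.normal(size=(1000, 6)),
                         columns=[f"predictor_{i}" for i in range(6)])   # hypothetical perioperative factors
        logit_p = 1.2 * X["predictor_0"] - 0.8 * X["predictor_1"]        # only two truly informative
        y = (rng.random(1000) < 1 / (1 + np.exp(-logit_p.to_numpy()))).astype(int)   # simulated SSI outcome

        cols = list(X.columns)
        while True:                                                      # step-wise backward elimination
            model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
            worst = model.pvalues.drop("const").idxmax()
            if model.pvalues[worst] < 0.05:
                break
            cols.remove(worst)

        print("retained predictors:", cols)
        print("c-statistic:", roc_auc_score(y, model.predict(sm.add_constant(X[cols]))))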

  4. A vegetational and ecological resource analysis from space and high flight photography

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.; Faulkner, D. P.; Schrumpf, B. J.

    1970-01-01

    A hierarchical classification of vegetation and related resources is considered that is applicable to the interpretation of remote sensing data from space and synoptic aerial photography. The numerical symbolization provides for three levels of vegetational classification and three levels of classification of environmental features associated with each vegetational class. It is shown that synoptic space photography accurately depicts how urban sprawl affects agricultural land use areas and ecological resources.

  5. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  6. Echocardiographic Classification and Surgical Approaches to Double-Outlet Right Ventricle for Great Arteries Arising Almost Exclusively from the Right Ventricle.

    PubMed

    Pang, Kun-Jing; Meng, Hong; Hu, Sheng-Shou; Wang, Hao; Hsi, David; Hua, Zhong-Dong; Pan, Xiang-Bin; Li, Shou-Jun

    2017-08-01

    Selecting an appropriate surgical approach for double-outlet right ventricle (DORV), a complex congenital cardiac malformation with many anatomic variations, is difficult. Therefore, we determined the feasibility of using an echocardiographic classification system, which describes the anatomic variations in more precise terms than the current system does, to determine whether it could help direct surgical plans. Our system includes 8 DORV subtypes, categorized according to 3 factors: the relative positions of the great arteries (normal or abnormal), the relationship between the great arteries and the ventricular septal defect (committed or noncommitted), and the presence or absence of right ventricular outflow tract obstruction (RVOTO). Surgical approaches in 407 patients were based on their DORV subtype, as determined by echocardiography. We found that the optimal surgical management of patients classified as normal/committed/no RVOTO, normal/committed/RVOTO, and abnormal/committed/no RVOTO was, respectively, like that for patients with large ventricular septal defects, tetralogy of Fallot, and transposition of the great arteries without RVOTO. Patients with abnormal/committed/RVOTO anatomy and those with abnormal/noncommitted/RVOTO anatomy underwent intraventricular repair and double-root translocation. For patients with other types of DORV, choosing the appropriate surgical approach and biventricular repair techniques was more complex. We think that our classification system accurately groups DORV patients and enables surgeons to select the best approach for each patient's cardiac anatomy.

  7. Automatic classification of blank substrate defects

    NASA Astrophysics Data System (ADS)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical defects or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and the separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask Technology Center (MPMask). The Calibre ADC tool was qualified on production mask blanks against manual classification. The classification accuracy of ADC is greater than 95% for critical defects, with an overall accuracy of 90%. Sensitivity to weak defect signals and accurate localization of defects in the images remain challenges that we are resolving. The performance of the tool has been demonstrated on multiple mask types, and it is ready for deployment in the full-volume mask manufacturing production flow. Implementation of Calibre ADC is estimated to reduce the misclassification of critical defects by 60-80%.
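
    To make the decision-tree step concrete in generic terms (this is not the Calibre ADC engine; the feature set, class codes, and data below are hypothetical), a small scikit-learn decision tree can map per-defect measurements to classification codes.

        # Generic decision-tree defect classifier; features, labels, and data are hypothetical.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Columns: defect size (nm), transmitted-image polarity (+1 bright / -1 dark),
        # reflected-image polarity, signal-to-background ratio.
        X = np.column_stack([rng.uniform(10, 500, 300),
                             rng.choice([-1, 1], 300),
                             rng.choice([-1, 1], 300),
                             rng.uniform(0.5, 10.0, 300)])
        y = rng.choice(["pit", "particle", "scratch", "false_defect"], 300)   # hypothetical codes

        tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
        print(tree.predict([[120.0, 1, -1, 4.2]]))   # classify one new defect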

  8. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers

    PubMed Central

    2014-01-01

    Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
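
    A hedged sketch of the three-classifier, majority-voting structure described above is given below; the features are synthetic stand-ins for heartbeat descriptors, and the pairwise model choices only mirror those named in the abstract (minimum-distance, linear discriminant, linear SVM), so this is not the authors' implementation.

        # One-vs-one ensemble with majority voting over three pairwise classifiers (synthetic data).
        import numpy as np
        from collections import Counter
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import NearestCentroid
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 10))                           # stand-in heartbeat features
        y = np.array(["NORM", "LBBB", "RBBB"] * 100)

        def fit_pair(model, a, b):
            mask = np.isin(y, [a, b])
            return model.fit(X[mask], y[mask])

        pairs = [fit_pair(NearestCentroid(), "NORM", "LBBB"),    # minimum-distance classifier
                 fit_pair(LinearDiscriminantAnalysis(), "NORM", "RBBB"),
                 fit_pair(LinearSVC(), "LBBB", "RBBB")]

        def classify(beat):
            votes = [clf.predict(beat.reshape(1, -1))[0] for clf in pairs]
            return Counter(votes).most_common(1)[0][0]           # majority vote over the three labels

        print(classify(X[0]))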

  9. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers.

    PubMed

    Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu

    2014-06-05

    Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients.

  10. Towards automated spectroscopic tissue classification in thyroid and parathyroid surgery.

    PubMed

    Schols, Rutger M; Alic, Lejla; Wieringa, Fokko P; Bouvy, Nicole D; Stassen, Laurents P S

    2017-03-01

    In (para-)thyroid surgery, iatrogenic parathyroid injury should be prevented. To aid the surgeons' eye, a camera system enabling parathyroid-specific image enhancement would be useful. Hyperspectral camera technology might work, provided that the spectral signature of parathyroid tissue offers enough specific features to be reliably and automatically distinguished from surrounding tissues. As a first step to investigate this, we examined the feasibility of wide-band diffuse reflectance spectroscopy (DRS) for automated spectroscopic tissue classification, using silicon (Si) and indium-gallium-arsenide (InGaAs) sensors. DRS (350-1830 nm) was performed during (para-)thyroid resections. From the acquired spectra 36 features at predefined wavelengths were extracted. The best features for classification of parathyroid from adipose or thyroid were assessed by binary logistic regression for Si- and InGaAs-sensor ranges. Classification performance was evaluated by leave-one-out cross-validation. In 19 patients 299 spectra were recorded (62 tissue sites: thyroid = 23, parathyroid = 21, adipose = 18). Classification accuracy of parathyroid-adipose was, respectively, 79% (Si), 82% (InGaAs) and 97% (Si/InGaAs combined). Parathyroid-thyroid classification accuracies were 80% (Si), 75% (InGaAs), and 82% (Si/InGaAs combined). Si and InGaAs sensors are fairly accurate for automated spectroscopic classification of parathyroid, adipose and thyroid tissues. Combination of both sensor technologies improves accuracy. Follow-up research, aimed towards hyperspectral imaging, seems justified. Copyright © 2016 John Wiley & Sons, Ltd.
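
    The evaluation scheme named above can be sketched generically; the spectra, labels, and classifier settings below are synthetic assumptions, not the study's data, and simply show logistic regression validated with leave-one-out cross-validation.

        # Leave-one-out cross-validated logistic regression on synthetic spectral features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(62, 36))        # 62 tissue sites x 36 wavelength features (synthetic)
        y = rng.integers(0, 2, size=62)      # 0 = adipose, 1 = parathyroid (hypothetical labels)

        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
        print("LOOCV accuracy:", scores.mean())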

  11. Dance recognition system using lower body movement.

    PubMed

    Simpson, Travis T; Wiesner, Susan L; Bennett, Bradford C

    2014-02-01

    The current means of locating specific movements in film necessitate hours of viewing, making the task of conducting research into movement characteristics and patterns tedious and difficult. This is particularly problematic for the research and analysis of complex movement systems such as sports and dance. While some systems have been developed to manually annotate film, to date no automated way of identifying complex, full body movement exists. With pattern recognition technology and knowledge of joint locations, automatically describing filmed movement using computer software is possible. This study used various forms of lower body kinematic analysis to identify codified dance movements. We created an algorithm that compares an unknown move with a specified start and stop against known dance moves. Our recognition method consists of classification and template correlation using a database of model moves. This system was optimized to include nearly 90 dance and Tai Chi Chuan movements, producing accurate name identification in over 97% of trials. In addition, the program had the capability to provide a kinematic description of either matched or unmatched moves obtained from classification recognition.

  12. Comparative analysis of expert and machine-learning methods for classification of body cavity effusions in companion animals.

    PubMed

    Hotz, Christine S; Templeton, Steven J; Christopher, Mary M

    2005-03-01

    A rule-based expert system using CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous, and hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that produced by 2 machine-learning methods: Rosetta, a rough sets algorithm and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California-Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to test and train RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions by using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.

  13. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation

    PubMed Central

    Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.

    2016-01-01

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196

  14. [Changes for rheumatology in the G-DRG system 2005].

    PubMed

    Fiori, W; Roeder, N; Lakomek, H-J; Liman, W; Köneke, N; Hülsemann, J L; Lehmann, H; Wenke, A

    2005-02-01

    The German prospective payment system G-DRG has recently been adapted and recalculated. Apart from adjustments to the G-DRG classification system itself, changes in the legal framework, such as the extension of the "convergence period" or the limitation of budget losses due to DRG introduction, have to be considered. In particular, the introduction of new procedure codes (OPS) describing the specialized and complex rheumatologic treatment of inpatients might be of significant importance. Even though these procedure codes will not yet influence the grouping process in 2005, they will enable a more accurate description of the effort involved in acute rheumatologic treatment, which can be used for further adaptation of the DRG algorithm. Numerous newly introduced additive payment components (ZE) result in a more adequate description of the "DRG products". Although they do not increase the individual hospital budget, these additive payments contribute to more transparency of high-cost services and can be addressed separately from the DRG budget. Furthermore, a number of other relevant changes to the G-DRG catalogue, the classification systems ICD-10-GM and OPS-301, and the German Coding Standards (DKR) are presented.

  15. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation.

    PubMed

    Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J

    2016-01-14

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.

  16. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

    In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)], the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution (approximately 40% fewer iterations are required) as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  17. The Comprehensive AOCMF Classification System: Mandible Fractures- Level 2 Tutorial

    PubMed Central

    Cornelius, Carl-Peter; Audigé, Laurent; Kunz, Christoph; Rudderman, Randal; Buitrago-Téllez, Carlos H.; Frodel, John; Prein, Joachim

    2014-01-01

    This tutorial outlines the details of the AOCMF image-based classification system for fractures of the mandible at precision level 2, allowing description of their topographical distribution. A short introduction to the anatomy is given. Mandibular fractures are classified by the anatomic regions involved. For this purpose, the mandible is delineated into an array of nine regions identified by letters: the symphysis/parasymphysis region anteriorly, two body regions on each lateral side, combined angle and ascending ramus regions, and finally the condylar and coronoid processes. A precise definition of the demarcation lines between these regions is given for the unambiguous allocation of fractures. Four transition zones allow an accurate topographic assignment if fractures end within or run across the borders of anatomic regions. These zones are defined between angle/ramus and body, and between body and symphysis/parasymphysis. A fracture is classified as “confined” as long as it is located within a region, in contrast to a fracture being “nonconfined” when it extends to an adjoining region. Illustrations and case examples of mandible fractures are presented to help readers become familiar with the classification procedure in daily routine. PMID:25489388

  18. Obsessive-compulsive skin disorders: a novel classification based on degree of insight.

    PubMed

    Zhu, Tian Hao; Nakamura, Mio; Farahnik, Benjamin; Abrouk, Michael; Reichenberg, Jason; Bhutani, Tina; Koo, John

    2017-06-01

    Individuals with obsessive-compulsive features frequently visit dermatologists for complaints of the skin, hair or nails, and often progress towards a chronic relapsing course due to the challenge associated with accurate diagnosis and management of their psychiatric symptoms. The current DSM-5 formally recognizes body dysmorphic disorder, trichotillomania, neurotic excoriation and body focused repetitive behavior disorder as psychodermatological disorders belonging to the category of Obsessive-Compulsive and Related Disorders. However there is evidence that other relevant skin diseases such as delusions of parasitosis, dermatitis artefacta, contamination dermatitis, AIDS phobia, trichotemnomania and even lichen simplex chronicus possess prominent obsessive-compulsive characteristics that do not necessarily fit the full diagnostic criteria of the DSM-5. Therefore, to increase dermatologists' awareness of this unique group of skin disorders with OCD features, we propose a novel classification system called Obsessive-Compulsive Insight Continuum. Under this new classification system, obsessive-compulsive skin manifestations are categorized along a continuum based on degree of insight, from minimal insight with delusional obsessions to good insight with minimal obsessions. Understanding the level of insight is thus an important first step for clinicians who routinely interact with these patients.

  19. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for the classification of multispectral scanner images is reported; it uses data point averaging to compute estimated class proportions for a single averaged data point, allowing spatially unresolved areas to be classified. Example calculations extracting spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.
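
    The core idea of estimating class proportions from one averaged measurement can be sketched as a constrained unmixing problem; the endmember signatures and mixture below are invented for illustration and are not the original study's data or algorithm.

        # Proportion estimation by non-negative least-squares unmixing of an averaged data point.
        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical mean spectral signatures (rows: bands; columns: soil, weeds, alfalfa, barley).
        signatures = np.array([[0.10, 0.40, 0.30, 0.20],
                               [0.15, 0.55, 0.45, 0.25],
                               [0.60, 0.20, 0.25, 0.50],
                               [0.70, 0.30, 0.35, 0.60]])
        average_pixel = signatures @ np.array([0.3, 0.2, 0.4, 0.1])   # synthetic mixed measurement

        proportions, _ = nnls(signatures, average_pixel)
        proportions /= proportions.sum()                              # normalize so proportions sum to 1
        print(dict(zip(["soil", "weeds", "alfalfa", "barley"], proportions.round(2))))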

  20. INVENTORY AND CLASSIFICATION OF GREAT LAKES COASTAL WETLANDS FOR MONITORING AND ASSESSMENT AT LARGE SPATIAL SCALES

    EPA Science Inventory

    Monitoring aquatic resources for regional assessments requires an accurate and comprehensive inventory of the resource and a useful classification of ecosystem similarities. Our research effort to create an electronic database and work with various ways to classify coastal wetlands...

  1. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.

    PubMed

    She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng

    2015-01-01

    Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, this paper proposes a multiclass posterior probability solution for the twin SVM based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking continuous output techniques and Platt's estimating method. Second, multiclass probabilistic outputs for the twin SVM are obtained by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, is demonstrated on both UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
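
    For readers unfamiliar with the probability-estimation step, the sketch below shows a generic Platt-style calibration on a standard linear SVM with toy data, in which a logistic sigmoid is fitted to the classifier's continuous decision values to yield two-class posterior probabilities; it is not the paper's twin-SVM formulation or its pairwise-coupling procedure.

        # Platt-style probability calibration: fit a sigmoid to SVM decision values (toy data).
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import LinearSVC

        X, y = make_classification(n_samples=400, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        svm = LinearSVC().fit(X_tr, y_tr)
        d_tr = svm.decision_function(X_tr).reshape(-1, 1)    # continuous outputs on training data
        sigmoid = LogisticRegression().fit(d_tr, y_tr)       # sigmoid fit (proper Platt scaling uses held-out data)

        d_te = svm.decision_function(X_te).reshape(-1, 1)
        posterior = sigmoid.predict_proba(d_te)[:, 1]        # two-class posterior probabilities
        print("first five posteriors:", posterior[:5].round(3))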

  2. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral–spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  3. Nutritional status in sick children and adolescents is not accurately reflected by BMI-SDS.

    PubMed

    Fusch, Gerhard; Raja, Preeya; Dung, Nguyen Quang; Karaolis-Danckert, Nadina; Barr, Ronald; Fusch, Christoph

    2013-01-01

    Nutritional status provides helpful information about disease severity and treatment effectiveness. Body mass index standard deviation scores (BMI-SDS) provide an approximation of body composition and thus are frequently used to classify nutritional status of sick children and adolescents. However, the accuracy of estimating body composition in this population using BMI-SDS has not been assessed. Thus, this study aims to evaluate the accuracy of nutritional status classification in sick infants and adolescents using BMI-SDS, upon comparison to classification using percentage body fat (%BF) reference charts. BMI-SDS was calculated from anthropometric measurements and %BF was measured using dual-energy x-ray absorptiometry (DXA) for 393 sick children and adolescents (5 months-18 years). Subjects were classified by nutritional status (underweight, normal weight, overweight, and obese), using 2 methods: (1) BMI-SDS, based on age- and gender-specific percentiles, and (2) %BF reference charts (standard). Linear regression and a correlation analysis were conducted to compare agreement between both methods of nutritional status classification. %BF reference value comparisons were also made between 3 independent sources based on German, Canadian, and American study populations. Correlation between nutritional status classification by BMI-SDS and %BF agreed moderately (r² = 0.75 and 0.76 in boys and girls, respectively). The misclassification of nutritional status in sick children and adolescents using BMI-SDS was 27% when using German %BF references. Similar rates were observed when using Canadian and American %BF references (24% and 23%, respectively). Using BMI-SDS to determine nutritional status in a sick population is not considered an appropriate clinical tool for identifying individual underweight or overweight children or adolescents. However, BMI-SDS may be appropriate for longitudinal measurements or for screening purposes in large field studies. When accurate nutritional status classification of a sick patient is needed for clinical purposes, nutritional status will be assessed more accurately using methods that accurately measure %BF, such as DXA.

  4. Autonomic cardiovascular control and sports classification in Paralympic athletes with spinal cord injury.

    PubMed

    West, Christopher R; Krassioukov, Andrei V

    2017-01-01

    Purpose To investigate the relationship between the classification systems used in wheelchair sports and cardiovascular function in Paralympic athletes with spinal cord injury (SCI). Methods 26 wheelchair rugby athletes (C3-C8) and 14 wheelchair basketball athletes (T3-L1) were assessed for their International Wheelchair Rugby and Basketball Federation sports classification. Next, athletes were assessed for resting and reflex cardiovascular and autonomic function via the change (delta) in systolic blood pressure (SBP) and heart rate (HR) in response to a sit-up, and sympathetic skin responses (SSRs), respectively. Results There were no differences in supine, seated, or delta SBP and HR between different sport classes in rugby or basketball (all p > 0.23). Athletes with autonomically complete injuries (SSR score 0-1) exhibited a lower supine SBP, seated SBP and delta SBP compared to those with autonomically incomplete injuries (SSR score >1; all p < 0.010), independent of sport played. There was no association between self-reported orthostatic hypotension (OH) and measured OH (χ² = 1.63, p = 0.20). Conclusion We provide definitive evidence that sport-specific classification is not related to the degree of remaining autonomic cardiovascular control in Paralympic athletes with SCI. We suggest that testing for remaining autonomic function, which is closely related to the degree of cardiovascular control, should be incorporated into sporting classification. Implications for Rehabilitation Spinal cord injury is a debilitating condition that affects the function of almost every physiological system. It is becoming increasingly apparent that spinal cord injury-induced changes in autonomic and cardiovascular function are important determinants of sports performance in athletes with spinal cord injury. This study shows that the current sports classification systems used in wheelchair rugby and basketball do not accurately reflect autonomic and cardiovascular function and thus are placing some athletes at a distinct disadvantage/advantage within their respective sport.

  5. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
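
    As a rough illustration of the kind of architecture described above, the PyTorch sketch below applies temporal and spatial convolutions to single-trial EEG epochs. The channel count, epoch length and layer sizes are assumptions for illustration, not the network reported in the paper.

```python
# Illustrative PyTorch sketch of a small CNN for single-trial EEG classification.
import torch
import torch.nn as nn

class EEGCNNSketch(nn.Module):
    def __init__(self, n_channels=64, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 32), padding=(0, 16)),  # temporal filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filters
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):                        # x: (batch, 1, channels, samples)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

model = EEGCNNSketch()
logits = model(torch.randn(8, 1, 64, 256))       # 8 dummy trials
```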

  6. [Research on identification of cabbages and weeds combining spectral imaging technology and SAM taxonomy].

    PubMed

    Zu, Qin; Zhang, Shui-fa; Cao, Yang; Zhao, Hui-yi; Dang, Chang-qing

    2015-02-01

    Automatic weed identification is the key technique, and also the bottleneck, for implementing variable-rate spraying and precision pesticide application. Accurate, rapid and non-destructive automatic identification of weeds has therefore become an important research direction for precision agriculture. A hyperspectral imaging system was used to capture images of cabbage seedlings and five kinds of weeds (pigweed, barnyard grass, goosegrass, crabgrass and setaria) over the wavelength range 1000 to 2500 nm. In ENVI, the MNF rotation was used to reduce noise, de-correlate the hyperspectral data and reduce the number of bands from 256 to 11; regions of interest were then extracted to build a spectral library of standard spectra, and the SAM taxonomy was finally used to identify cabbages and weeds, giving good classification results when the spectral angle threshold was set to 0.1 radians. In HSI Analyzer, after selecting training pixels to obtain the standard spectra, the SAM taxonomy was used to distinguish weeds from cabbages. Furthermore, to measure the recognition accuracy of weeds quantitatively, statistics for weed and non-weed pixels were obtained by comparing the best-performing SAM classification image to a manually classified image. The experimental results demonstrated that, with 5-point smoothing, a 0-order derivative and a 7-degree spectral angle, the best classification result was obtained: the recognition rates for weeds, non-weeds and overall samples were 80%, 97.3% and 96.8%, respectively. The method combining spectral imaging technology with the SAM taxonomy takes full advantage of the fused spectral and image information. By using spatial classification to establish training sets for spectral identification, checking the similarity among spectral vectors at the pixel level, and extending weed detection across the full scene so that weeds can be found both between and within crop rows, the approach provides useful analysis tools for precision agricultural management applications that require accurate plant information.
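
    The SAM decision rule mentioned above reduces to computing the angle between each pixel spectrum and a set of reference spectra and keeping the smallest angle, provided it falls below a threshold (0.1 radians in the abstract). A minimal Python sketch, with an illustrative library structure:

```python
# Minimal sketch of spectral angle mapper (SAM) classification of a single pixel.
import numpy as np

def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(pixel, library, threshold=0.1):
    """library: dict of class name -> reference spectrum (1-D array)."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in library.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= threshold else "unclassified"
```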

  7. Development of a tree classifier for discrimination of surface mine activity from Landsat digital data

    NASA Technical Reports Server (NTRS)

    Solomon, J. L.; Miller, W. F.; Quattrochi, D. A.

    1979-01-01

    In a cooperative project with the Geological Survey of Alabama, the Mississippi State Remote Sensing Applications Program has developed a single purpose, decision-tree classifier using band-ratioing techniques to discriminate various stages of surface mining activity. The tree classifier has four levels and employs only two channels in classification at each level. An accurate computation of the amount of disturbed land resulting from the mining activity can be made as a product of the classification output. The utilization of Landsat data provides a cost-efficient, rapid, and accurate means of monitoring surface mining activities.

  8. Algorithmic Classification of Five Characteristic Types of Paraphasias.

    PubMed

    Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven

    2016-12-01

    This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
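
    The pipeline above chains a lexicality check, a phonological-similarity test and a semantic-similarity test. The sketch below shows one plausible way to wire such a cascade; the frequency norms, the phonological-similarity function, the word vectors and all thresholds are hypothetical stand-ins rather than the study's actual criteria.

```python
# Hedged sketch of a three-stage paraphasia triage in the spirit of the pipeline above.
# `freq_norms`, `phon_sim`, `vectors` and the cut-offs are hypothetical placeholders.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify_paraphasia(target, response, freq_norms, phon_sim, vectors,
                        phon_cut=0.5, sem_cut=0.4):
    if response not in freq_norms:               # lexicality check: not a real word
        return "neologistic"
    phonological = phon_sim(target, response) >= phon_cut
    semantic = cosine(vectors[target], vectors[response]) >= sem_cut
    if phonological and semantic:
        return "mixed"
    if phonological:
        return "formal"
    if semantic:
        return "semantic"
    return "unrelated"
```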

  9. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
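
    The split decisions in VFDT-style learners such as uVFDTc rest on the Hoeffding bound, which states how far an observed statistic of range R can deviate from its true value after n examples with confidence 1 - delta. A minimal sketch of that test:

```python
# The Hoeffding bound used by VFDT-style learners to decide when enough examples
# have been seen to choose a split attribute with confidence 1 - delta.
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)) for a statistic with range R."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, value_range, delta, n):
    """Split when the best attribute beats the runner-up by more than epsilon."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)
```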

  10. A study of the utilization of ERTS-1 data from the Wabash River Basin

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Nine projects are defined: five ERTS data applications experiments and four supporting technology tasks. The most significant applications results were achieved in the soil association mapping, earth surface feature identification, and urban land use mapping efforts. Four soil association boundaries were accurately delineated from ERTS-1 imagery. A data bank has been developed to test surface feature classifications obtained from ERTS-1 data. Preliminary forest cover classifications indicated that the estimated acreage tended to exceed the actual acreage by 25%. Urban land use analysis of ERTS-1 data indicated that highly accurate classification could be obtained for many urban categories. The wooded residential category tended to be misclassified as woods or agricultural land. Further statistical analysis revealed that these classes could be separated using sample variance.

  11. Towards A Predictive First Principles Understanding Of Molecular Adsorption On Graphene

    DTIC Science & Technology

    2016-10-05

    used and developed state-of-the-art quantum mechanical methods to make accurate predictions about the interaction strength and adsorption structure... density functional theory, ab initio methods... important physical properties for a whole class of systems with weak non-covalent interactions, for example those involving the binding between water...

  12. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    PubMed

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN. Such an approach has not been used in earlier fault analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault distance estimation. Feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
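
    As a rough sketch of the regression stage described above, the snippet below maps three per-phase MOV energy features to a fault distance with a small neural network. scikit-learn offers no Levenberg-Marquardt solver, so LBFGS stands in here, and the data are synthetic placeholders rather than simulated fault cases.

```python
# Hedged sketch: regressing fault distance from per-phase MOV energy features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 3))                               # MOV energies of phases A, B, C (placeholders)
y = 400.0 * X.mean(axis=1) + rng.normal(0, 5.0, 500)   # synthetic 'fault distance' in km

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20, 10), solver="lbfgs", max_iter=2000)
model.fit(X_tr, y_tr)
print("R^2 on held-out cases:", model.score(X_te, y_te))
```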

  13. Some Observations on Nosology of Externalizing Disorders

    ERIC Educational Resources Information Center

    Sitholey, Prabhat

    2007-01-01

    The main purpose of psychiatric classifications should ultimately be of help in management of patients. Classifications do this indirectly. They help a clinician to think about a child's mental and behavioral problems, and accurately diagnose, and classify them. This in turn helps the clinician to communicate with other professionals, and devise a…

  14. Evaluation of the Unified Compensation and Classification Plan.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL. Office of Educational Accountability.

    The Unified Classification and Compensation Plan of the Dade County (Florida) Public Schools consists of four interdependent activities that include: (1) developing and maintaining accurate job descriptions, (2) conducting evaluations that recommend job worth and grade, (3) developing and maintaining rates of compensation for job values, and (4)…

  15. Content-based unconstrained color logo and trademark retrieval with color edge gradient co-occurrence histograms

    NASA Astrophysics Data System (ADS)

    Phan, Raymond; Androutsos, Dimitrios

    2008-01-01

    In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.

  16. Extending a field-based Sonoran desert vegetation classification to a regional scale using optical and microwave satellite imagery

    NASA Astrophysics Data System (ADS)

    Shupe, Scott Marshall

    2000-10-01

    Vegetation mapping in arid regions facilitates ecological studies and land management, and provides a record to which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in Southwest Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two- and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships was of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers. Classifications using a combination of ERS-1 imagery and elevation, slope, and aspect data were superior to classifications carried out using Landsat TM data alone. In all classification iterations it was consistently found that the highest classification accuracy was obtained by using a combination of Landsat TM, ERS-1, and elevation, slope, and aspect data. Maximum likelihood classification accuracy was found to be higher than artificial neural net classification in all cases.

  17. Successional stage of biological soil crusts: an accurate indicator of ecohydrological condition

    USGS Publications Warehouse

    Belnap, Jayne; Wilcox, Bradford P.; Van Scoyoc, Matthew V.; Phillips, Susan L.

    2013-01-01

    Biological soil crusts are a key component of many dryland ecosystems. Following disturbance, biological soil crusts will recover in stages. Recently, a simple classification of these stages has been developed, largely on the basis of external features of the crusts, which reflects their level of development (LOD). The classification system has six LOD classes, from low (1) to high (6). To determine whether the LOD of a crust is related to its ecohydrological function, we used rainfall simulation to evaluate differences in infiltration, runoff, and erosion among crusts in the various LODs, across a range of soil depths and with different wetting pre-treatments. We found large differences between the lowest and highest LODs, with runoff and erosion being greatest from the lowest LOD. Under dry antecedent conditions, about 50% of the water applied ran off the lowest LOD plots, whereas less than 10% ran off the plots of the two highest LODs. Similarly, sediment loss was 400 g m-2 from the lowest LOD and almost zero from the higher LODs. We scaled up the results from these simulations using the Rangeland Hydrology and Erosion Model. Modelling results indicate that erosion increases dramatically as slope length and gradient increase, especially beyond the threshold values of 10 m for slope length and 10% for slope gradient. Our findings confirm that the LOD classification is a quick, easy, nondestructive, and accurate index of hydrological condition and should be incorporated in field and modelling assessments of ecosystem health.

  18. Inter-observer reliability of radiographic classifications and measurements in the assessment of Perthes' disease.

    PubMed

    Wiig, Ola; Terjesen, Terje; Svenningsen, Svein

    2002-10-01

    We evaluated the inter-observer agreement of radiographic methods when evaluating patients with Perthes' disease. The radiographs were assessed at the time of diagnosis and at the 1-year follow-up by local orthopaedic surgeons (O) and 2 experienced pediatric orthopedic surgeons (TT and SS). The Catterall, Salter-Thompson, and Herring lateral pillar classifications were compared, and the femoral head coverage (FHC), center-edge angle (CE-angle), and articulo-trochanteric distance (ATD) were measured in the affected and normal hips. On the primary evaluation, the lateral pillar and Salter-Thompson classifications had a higher level of agreement among the observers than the Catterall classification, but none of the classifications showed good agreement (weighted kappa values between O and SS 0.56, 0.54, 0.49, respectively). Combining Catterall groups 1 and 2 into one group, and groups 3 and 4 into another, resulted in better agreement (kappa 0.55) than with the original 4-group system. The agreement was also better (kappa 0.62-0.70) between experienced than between less experienced examiners for all classifications. The femoral head coverage was a more reliable and accurate measure than the CE-angle for quantifying the acetabular covering of the femoral head, as indicated by higher intraclass correlation coefficients (ICC) and smaller inter-observer differences. The ATD showed good agreement in all comparisons and had low inter-observer differences. We conclude that all classifications of femoral head involvement are adequate in clinical work if the radiographic assessment is done by experienced examiners. For less experienced examiners, a 2-group classification or the lateral pillar classification is more reliable. For evaluation of containment of the femoral head, FHC is more appropriate than the CE-angle.
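
    The weighted kappa statistic quoted above can be reproduced with standard tooling; a small illustration with invented ratings:

```python
# Illustration of the weighted kappa statistic used above to quantify
# inter-observer agreement; the example ratings are invented.
from sklearn.metrics import cohen_kappa_score

observer_a = [1, 2, 3, 4, 2, 3, 1, 4, 3, 2]   # e.g. Catterall groups assigned by observer A
observer_b = [1, 2, 4, 4, 2, 2, 1, 3, 3, 2]   # the same hips rated by observer B

print(cohen_kappa_score(observer_a, observer_b, weights="linear"))
```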

  19. A comparative study of machine learning models for ethnicity classification

    NASA Astrophysics Data System (ADS)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression is documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.
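
    A comparison of the kind reported above can be sketched in a few lines with scikit-learn; the synthetic data below merely stand in for the multi-ethnic image features.

```python
# Sketch of comparing logistic regression and an SVM on the same feature matrix.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM (RBF)", SVC(kernel="rbf", gamma="scale"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```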

  20. Integrating Human and Machine Intelligence in Galaxy Morphology Classification Tasks

    NASA Astrophysics Data System (ADS)

    Beck, Melanie Renee

    The large flood of data flowing from observatories presents significant challenges to astronomy and cosmology, challenges that will only be magnified by projects currently under development. Growth in both volume and velocity of astrophysics data is accelerating: whereas the Sloan Digital Sky Survey (SDSS) has produced 60 terabytes of data in the last decade, the upcoming Large Synoptic Survey Telescope (LSST) plans to register 30 terabytes per night starting in the year 2020. Additionally, the Euclid Mission will acquire imaging for 5 × 10^7 resolvable galaxies. The field of galaxy evolution faces a particularly challenging future as complete understanding often cannot be reached without analysis of detailed morphological galaxy features. Historically, morphological analysis has relied on visual classification by astronomers, accessing the human brain's capacity for advanced pattern recognition. However, this accurate but inefficient method falters when confronted with many thousands (or millions) of images. In the SDSS era, efforts to automate morphological classifications of galaxies (e.g., Conselice et al., 2000; Lotz et al., 2004) are reasonably successful and can distinguish between elliptical and disk-dominated galaxies with accuracies of 80%. While this is statistically very useful, a key problem with these methods is that they often cannot say which 80% of their samples are accurate. Furthermore, when confronted with the more complex task of identifying key substructure within galaxies, automated classification algorithms begin to fail. The Galaxy Zoo project uses a highly innovative approach to solving the scalability problem of visual classification. Displaying images of SDSS galaxies to volunteers via a simple and engaging web interface, www.galaxyzoo.org asks people to classify images by eye. Within the first year hundreds of thousands of members of the general public had classified each of the 1 million SDSS galaxies an average of 40 times. Galaxy Zoo thus solved the time-efficiency problem of visual classification and improved accuracy by producing a distribution of independent classifications for each galaxy. While crowd-sourced galaxy classifications have proven their worth, challenges remain before establishing this method as a critical and standard component of the data processing pipelines for the next generation of surveys. In particular, though innovative, crowd-sourcing techniques do not have the capacity to handle the data volume and rates expected in the next generation of surveys. Automated algorithms will therefore be delegated to handle the majority of the classification tasks, freeing citizen scientists to contribute their efforts to subtler and more complex assignments. This thesis presents a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme we increase the classification rate nearly 5-fold, classifying 226,124 galaxies in 92 days of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7% accuracy.
We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphologies. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides a factor of 11.4 increase in the classification rate, classifying 210,803 galaxies in just 32 days of GZ2 project time with 93.1% accuracy. As the Random Forest algorithm requires a minimal amount of computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.
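
    One way to picture the delegation step described above is a random forest that keeps only its confident predictions and forwards the rest to volunteers. The sketch below uses invented morphology features and an assumed confidence threshold; it is not the thesis' actual decision engine.

```python
# Hedged sketch of a human/machine delegation step: a random forest trained on
# non-parametric morphology indicators retains confident classifications and
# forwards uncertain galaxies to citizen scientists.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.random((1000, 4))                    # e.g. concentration, asymmetry, Gini, M20
y_train = (X_train[:, 0] > 0.5).astype(int)        # placeholder labels: 0 = smooth, 1 = featured
X_new = rng.random((200, 4))

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
proba = forest.predict_proba(X_new).max(axis=1)    # confidence of the winning class

machine_labels = forest.predict(X_new)[proba >= 0.9]   # retained by the machine
to_volunteers = X_new[proba < 0.9]                     # delegated to human classifiers
print(len(machine_labels), "classified by machine;", len(to_volunteers), "sent to humans")
```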

  1. Utilizing feedback in adaptive SAR ATR systems

    NASA Astrophysics Data System (ADS)

    Horsfield, Owen; Blacknell, David

    2009-05-01

    Existing SAR ATR systems are usually trained off-line with samples of target imagery or CAD models, prior to conducting a mission. If the training data is not representative of mission conditions, then poor performance may result. In addition, it is difficult to acquire suitable training data for the many target types of interest. The Adaptive SAR ATR Problem Set (AdaptSAPS) program provides a MATLAB framework and image database for developing systems that adapt to mission conditions, meaning less reliance on accurate training data. A key function of an adaptive system is the ability to utilise truth feedback to improve performance, and it is this feature which AdaptSAPS is intended to exploit. This paper presents a new method for SAR ATR that does not rely on off-line training data; instead, the classifier learns in a supervised manner from truth feedback gathered during the mission. This is achieved by using feature-based classification, and several new shadow features have been developed for this purpose. These features allow discrimination of vehicles from clutter, and classification of vehicles into two classes: targets, comprising military combat types, and non-targets, comprising bulldozers and trucks. The performance of the system is assessed using three baseline missions provided with AdaptSAPS, as well as three additional missions. All performance metrics indicate a distinct learning trend over the course of a mission, with most third and fourth quartile performance levels exceeding 85% correct classification. It has been demonstrated that these performance levels can be maintained even when truth feedback rates are reduced by up to 55% over the course of a mission.
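
    The core idea of exploiting truth feedback can be sketched as incremental supervised learning: a classifier is updated batch by batch as labelled detections arrive during the mission. The snippet below uses a generic linear model and random placeholder features in place of the shadow features described above; it illustrates the feedback loop, not the paper's method.

```python
# Hedged sketch of learning from truth feedback with an incrementally updated classifier.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
classes = np.array([0, 1])                      # 0 = non-target, 1 = target
clf = SGDClassifier()

for batch in range(20):                         # one batch of labelled detections per step
    X = rng.random((10, 6))                     # placeholder feature vectors (e.g. shadow features)
    y = (X[:, 0] > 0.5).astype(int)             # truth feedback for this batch (synthetic)
    clf.partial_fit(X, y, classes=classes)      # `classes` is required on the first call

print(clf.predict(rng.random((3, 6))))
```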

  2. Classification of deadlift biomechanics with wearable inertial measurement units.

    PubMed

    O'Reilly, Martin A; Whelan, Darragh F; Ward, Tomas E; Delahunt, Eamonn; Caulfield, Brian M

    2017-06-14

    The deadlift is a compound full-body exercise that is fundamental in resistance training, rehabilitation programs and powerlifting competitions. Accurate quantification of deadlift biomechanics is important to reduce the risk of injury and ensure training and rehabilitation goals are achieved. This study sought to develop and evaluate deadlift exercise technique classification systems utilising Inertial Measurement Units (IMUs), recording at 51.2 Hz, worn on the lumbar spine, both thighs and both shanks. It also sought to compare classification quality when these IMUs are worn in combination and in isolation. Two datasets of IMU deadlift data were collected. Eighty participants first completed deadlifts with acceptable technique and 5 distinct, deliberately induced deviations from acceptable form. Fifty-five members of this group also completed a fatiguing protocol (3-Repetition Maximum test) to enable the collection of natural deadlift deviations. For both datasets, universal and personalised random-forest classifiers were developed and evaluated. Personalised classifiers outperformed universal classifiers in accuracy, sensitivity and specificity in the binary classification of acceptable or aberrant technique and in the multi-label classification of specific deadlift deviations. Whilst recent research has favoured universal classifiers due to the reduced overhead in setting them up for new system users, this work demonstrates that such techniques may not be appropriate for classifying deadlift technique due to the poor accuracy achieved. However, personalised classifiers perform very well in assessing deadlift technique, even when using data derived from a single lumbar-worn IMU to detect specific naturally occurring technique mistakes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. sEMG feature evaluation for identification of elbow angle resolution in graded arm movement.

    PubMed

    Castro, Maria Claudia F; Colombini, Esther L; Aquino, Plinio T; Arjunan, Sridhar P; Kumar, Dinesh K

    2014-11-25

    Automatic and accurate identification of elbow angle from surface electromyogram (sEMG) is essential for myoelectric controlled upper limb exoskeleton systems. This requires appropriate selection of sEMG features and identification of the limitations of such a system. This study has demonstrated that it is possible to identify three discrete positions of the elbow (full extension, right angle, and the mid-way point) with a window size of only 200 milliseconds. It was seen that while most features were suitable for this purpose, Power Spectral Density Averages (PSD-Av) performed best. The system correctly classified the sEMG against the elbow angle in 100% of cases when only two discrete positions (full extension and elbow at right angle) were considered, while correct classification was 89% when there were three discrete positions. However, sEMG was unable to accurately determine the elbow position when five discrete angles were considered. It was also observed that there was no difference between the extension and flexion phases.

  4. Crowdsourced validation of a machine-learning classification system for autism and ADHD.

    PubMed

    Duda, M; Haber, N; Daniels, J; Wall, D P

    2017-05-16

    Autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) together affect >10% of the children in the United States, but considerable behavioral overlaps between the two disorders can often complicate differential diagnosis. Currently, there is no screening test designed to differentiate between the two disorders, and with waiting times from initial suspicion to diagnosis upwards of a year, methods to quickly and accurately assess risk for these and other developmental disorders are desperately needed. In a previous study, we found that four machine-learning algorithms were able to accurately (area under the curve (AUC)>0.96) distinguish ASD from ADHD using only a small subset of items from the Social Responsiveness Scale (SRS). Here, we expand upon our prior work by including a novel crowdsourced data set of responses to our predefined top 15 SRS-derived questions from parents of children with ASD (n=248) or ADHD (n=174) to improve our model's capability to generalize to new, 'real-world' data. By mixing these novel survey data with our initial archival sample (n=3417) and performing repeated cross-validation with subsampling, we created a classification algorithm that performs with AUC=0.89±0.01 using only 15 questions.

  5. CAD-RADS - a new clinical decision support tool for coronary computed tomography angiography.

    PubMed

    Foldyna, Borek; Szilveszter, Bálint; Scholtz, Jan-Erik; Banerji, Dahlia; Maurovich-Horvat, Pál; Hoffmann, Udo

    2018-04-01

    Coronary computed tomography angiography (CTA) has been established as an accurate method to non-invasively assess coronary artery disease (CAD). The proposed 'Coronary Artery Disease Reporting and Data System' (CAD-RADS) may enable standardised reporting of the broad spectrum of coronary CTA findings related to the presence, extent and composition of coronary atherosclerosis. The CAD-RADS classification is a comprehensive tool for summarising findings on a per-patient-basis dependent on the highest-grade coronary artery lesion, ranging from CAD-RADS 0 (absence of CAD) to CAD-RADS 5 (total occlusion of a coronary artery). In addition, it provides suggestions for clinical management for each classification, including further testing and therapeutic options. Despite some limitations, CAD-RADS may facilitate improved communication between imagers and patient caregivers. As such, CAD-RADS may enable a more efficient use of coronary CTA leading to more accurate utilisation of invasive coronary angiograms. Furthermore, widespread use of CAD-RADS may facilitate registry-based research of diagnostic and prognostic aspects of CTA. • CAD-RADS is a tool for standardising coronary CTA reports. • CAD-RADS includes clinical treatment recommendations based on CTA findings. • CAD-RADS has the potential to reduce variability of CTA reports.

  6. Acetabular Cup Revision.

    PubMed

    Kim, Young-Ho

    2017-09-01

    The use of acetabular cup revision arthroplasty is on the rise as demands for total hip arthroplasty, improved life expectancies, and the need for individual activity increase. For an acetabular cup revision to be successful, the cup should gain stable fixation within the remaining supportive bone of the acetabulum. Since the patient's remaining supportive acetabular bone stock plays an important role in the success of revision, accurate classification of the degree of acetabular bone defect is necessary. The Paprosky classification system is most commonly used when determining the location and degree of acetabular bone loss. Common treatment options include: acetabular liner exchange, high hip center, oblong cup, trabecular metal cup with augment, bipolar cup, bulk structural graft, cemented cup, uncemented cup including jumbo cup, acetabular reinforcement device (cage), trabecular metal cup cage. The optimal treatment option is dependent upon the degree of the discontinuity, the amount of available bone stock and the likelihood of achieving stable fixation upon supportive host bone. To achieve successful acetabular cup revision, accurate evaluation of bone defect preoperatively and intraoperatively, proper choice of method of acetabular revision according to the evaluation of acetabular bone deficiency, proper technique to get primary stability of implant such as precise grafting technique, and stable fixation of implant are mandatory.

  7. A Land System representation for global assessments and land-use modeling.

    PubMed

    van Asselen, Sanneke; Verburg, Peter H

    2012-10-01

    Current global scale land-change models used for integrated assessments and climate modeling are based on classifications of land cover. However, land-use management intensity and livestock keeping are also important aspects of land use, and are an integrated part of land systems. This article aims to classify, map, and to characterize Land Systems (LS) at a global scale and analyze the spatial determinants of these systems. Besides proposing such a classification, the article tests if global assessments can be based on globally uniform allocation rules. Land cover, livestock, and agricultural intensity data are used to map LS using a hierarchical classification method. Logistic regressions are used to analyze variation in spatial determinants of LS. The analysis of the spatial determinants of LS indicates strong associations between LS and a range of socioeconomic and biophysical indicators of human-environment interactions. The set of identified spatial determinants of a LS differs among regions and scales, especially for (mosaic) cropland systems, grassland systems with livestock, and settlements. (Semi-)Natural LS have more similar spatial determinants across regions and scales. Using LS in global models is expected to result in a more accurate representation of land use capturing important aspects of land systems and land architecture: the variation in land cover and the link between land-use intensity and landscape composition. Because the set of most important spatial determinants of LS varies among regions and scales, land-change models that include the human drivers of land change are best parameterized at sub-global level, where similar biophysical, socioeconomic and cultural conditions prevail in the specific regions. © 2012 Blackwell Publishing Ltd.

  8. Central Sensitization-Based Classification for Temporomandibular Disorders: A Pathogenetic Hypothesis

    PubMed Central

    Cattaneo, Ruggero; Marci, Maria Chiara; Pietropaoli, Davide; Ortu, Eleonora

    2017-01-01

    There is growing evidence of dysregulation of the Autonomic Nervous System (ANS) and central pain pathways in temporomandibular disorders (TMD). Some authors include certain forms of TMD among central sensitization syndromes (CSS), a group of pathologies characterized by central morphofunctional alterations. The Central Sensitization Inventory (CSI) is useful for clinical diagnosis. Clinical examination and the CSI cannot identify the central site(s) affected in these diseases. Ultralow frequency transcutaneous electrical nerve stimulation (ULFTENS) is extensively used in TMD and in dental clinical practice because of its effects on descending pain modulation pathways. The Diagnostic Criteria for TMD (DC/TMD) are the most accurate tool for diagnosis and classification of TMD. However, it includes the CSI to investigate central aspects of TMD. Preliminary data on sensory ULFTENS show it is a reliable tool for the study of central and autonomic pathways in TMD. An alternative classification based on the presence of Central Sensitization and on individual response to sensory ULFTENS is proposed. TMD may be classified into 4 groups: (a) TMD with Central Sensitization, ULFTENS Responders; (b) TMD with Central Sensitization, ULFTENS Nonresponders; (c) TMD without Central Sensitization, ULFTENS Responders; (d) TMD without Central Sensitization, ULFTENS Nonresponders. This pathogenetic classification of TMD may help to differentiate therapy and aetiology. PMID:28932132

  9. Conflicts in wound classification of neonatal operations.

    PubMed

    Vu, Lan T; Nobuhara, Kerilyn K; Lee, Hanmin; Farmer, Diana L

    2009-06-01

    This study sought to determine the reliability of wound classification guidelines when applied to neonatal operations. This study is a cross-sectional web-based survey of pediatric surgeons. From a random sample of 22 neonatal operations, participants classified each operation as "clean," "clean-contaminated," "contaminated," or "dirty or infected," and specified duration of perioperative antibiotics as "none," "single preoperative," "24 hours," or ">24 hours." Unweighted kappa score was calculated to estimate interrater reliability. Overall interrater reliability for wound classification was poor (kappa = 0.30). The following operations were classified as clean: pyloromyotomy, resection of sequestration, resection of sacrococcygeal teratoma, oophorectomy, and immediate repair of omphalocele; as clean-contaminated: Ladd procedure, bowel resection for midgut volvulus and meconium peritonitis, fistula ligation of tracheoesophageal fistula, primary esophageal anastomosis of esophageal atresia, thoracic lobectomy, staged closure of gastroschisis, delayed repair and primary closure of omphalocele, perineal anoplasty and diverting colostomy for imperforate anus, anal pull-through for Hirschsprung disease, and colostomy closure; and as dirty: perforated necrotizing enterocolitis. There is poor consensus on how neonatal operations are classified based on contamination. An improved classification system will provide more accurate risk assessment for development of surgical site infections and identify neonates who would benefit from antibiotic prophylaxis.

  10. The influence of radiographic viewing perspective and demographics on the Critical Shoulder Angle

    PubMed Central

    Suter, Thomas; Popp, Ariane Gerber; Zhang, Yue; Zhang, Chong; Tashjian, Robert Z.; Henninger, Heath B.

    2014-01-01

    Background Accurate assessment of the critical shoulder angle (CSA) is important in clinical evaluation of degenerative rotator cuff tears. This study analyzed the influence of radiographic viewing perspective on the CSA, developed a classification system to identify malpositioned radiographs, and assessed the relationship between the CSA and demographic factors. Methods Glenoid height, width and retroversion were measured on 3D CT reconstructions of 68 cadaver scapulae. A digitally reconstructed radiograph was aligned perpendicular to the scapular plane, and retroversion was corrected to obtain a true antero-posterior (AP) view. In 10 scapulae, incremental anteversion/retroversion and flexion/extension views were generated. The CSA was measured and a clinically applicable classification system was developed to detect views with >2° change in CSA versus true AP. Results The average CSA was 33±4°. Intra- and inter-observer reliability was high (ICC≥0.81) but decreased with increasing viewing angle. Views beyond 5° anteversion, 8° retroversion, 15° flexion and 26° extension resulted in >2° deviation of the CSA compared to true AP. The classification system was capable of detecting aberrant viewing perspectives with sensitivity of 95% and specificity of 53%. Correlations between glenoid size and CSA were small (R≤0.3), and CSA did not vary by gender (p=0.426) or side (p=0.821). Conclusions The CSA was most susceptible to malposition in ante/retroversion. Deviations as little as 5° in anteversion resulted in a CSA >2° from true AP. A new classification system refines the ability to collect true AP radiographs of the scapula. The CSA was unaffected by demographic factors. PMID:25591458

  11. Prognostication in eye cancer: the latest tumor, node, metastasis classification and beyond

    PubMed Central

    Kivelä, T; Kujala, E

    2013-01-01

    The tumour, node, metastasis (TNM) classification is a universal cancer staging system, which has been used for five decades. The current seventh edition became effective in 2010 and covers six ophthalmic sites: eyelids, conjunctiva, uvea, retina, orbit, and lacrimal gland; and five cancer types: carcinoma, sarcoma, melanoma, retinoblastoma, and lymphoma. The TNM categories are based on the anatomic extent of the primary tumour (T), regional lymph node metastases (N), and systemic metastases (M). The T categories of ophthalmic cancers are based on the size of the primary tumour and any invasion of periocular structures. The anatomic category is used to determine the TNM stage that correlates with survival. Such staging is currently implemented only for carcinoma of the eyelid and melanoma of the uvea. The classification of ciliary body and choroidal melanoma is the only one based on clinical evidence so far: a database of 7369 patients analysed by the European Ophthalmic Oncology Group. It spans a prognosis from 96% 5-year survival for stage I to 97% 5-year mortality for stage IV. The most accurate criterion for prognostication in uveal melanoma is, however, analysis of chromosomal alterations and gene expression. When such data are available, the TNM stage may be used for further stratification. Prognosis in retinoblastoma is frequently assigned by using an international classification, which predicts conservation of the eye and vision, and an international staging separate from the TNM system, which predicts survival. The TNM cancer staging manual is a useful tool for all ophthalmologists managing eye cancer. PMID:23258307

  12. Bayesian classification for the selection of in vitro human embryos using morphological and clinical data.

    PubMed

    Morales, Dinora Araceli; Bengoetxea, Endika; Larrañaga, Pedro; García, Miguel; Franco, Yosu; Fresnada, Mónica; Merino, Marisa

    2008-05-01

    In vitro fertilization (IVF) is a medically assisted reproduction technique that enables infertile couples to achieve successful pregnancy. Given the uncertainty of the treatment, we propose an intelligent decision support system based on supervised classification by Bayesian classifiers to aid in the selection of the most promising embryos that will form the batch to be transferred to the woman's uterus. The aim of the supervised classification system is to improve the overall success rate of each IVF treatment in which a batch of embryos is transferred each time, where success is achieved when implantation (i.e. pregnancy) is obtained. For ethical reasons, different legislative restrictions on this technique apply in each country. In Spain, legislation allows a maximum of three embryos to form each transfer batch. As a result, clinicians prefer to select the embryos by non-invasive embryo examination based on simple methods and observation focused on the morphology and dynamics of embryo development after fertilization. This paper proposes the application of Bayesian classifiers to this embryo selection problem in order to provide a decision support system that allows a more accurate selection than the current procedures, which rely fully on the expertise and experience of embryologists. For this, we propose to take into consideration a reduced subset of feature variables related to embryo morphology and clinical data of patients, and to induce Bayesian classification models from these data. Results obtained applying a filter technique to choose the subset of variables, and the performance of Bayesian classifiers using them, are presented.
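
    As a simple, runnable stand-in for the Bayesian classifiers discussed above, the sketch below cross-validates a Gaussian naive Bayes model on a reduced feature set; the features and labels are invented, and the study's own models may be richer Bayesian network classifiers.

```python
# Minimal sketch of a Bayesian classifier applied to embryo features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((300, 5))                       # e.g. cell number, fragmentation, symmetry, age, ...
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # placeholder labels: 1 = implantation achieved

print(cross_val_score(GaussianNB(), X, y, cv=5).mean())
```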

  13. Comparing Features for Classification of MEG Responses to Motor Imagery

    PubMed Central

    Halme, Hanna-Leena; Parkkonen, Lauri

    2016-01-01

    Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
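
    Among the features listed above, CSP can be written compactly as a generalized eigendecomposition of the two class covariance matrices, followed by log-variance features. The sketch below shows that core computation; regularization, frequency filtering and epoch selection are omitted, and the shapes are illustrative.

```python
# Compact sketch of common spatial patterns (CSP) and the usual log-variance features.
import numpy as np
from scipy.linalg import eigh

def _mean_cov(epochs):
    """Average normalized spatial covariance over trials."""
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in epochs], axis=0)

def csp_filters(epochs_a, epochs_b, n_filters=6):
    """epochs_*: arrays of shape (n_trials, n_channels, n_samples)."""
    Ca, Cb = _mean_cov(epochs_a), _mean_cov(epochs_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; extreme eigenvalues give CSP filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, picks].T                    # (n_filters, n_channels)

def csp_features(epochs, filters):
    """Log-variance of spatially filtered epochs, the standard CSP feature vector."""
    return np.array([np.log(np.var(filters @ x, axis=1)) for x in epochs])
```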

  14. Classification of earth terrain using polarimetric synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Lim, H. H.; Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Shin, R. T.; Van Zyl, J. J.

    1989-01-01

    Supervised and unsupervised classification techniques are developed and used to classify the earth terrain components from SAR polarimetric images of San Francisco Bay and Traverse City, Michigan. The supervised techniques include the Bayes classifiers, normalized polarimetric classification, and simple feature classification using discriminates such as the absolute and normalized magnitude response of individual receiver channel returns and the phase difference between receiver channels. An algorithm is developed as an unsupervised technique which classifies terrain elements based on the relationship between the orientation angle and the handedness of the transmitting and receiving polariation states. It is found that supervised classification produces the best results when accurate classifier training data are used, while unsupervised classification may be applied when training data are not available.

  15. The ability of video image analysis to predict lean meat yield and EUROP score of lamb carcasses.

    PubMed

    Einarsson, E; Eythórsdóttir, E; Smith, C R; Jónmundsson, J V

    2014-07-01

    A total of 862 lamb carcasses that were evaluated by both the VIAscan® and the current EUROP classification system were deboned and the actual yield was measured. Models were derived for predicting lean meat yield of the legs (Leg%), loin (Loin%) and shoulder (Shldr%) using the best VIAscan® variables selected by stepwise regression analysis of a calibration data set (n=603). The equations were tested on a validation data set (n=259). The results showed that the VIAscan® predicted lean meat yield in the leg, loin and shoulder with an R² of 0.60, 0.31 and 0.47, respectively, whereas the current EUROP system predicted lean yield with an R² of 0.57, 0.32 and 0.37, respectively, for the three carcass parts. The VIAscan® also predicted the EUROP score of the trial carcasses, using a model derived from an earlier trial. The EUROP classification from the VIAscan® and the current system were compared for their ability to explain the variation in lean yield of the whole carcass (LMY%) and trimmed fat (FAT%). The predicted EUROP scores from the VIAscan® explained 36% of the variation in LMY% and 60% of the variation in FAT%, compared with the current EUROP system that explained 49% and 72%, respectively. The EUROP classification obtained by the VIAscan® was tested against a panel of three expert classifiers (n=696). The VIAscan® classification agreed with 82% of conformation and 73% of the fat classes assigned by the panel of expert classifiers. It was concluded that the VIAscan® provides a technology that can directly predict LMY% of lamb carcasses with more accuracy than the current EUROP classification system. The VIAscan® is also capable of classifying lamb carcasses into EUROP classes with an accuracy that fulfils minimum demands for the Icelandic sheep industry. Although the VIAscan® prediction of the Loin% is low, it is comparable to the current EUROP system, and should not hinder the adoption of the technology to estimate the yield of Icelandic lambs as it delivered a more accurate prediction for the Leg%, Shldr% and overall LMY% with negligible prediction bias.

  16. Vesicular stomatitis forecasting based on Google Trends

    PubMed Central

    Lu, Yi; Zhou, GuangYa; Chen, Qin

    2018-01-01

    Background Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods American VS outbreak data were collected from the OIE. The data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, it was found that there was a relationship between outbreaks and keywords derived from Google Trends. Finally, the prediction model was constructed based on qualitative classification and quantitative regression. Results For the regression model, the Pearson correlation coefficients between the predicted outbreaks and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best classification predictive model as the result. The results showed that the SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. Conclusion This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models obtain more accurate forecasts. PMID:29385198
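
    The quantitative regression side of such an approach can be illustrated by correlating weekly keyword search volumes with outbreak counts and fitting a linear model; the arrays below are placeholders, not the study's data.

```python
# Sketch: correlate Google Trends keyword series with outbreak counts and fit a regression.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import LinearRegression

weeks = 52
rng = np.random.default_rng(3)
trends = rng.random((weeks, 5))                               # 5 keyword volume series
outbreaks = (10 * trends[:, 0] + rng.normal(0, 1, weeks)).round()  # synthetic weekly counts

r, _ = pearsonr(trends[:, 0], outbreaks)
rho, _ = spearmanr(trends[:, 0], outbreaks)
model = LinearRegression().fit(trends, outbreaks)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, R^2 = {model.score(trends, outbreaks):.2f}")
```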

  17. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics.

    PubMed

    Abdolali, Fatemeh; Zoroofi, Reza Aghaeizadeh; Otake, Yoshito; Sato, Yoshinobu

    2017-02-01

    Accurate detection of maxillofacial cysts is an essential step for diagnosis, monitoring and planning therapeutic intervention. Cysts can be of various sizes and shapes, and existing detection methods lead to poor results. Customizing automatic detection systems to gain sufficient accuracy in clinical practice is highly challenging. For this purpose, integrating engineering knowledge in efficient feature extraction is essential. This paper presents a novel framework for maxillofacial cyst detection. A hybrid methodology based on surface and texture information is introduced. The proposed approach consists of three main steps: first, each cystic lesion is segmented with high accuracy; then, in the second and third steps, feature extraction and classification are performed. Contourlet and SPHARM coefficients are utilized as texture and shape features which are fed into the classifier. Two different classifiers are used in this study, i.e. support vector machine and sparse discriminant analysis. Generally, SPHARM coefficients are estimated by the iterative residual fitting (IRF) algorithm, which is based on a stepwise regression method. In order to improve the accuracy of IRF estimation, a method based on extra orthogonalization is employed to reduce linear dependency. We have utilized a ground-truth dataset consisting of cone beam CT images of 96 patients, belonging to three maxillofacial cyst categories: radicular cyst, dentigerous cyst and keratocystic odontogenic tumor. Using orthogonalized SPHARM, the residual sum of squares is decreased, which leads to a more accurate estimation. Analysis of the results based on statistical measures such as specificity, sensitivity, positive predictive value and negative predictive value is reported. A classification rate of 96.48% is achieved using sparse discriminant analysis and orthogonalized SPHARM features. Classification accuracy improved by at least 8.94% with respect to conventional features. This study demonstrated that our proposed methodology can improve the computer assisted diagnosis (CAD) performance by incorporating more discriminative features. Using orthogonalized SPHARM is promising in computerized cyst detection and may have a significant impact in future CAD systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Suicide Surveillance in the U.S. Military: Reporting and Classification Biases in Rate Calculations

    ERIC Educational Resources Information Center

    Carr, Joel R.; Hoge, Charles W.; Gardner, John; Potter, Robert

    2004-01-01

    The military has a well-defined population with suicide prevention programs that have been recognized as possible models for civilian suicide prevention efforts. Monitoring prevention programs requires accurate reporting. In civilian settings, several studies have confirmed problems in the reporting and classification of suicides. This analysis…

  19. Texture as a basis for acoustic classification of substrate in the nearshore region

    NASA Astrophysics Data System (ADS)

    Dennison, A.; Wattrus, N. J.

    2016-12-01

    Segmentation and classification of substrate type at two locations in Lake Superior are performed using multivariate statistical processing of textural measures derived from shallow-water, high-resolution multibeam bathymetric data. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical characteristics of a sonar backscatter mosaic depend on substrate type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing can capture the pertinent details about the bottom type that are rich in textural information. Further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. Preliminary results from an analysis of bathymetric data and ground-truth samples collected from the Amnicon River, Superior, Wisconsin, and the Lester River, Duluth, Minnesota, demonstrate the ability to process and develop a novel classification scheme for the bottom type in two geomorphologically distinct areas.

  20. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    PubMed

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensorial analysis of these compounds is not straightforward and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm-1). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction accuracy increased (over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.
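    The leave-one-out cross-validation step can be sketched as follows, using linear discriminant analysis as a stand-in for the paper's discriminant analysis (SIMCA is not reproduced here). The spectra, class labels and wavenumber grid are synthetic assumptions.

```python
# Minimal sketch of leave-one-out cross-validation for a two-class tannin
# source problem (skin vs seed) on synthetic "spectra".
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
n_wavenumbers = 200                    # stand-in for the 400-4000 cm-1 grid
skin = rng.normal(loc=0.0, size=(30, n_wavenumbers))
seed = rng.normal(loc=0.3, size=(30, n_wavenumbers))
X = np.vstack([skin, seed])
y = np.array([0] * 30 + [1] * 30)      # 0 = skin tannin, 1 = seed tannin

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", round(scores.mean(), 3))
```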

  1. Teacher, parent, and peer reports of early aggression as screening measures for long-term maladaptive outcomes: who provides the most useful information?

    PubMed

    Clemans, Katherine H; Musci, Rashelle J; Leoutsakos, Jeannie-Marie S; Ialongo, Nicholas S

    2014-04-01

    This study compared the ability of teacher, parent, and peer reports of aggressive behavior in early childhood to accurately classify cases of maladaptive outcomes in late adolescence and early adulthood. Weighted kappa analyses determined optimal cut points and relative classification accuracy among teacher, parent, and peer reports of aggression assessed for 691 students (54% male; 84% African American and 13% White) in the fall of first grade. Outcomes included antisocial personality, substance use, incarceration history, risky sexual behavior, and failure to graduate from high school on time. Peer reports were the most accurate classifier of all outcomes in the full sample. For most outcomes, the addition of teacher or parent reports did not improve overall classification accuracy once peer reports were accounted for. Additional gender-specific and adjusted kappa analyses supported the superior classification utility of the peer report measure. The results suggest that peer reports provided the most useful classification information of the 3 aggression measures. Implications for targeted intervention efforts in which screening measures are used to identify at-risk children are discussed.
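    The cut-point search described above can be sketched as a scan over candidate thresholds, keeping the threshold whose dichotomized score agrees best (by kappa) with the outcome. The rating scale, the synthetic data and the use of scikit-learn's cohen_kappa_score in place of the study's weighted kappa procedure are assumptions.

```python
# Sketch of a cut-point search: binarize a continuous aggression rating at
# each candidate threshold and keep the threshold with the highest kappa
# against the dichotomous outcome.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
aggression = rng.uniform(1, 5, size=691)             # e.g. a 1-5 rating scale
outcome = (aggression + rng.normal(scale=1.0, size=691) > 3.5).astype(int)

best = max(
    (cohen_kappa_score(outcome, (aggression >= c).astype(int)), c)
    for c in np.arange(1.5, 5.0, 0.25)
)
print("best kappa %.3f at cut point %.2f" % best)
```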

  2. Neurofeedback Training for BCI Control

    NASA Astrophysics Data System (ADS)

    Neuper, Christa; Pfurtscheller, Gert

    Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].

  3. Automatic Analysis of Pronunciations for Children with Speech Sound Disorders.

    PubMed

    Dudy, Shiran; Bedrick, Steven; Asgari, Meysam; Kain, Alexander

    2018-07-01

    Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many online commercial CAPT apps, there is no consensus among Speech Language Therapists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback necessary to allow children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about error pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced or not. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach, and show that the proposed techniques lead to classification performance that is more similar to that of a human expert.
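    For readers unfamiliar with the baseline, a basic Goodness of Pronunciation score is commonly computed as the average log-posterior of the intended phoneme over its aligned frames, thresholded to decide correct versus mispronounced. The sketch below is a hypothetical illustration with synthetic frame posteriors; the paper's two proposed GOP variants are not reproduced.

```python
# Illustrative sketch of a basic GOP score: mean log-posterior of the target
# phone across its aligned frames, thresholded for a correct/incorrect call.
import numpy as np

def gop_score(frame_posteriors, phone_idx):
    """Mean log-posterior of the target phone across its frames."""
    probs = np.clip(frame_posteriors[:, phone_idx], 1e-8, 1.0)
    return float(np.mean(np.log(probs)))

rng = np.random.default_rng(4)
posteriors = rng.dirichlet(alpha=np.ones(40), size=25)   # 25 frames, 40 phones
score = gop_score(posteriors, phone_idx=7)
print("GOP:", round(score, 3),
      "->", "correct" if score > -3.0 else "mispronounced")
```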

  4. Non-Gated Laser Induced Breakdown Spectroscopy Provides a Powerful Segmentation Tool on Concomitant Treatment of Characteristic and Continuum Emission

    PubMed Central

    Dasari, Ramachandra Rao; Barman, Ishan; Gundawar, Manoj Kumar

    2014-01-01

    We demonstrate the application of non-gated laser induced breakdown spectroscopy (LIBS) for characterization and classification of organic materials with similar chemical composition. While use of such a system introduces substantive continuum background in the spectral dataset, we show that appropriate treatment of the continuum and characteristic emission results in accurate discrimination of pharmaceutical formulations of similar stoichiometry. Specifically, our results suggest that near-perfect classification can be obtained by employing suitable multivariate analysis on the acquired spectra, without prior removal of the continuum background. Indeed, we conjecture that pre-processing in the form of background removal may introduce spurious features in the signal. Our findings in this report significantly advance the prior results in time-integrated LIBS application and suggest the possibility of a portable, non-gated LIBS system as a process analytical tool, given its simple instrumentation needs, real-time capability and lack of sample preparation requirements. PMID:25084522

  5. Refining diagnosis of Parkinson's disease with deep learning-based interpretation of dopamine transporter imaging.

    PubMed

    Choi, Hongyoon; Ha, Seunggyun; Im, Hyung Jun; Paek, Sun Ha; Lee, Dong Soo

    2017-01-01

    Dopaminergic degeneration is a pathologic hallmark of Parkinson's disease (PD), which can be assessed by dopamine transporter imaging such as FP-CIT SPECT. Until now, imaging has been routinely interpreted by humans, though this can show interobserver variability and result in inconsistent diagnoses. In this study, we developed a deep learning-based FP-CIT SPECT interpretation system to refine the imaging diagnosis of Parkinson's disease. This system, trained on SPECT images of PD patients and normal controls, shows high classification accuracy comparable with that of experts' evaluations that refer to quantification results. Its high accuracy was validated in an independent cohort composed of patients with PD and nonparkinsonian tremor. In addition, we showed that some patients clinically diagnosed as PD who have scans without evidence of dopaminergic deficit (SWEDD), an atypical subgroup of PD, could be reclassified by our automated system. Our results suggested that the deep learning-based model could accurately interpret FP-CIT SPECT and overcome the variability of human evaluation. It could help imaging diagnosis of patients with uncertain Parkinsonism and provide objective patient group classification, particularly for SWEDD, in further clinical studies.

  6. Classification of US hydropower dams by their modes of operation

    DOE PAGES

    McManamay, Ryan A.; Oigbokie, II, Clement O.; Kao, Shih -Chieh; ...

    2016-02-19

    A key challenge to understanding ecohydrologic responses to dam regulation is the absence of a universally transferable classification framework for how dams operate. In the present paper, we develop a classification system to organize the modes of operation (MOPs) for U.S. hydropower dams and powerplants. To determine the full diversity of MOPs, we mined federal documents, open-access data repositories, and internet sources. We then used CART classification trees to predict MOPs based on physical characteristics, regulation, and project generation. Finally, we evaluated how much variation MOPs explained in sub-daily discharge patterns for stream gages downstream of hydropower dams. After reviewing information for 721 dams and 597 power plants, we developed a 2-tier hierarchical classification based on 1) the storage and control of flows to powerplants, and 2) the presence of a diversion around the natural stream bed. This resulted in nine tier-1 MOPs representing a continuum of operations from strictly peaking, to reregulating, to run-of-river, and two tier-2 MOPs, representing diversion and integral dam-powerhouse configurations. Although MOPs differed in physical characteristics and energy production, classification trees had low accuracies (<62%), which suggested accurate evaluations of MOPs may require individual attention. MOPs and dam storage explained 20% of the variation in downstream subdaily flow characteristics and showed consistent alterations in subdaily flow patterns from reference streams. Lastly, this standardized classification scheme is important for future research including estimating reservoir operations for large-scale hydrologic models and evaluating project economics, environmental impacts, and mitigation.
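    The CART step can be sketched with a standard decision tree: predict a dam's MOP class from physical and generation attributes and score it by cross-validation. The feature names, synthetic records and random labels below are assumptions, so the printed accuracy is only illustrative.

```python
# Sketch of predicting MOP class from dam attributes with a CART-style tree.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 721
X = np.column_stack([
    rng.lognormal(mean=8, sigma=2, size=n),    # storage volume (hypothetical)
    rng.uniform(1, 200, size=n),               # dam height (hypothetical)
    rng.lognormal(mean=3, sigma=1.5, size=n),  # nameplate capacity (hypothetical)
])
mop = rng.integers(0, 9, size=n)               # nine tier-1 MOP classes

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
acc = cross_val_score(tree, X, mop, cv=5).mean()
print("cross-validated accuracy: %.2f" % acc)  # near-chance on random labels
```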

  8. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    PubMed

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
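    The set-level transformation itself is simple to sketch: replace the per-gene expression matrix with one feature per gene set (here the mean expression of its member genes) and train a classifier on the reduced matrix. The gene sets, expression values and SVM choice below are synthetic assumptions, not the paper's regulon definitions.

```python
# Sketch of converting gene-level features to set-level features and
# comparing cross-validated accuracy of the two representations.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
expr = rng.normal(size=(60, 500))              # 60 samples x 500 genes
labels = rng.integers(0, 2, size=60)           # phenotype classes
gene_sets = [rng.choice(500, size=20, replace=False) for _ in range(30)]

# One feature per gene set: mean expression of its member genes.
set_level = np.column_stack([expr[:, gs].mean(axis=1) for gs in gene_sets])
print("gene-level accuracy:",
      cross_val_score(SVC(), expr, labels, cv=5).mean().round(2))
print("set-level accuracy:",
      cross_val_score(SVC(), set_level, labels, cv=5).mean().round(2))
```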

  9. Classification System for the Sudden Unexpected Infant Death Case Registry and its Application

    PubMed Central

    Shapiro-Mendoza, Carrie K.; Camperlengo, Lena; Ludvigsen, Rebecca; Cottengim, Carri; Anderson, Robert N.; Andrew, Thomas; Covington, Theresa; Hauck, Fern R.; Kemp, James; MacDorman, Marian

    2015-01-01

    Sudden unexpected infant deaths (SUID) accounted for 1 in 3 postneonatal deaths in 2010. Sudden infant death syndrome and accidental sleep-related suffocation are among the most frequently reported types of SUID. The causes of these SUID usually are not obvious before a medico-legal investigation and may remain unexplained even after investigation. Lack of consistent investigation practices and an autopsy marker make it difficult to distinguish sudden infant death syndrome from other SUID. Standardized categories might assist in differentiating SUID subtypes and allow for more accurate monitoring of the magnitude of SUID, as well as an enhanced ability to characterize the highest risk groups. To capture information about the extent to which cases are thoroughly investigated and how factors like unsafe sleep may contribute to deaths, CDC created a multistate SUID Case Registry in 2009. As part of the registry, the Centers for Disease Control and Prevention developed a classification system that recognizes the uncertainty about how suffocation or asphyxiation may contribute to death and that accounts for unknown and incomplete information about the death scene and autopsy. This report describes the classification system, including its definitions and decision-making algorithm, and applies the system to 436 US SUID cases that occurred in 2011 and were reported to the registry. These categories, although not replacing official cause-of-death determinations, allow local and state programs to track SUID subtypes, creating a valuable tool to identify gaps in investigation and inform SUID reduction strategies. PMID:24913798

  10. Comparison of transect sampling and object-oriented image classification methods of urbanizing catchments

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Tenenbaum, D. E.

    2009-12-01

    The process of urbanization has major effects on both human and natural systems. In order to monitor these changes and better understand how urban ecological systems work, urban spatial structure and its variation first need to be quantified at a fine scale. Because land use and land cover (LULC) in urbanizing areas is highly heterogeneous, the classification of urbanizing environments is among the most challenging tasks in remote sensing. Although pixel-based methods are a common way to perform classification, the results are not good enough for many research objectives that require more accurate classification data at fine scales. Transect sampling and object-oriented classification methods are more appropriate for urbanizing areas. Tenenbaum applied a transect sampling method, using a computer-based facility within a widely available commercial GIS, in the Glyndon Catchment and the Upper Baismans Run Catchment, Baltimore, Maryland. It was a two-tiered classification system, including a primary level (which includes 7 classes) and a secondary level (which includes 37 categories), and statistical information on LULC was collected. W. Zhou applied an object-oriented method at the parcel level in Gwynn’s Falls Watershed, which includes the two previously mentioned catchments, and six classes were extracted. The two urbanizing catchments are located in greater Baltimore, Maryland and drain into Chesapeake Bay. In this research, the two different methods are compared for six classes (woody, herbaceous, water, ground, pavement and structure). The comparison method uses the segments in the transect method to extract LULC information from the results of the object-oriented method. Classification results were compared in order to evaluate the difference between the two methods. The overall proportions of LULC classes from the two studies show that there is overestimation of structures in the object-oriented method. For the other five classes, the results from the two methods are similar, except for a difference in the proportions of the woody class. The segment-to-segment comparison shows that the resolution of the light detection and ranging (LIDAR) data used in the object-oriented method does affect the accuracy of the classification. Shadows of trees and structures are still a significant problem in the object-oriented method. Neither method was capable of detecting classes that make up a small proportion of the catchments, such as water.

  11. Application of Machine Learning Approaches for Classifying Sitting Posture Based on Force and Acceleration Sensors.

    PubMed

    Zemp, Roland; Tanadini, Matteo; Plüss, Stefan; Schnüriger, Karin; Singh, Navrag B; Taylor, William R; Lorenzetti, Silvio

    2016-01-01

    Occupational musculoskeletal disorders, particularly chronic low back pain (LBP), are ubiquitous due to prolonged static sitting or nonergonomic sitting positions. Therefore, the aim of this study was to develop an instrumented chair with force and acceleration sensors to determine the accuracy of automatically identifying the user's sitting position by applying five different machine learning methods (Support Vector Machines, Multinomial Regression, Boosting, Neural Networks, and Random Forest). Forty-one subjects were requested to sit four times in seven different prescribed sitting positions (total 1148 samples). Sixteen force sensor values and the backrest angle were used as the explanatory variables (features) for the classification. The different classification methods were compared by means of a Leave-One-Out cross-validation approach. The best performance was achieved using the Random Forest classification algorithm, producing a mean classification accuracy of 90.9% for subjects with which the algorithm was not familiar. The classification accuracy varied between 81% and 98% for the seven different sitting positions. The present study showed the possibility of accurately classifying different sitting positions by means of the introduced instrumented office chair combined with machine learning analyses. The use of such novel approaches for the accurate assessment of chair usage could offer insights into the relationships between sitting position, sitting behaviour, and the occurrence of musculoskeletal disorders.
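    The evaluation scheme can be sketched with a leave-one-subject-out split, so the classifier is always tested on a subject it has not seen. The sensor values below are synthetic; the 41-subject, 7-posture layout follows the abstract, but the features and labels are placeholders.

```python
# Sketch of leave-one-subject-out evaluation of a Random Forest posture
# classifier on 16 force-sensor values plus the backrest angle.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(7)
n_subjects, reps, n_postures = 41, 4, 7
n = n_subjects * reps * n_postures                    # 1148 samples
X = rng.normal(size=(n, 17))                          # 16 forces + backrest angle
y = np.tile(np.repeat(np.arange(n_postures), reps), n_subjects)
groups = np.repeat(np.arange(n_subjects), reps * n_postures)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups).mean()
print("leave-one-subject-out accuracy: %.2f" % acc)
```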

  12. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    NASA Astrophysics Data System (ADS)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

    To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become common to use image processing plugins to add new capabilities/functionalities to Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated in a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index) and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching remote sensing and image processing was lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.

  13. Variations of the superficial middle cerebral vein: classification using three-dimensional CT angiography.

    PubMed

    Suzuki, Y; Matsumoto, K

    2000-05-01

    Classification of variations of the superficial middle cerebral vein (SMCV) remains ambiguous. We propose a new classification system based on embryologic development for preoperative examination. Three-dimensional CT angiography was used to evaluate 500 SMCVs (in 250 patients). The outflow vessels from the SMCV were classified into seven types on the basis of embryologic development. The 3D CT angiograms in axial stereoscopic and oblique views and multiple intensity projection images were evaluated by the same neurosurgeon on two occasions. Inconsistent interpretations were regarded as equivocal. Three-dimensional CT angiography clearly depicted the SMCV running along the lesser wing or the middle cranial fossa. However, the outflow vessel could not be confirmed as the sphenoparietal, cavernous, or emissary type in 39 (8%) of the sides. SMCVs running in the middle cranial fossa to join the transverse sinus or superior petrosal sinus were accurately identified. SMCVs were present in 456 sides: 62% entered the sphenoparietal sinus or the cavernous sinus and 12% joined the emissary vein. Nine vessels were the superior petrosal type, 10 the basal type, 12 the squamosal type, and 44 the undeveloped type. Three-dimensional CT angiography can depict the vessels and their anatomic relationship to the bone structure, allowing identification of the SMCV variant in individual patients. Preoperative planning for skull base surgery requires such information to reduce the invasiveness of the procedure. With the use of our classification system, 3D CT angiography can provide exact and practical information concerning the SMCV.

  14. Automated quasi-3D spine curvature quantification and classification

    NASA Astrophysics Data System (ADS)

    Khilari, Rupal; Puchin, Juris; Okada, Kazunori

    2018-02-01

    Scoliosis is a highly prevalent spine deformity that has traditionally been diagnosed through measurement of the Cobb angle on radiographs. More recent technologies such as the commercial EOS imaging system, although more accurate, still require manual intervention for selecting the extremes of the vertebrae forming the Cobb angle. This results in a high degree of inter- and intra-observer error in determining the extent of spine deformity. Our primary focus is to eliminate the need for manual intervention by robustly quantifying the curvature of the spine in three dimensions, making it consistent across multiple observers. Given the vertebrae centroids, the proposed Vertebrae Sequence Angle (VSA) estimation and segmentation algorithm finds the largest angle between consecutive pairs of centroids within multiple inflection points on the curve. To exploit existing clinical diagnostic standards, the algorithm uses a quasi-3-dimensional approach considering the curvature in the coronal and sagittal projection planes of the spine. Experiments were performed with manually annotated ground-truth classification of publicly available, centroid-annotated CT spine datasets. This was compared with the results obtained from manual Cobb and Centroid angle estimation methods. Using the VSA, we then automatically classify the occurrence and the severity of spine curvature based on Lenke's classification for idiopathic scoliosis. We observe that the results appear promising, with a scoliotic angle lying within +/- 9° of the Cobb and Centroid angles and vertebrae positions differing by at most one position. Our system also resulted in perfect classification of scoliotic from healthy spines on our dataset of six cases.
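    The core idea of an angle computed from vertebral centroids can be sketched as follows: project centroids onto a plane, form direction vectors between consecutive centroids, and take the largest angle between any two of those directions. The centroids below are synthetic and the published algorithm's inflection-point segmentation is not reproduced; this is only an illustration of the geometric step.

```python
# Sketch of a curvature angle from vertebral centroids in a coronal projection.
import numpy as np

def sequence_angle(centroids_2d):
    v = np.diff(centroids_2d, axis=0)                    # consecutive segments
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    angles = [np.degrees(np.arccos(np.clip(np.dot(v[i], v[j]), -1, 1)))
              for i in range(len(v)) for j in range(i + 1, len(v))]
    return max(angles)

z = np.arange(17, dtype=float)                           # 17 vertebral levels
x = 8 * np.sin(z / 17 * np.pi)                           # synthetic lateral bow
coronal = np.column_stack([x, z])
print("estimated curvature angle: %.1f deg" % sequence_angle(coronal))
```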

  15. Calibration of Multiple In Silico Tools for Predicting Pathogenicity of Mismatch Repair Gene Missense Substitutions

    PubMed Central

    Thompson, Bryony A.; Greenblatt, Marc S.; Vallee, Maxime P.; Herkert, Johanna C.; Tessereau, Chloe; Young, Erin L.; Adzhubey, Ivan A.; Li, Biao; Bell, Russell; Feng, Bingjian; Mooney, Sean D.; Radivojac, Predrag; Sunyaev, Shamil R.; Frebourg, Thierry; Hofstra, Robert M.W.; Sijmons, Rolf H.; Boucher, Ken; Thomas, Alun; Goldgar, David E.; Spurdle, Amanda B.; Tavtigian, Sean V.

    2015-01-01

    Classification of rare missense substitutions observed during genetic testing for patient management is a considerable problem in clinical genetics. The Bayesian integrated evaluation of unclassified variants is a solution originally developed for BRCA1/2. Here, we take a step toward an analogous system for the mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) that confer colon cancer susceptibility in Lynch syndrome by calibrating in silico tools to estimate prior probabilities of pathogenicity for MMR gene missense substitutions. A qualitative five-class classification system was developed and applied to 143 MMR missense variants. This identified 74 missense substitutions suitable for calibration. These substitutions were scored using six different in silico tools (Align-Grantham Variation Grantham Deviation, multivariate analysis of protein polymorphisms [MAPP], Mut-Pred, PolyPhen-2.1, Sorting Intolerant From Tolerant, and Xvar), using curated MMR multiple sequence alignments where possible. The output from each tool was calibrated by regression against the classifications of the 74 missense substitutions; these calibrated outputs are interpretable as prior probabilities of pathogenicity. MAPP was the most accurate tool and MAPP + PolyPhen-2.1 provided the best-combined model (R2 = 0.62 and area under receiver operating characteristic = 0.93). The MAPP + PolyPhen-2.1 output is sufficiently predictive to feed as a continuous variable into the quantitative Bayesian integrated evaluation for clinical classification of MMR gene missense substitutions. PMID:22949387

  16. Combining fuzzy set theory and nonlinear stretching enhancement for unsupervised classification of cotton root rot

    USDA-ARS?s Scientific Manuscript database

    Cotton root rot is a destructive disease affecting cotton production. Accurate identification of infected areas within fields is useful for cost-effective control of the disease. The uncertainties caused by various infection stages and newly infected plants make it difficult to achieve accurate clas...

  17. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM1 classifiers will lead to significant improvement in on-board classification capability and accuracy.

  18. Moving beyond the van Krevelen Diagram: A New Stoichiometric Approach for Compound Classification in Organisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivas-Ubach, Albert; Liu, Yina; Bianchi, Thomas S.

    van Krevelen diagrams (O:C vs H:C ratios of elemental formulas) have been widely used in studies to obtain an estimation of the main compound categories present in environmental samples. However, the limits defining a specific compound category based solely on O:C and H:C ratios of elemental formulas have never been accurately listed or proposed to classify metabolites in biological samples. Furthermore, while O:C vs. H:C ratios of elemental formulas can provide an overview of the compound categories, such classification is inefficient because of the large overlap among different compound categories along both axes. We propose a more accurate compound classification for biological samples analyzed by high-resolution mass spectrometry, based on an assessment of the C:H:O:N:P stoichiometric ratios of over 130,000 elemental formulas of compounds classified in 6 main categories: lipids, peptides, amino-sugars, carbohydrates, nucleotides and phytochemical compounds (oxy-aromatic compounds). Our multidimensional stoichiometric compound classification (MSCC) constraints showed a highly accurate categorization of elemental formulas to the main compound categories in biological samples, with over 98% accuracy, representing a substantial improvement over any classification based on the classic van Krevelen diagram. This method represents a significant step forward in environmental research, especially ecological stoichiometry and eco-metabolomics studies, by providing a novel and robust tool to further our understanding of ecosystem structure and function through the chemical characterization of different biological samples.
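    The flavour of a stoichiometric classification rule can be sketched with a small function that maps O:C, H:C and N:C ratios to coarse categories. The thresholds below are illustrative placeholders only; the published MSCC constraints use the full C:H:O:N:P space and different cut values.

```python
# Sketch of rule-based category assignment from element ratios of a formula.
# Thresholds are hypothetical and for illustration only.
def classify_formula(c, h, o, n=0, p=0):
    oc, hc, nc = o / c, h / c, n / c
    if nc > 0.2 and oc < 0.6:
        return "peptide-like"
    if oc >= 0.8 and 1.5 <= hc <= 2.5:
        return "carbohydrate-like"
    if oc < 0.3 and hc >= 1.5:
        return "lipid-like"
    if oc >= 0.3 and hc < 1.2:
        return "oxy-aromatic / phytochemical-like"
    return "unclassified"

print(classify_formula(c=16, h=32, o=2))          # palmitic acid -> lipid-like
print(classify_formula(c=6, h=12, o=6))           # glucose -> carbohydrate-like
```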

  19. Automatic Classification of Sub-Techniques in Classical Cross-Country Skiing Using a Machine Learning Algorithm on Micro-Sensor Data

    PubMed Central

    Seeberg, Trine M.; Tjønnås, Johannes; Haugnes, Pål; Sandbakk, Øyvind

    2017-01-01

    The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs) that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier’s arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier’s arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into the categories training-, validation- and test-data. Overall, we achieved a classification accuracy of 93.9% on the test-data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers. PMID:29283421

  20. Evaluation of novel computerized tomography scoring systems in human traumatic brain injury: An observational, multicenter study

    PubMed Central

    Kivisaari, Riku; Svensson, Mikael; Skrifvars, Markus B.

    2017-01-01

    Background Traumatic brain injury (TBI) is a major contributor to morbidity and mortality. Computerized tomography (CT) scanning of the brain is essential for diagnostic screening of intracranial injuries in need of neurosurgical intervention, but may also provide information concerning patient prognosis and enable baseline risk stratification in clinical trials. Novel CT scoring systems have been developed to improve current prognostic models, including the Stockholm and Helsinki CT scores, but so far have not been extensively validated. The primary aim of this study was to evaluate the Stockholm and Helsinki CT scores for predicting functional outcome, in comparison with the Rotterdam CT score and Marshall CT classification. The secondary aims were to assess which individual components of the CT scores best predict outcome and what additional prognostic value the CT scoring systems contribute to a clinical prognostic model. Methods and findings TBI patients requiring neuro-intensive care and not included in the initial creation of the Stockholm and Helsinki CT scoring systems were retrospectively included from prospectively collected data at the Karolinska University Hospital (n = 720 from 1 January 2005 to 31 December 2014) and Helsinki University Hospital (n = 395 from 1 January 2013 to 31 December 2014), totaling 1,115 patients. The Marshall CT classification and the Rotterdam, Stockholm, and Helsinki CT scores were assessed using the admission CT scans. Known outcome predictors at admission were acquired (age, pupil responsiveness, admission Glasgow Coma Scale, glucose level, and hemoglobin level) and used in univariate, and multivariable, regression models to predict long-term functional outcome (dichotomizations of the Glasgow Outcome Scale [GOS]). In total, 478 patients (43%) had an unfavorable outcome (GOS 1–3). In the combined cohort, overall prognostic performance was more accurate for the Stockholm CT score (Nagelkerke’s pseudo-R2 range 0.24–0.28) and the Helsinki CT score (0.18–0.22) than for the Rotterdam CT score (0.13–0.15) and Marshall CT classification (0.03–0.05). Moreover, the Stockholm and Helsinki CT scores added the most independent prognostic value in the presence of other known clinical outcome predictors in TBI (6% and 4%, respectively). The aggregate traumatic subarachnoid hemorrhage (tSAH) component of the Stockholm CT score was the strongest predictor of unfavorable outcome. The main limitations were the retrospective nature of the study, missing patient information, and the varying follow-up time between the centers. Conclusions The Stockholm and Helsinki CT scores provide more information on the damage sustained, and give a more accurate outcome prediction, than earlier classification systems. The strong independent predictive value of tSAH may reflect an underrated component of TBI pathophysiology. A change to these newer CT scoring systems may be warranted. PMID:28771476
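    Nagelkerke's pseudo-R2, used above to compare the CT scores, rescales the Cox-Snell R2 by its maximum attainable value. The sketch below computes it for a single hypothetical predictor of unfavourable outcome; the data and the predictor are synthetic assumptions.

```python
# Sketch of Nagelkerke's pseudo-R^2 for a logistic model of unfavourable
# outcome from one CT-score-like predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 1115
ct_score = rng.uniform(0, 6, size=n)                       # hypothetical CT score
outcome = (ct_score + rng.normal(scale=2.0, size=n) > 3).astype(int)

model = LogisticRegression().fit(ct_score.reshape(-1, 1), outcome)
p = model.predict_proba(ct_score.reshape(-1, 1))[:, 1]
ll_model = np.sum(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))
p0 = outcome.mean()
ll_null = n * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))

r2_cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
r2_nagelkerke = r2_cox_snell / (1 - np.exp(2 * ll_null / n))
print("Nagelkerke pseudo-R^2: %.2f" % r2_nagelkerke)
```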

  1. Second-degree atrioventricular block.

    PubMed

    Zipes, D P

    1979-09-01

    1) While it is possible only one type of second-degree AV block exists electrophysiologically, the available data do not justify such a conclusion and it would seem more appropriate to remain a "splitter," and advocate separation and definition of multiple mechanisms, than to be a "lumper," and embrace a unitary concept. 2) The clinical classification of type I and type II AV block, based on present scalar electrocardiographic criteria, for the most part accurately differentiates clinically important categories of patients. Such a classification is descriptive, but serves a useful function and should be preserved, taking into account the caveats mentioned above. The site of block generally determines the clinical course for the patient. For most examples of AV block, the type I and type II classification in present use is based on the site of block. Because block in the His-Purkinje system is preceded by small or nonmeasurable increments, it is called type II AV block; but the very fact that it is preceded by small increments is because it occurs in the His-Purkinje system. Similar logic can be applied to type I AV block in the AV node. Exceptions do occur. If the site of AV block cannot be distinguished with certainty from the scalar ECG, an electrophysiologic study will generally reveal the answer.

  2. Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan; Deng, Weiling

    2010-01-01

    To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…

  3. Classification methods to detect sleep apnea in adults based on respiratory and oximetry signals: a systematic review.

    PubMed

    Uddin, M B; Chow, C M; Su, S W

    2018-03-26

    Sleep apnea (SA), a common sleep disorder, can significantly decrease the quality of life, and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The normal diagnostic process of SA using polysomnography is costly and time consuming. In addition, the accuracy of different classification methods to detect SA varies with the use of different physiological signals. If an effective, reliable, and accurate classification method is developed, then the diagnosis of SA and its associated treatment will be time-efficient and economical. This study aims to systematically review the literature and present an overview of classification methods to detect SA using respiratory and oximetry signals and address the automated detection approach. Sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Both airflow and oxygen saturation signals alone were effective in detecting SA in the case of binary decision-making, whereas multiple signals were good for multi-class detection. In addition, some machine learning methods were superior to the other classification methods for SA detection using respiratory and oximetry signals. To deal with the respiratory and oximetry signals, a good choice of classification method as well as the consideration of associated factors would result in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.

  4. Reliability of a four-column classification for tibial plateau fractures.

    PubMed

    Martínez-Rondanelli, Alfredo; Escobar-González, Sara Sofía; Henao-Alzate, Alejandro; Martínez-Cano, Juan Pablo

    2017-09-01

    A four-column classification system offers a different way of evaluating tibial plateau fractures. The aim of this study is to compare the intra-observer and inter-observer reliability between the four-column and classic classifications. This is a reliability study, which included patients presenting with tibial plateau fractures between January 2013 and September 2015 in a level-1 trauma centre. Four orthopaedic surgeons blindly classified each fracture according to four different classifications: AO, Schatzker, Duparc and four-column. Kappa, intra-observer and inter-observer concordance were calculated for the reliability analysis. Forty-nine patients were included. The mean age was 39 ± 14.2 years, with no gender predominance (men: 51%; women: 49%), and 67% of the fractures included at least one of the posterior columns. The intra-observer and inter-observer concordance were calculated for each classification: four-column (84%/79%), Schatzker (60%/71%), AO (50%/59%) and Duparc (48%/58%), with a statistically significant difference among them (p = 0.001/p = 0.003). Kappa coefficients for intra-observer and inter-observer evaluations: Schatzker 0.48/0.39, four-column 0.61/0.34, Duparc 0.37/0.23, and AO 0.34/0.11. The proposed four-column classification showed the highest intra- and inter-observer agreement. When taking into account the agreement that occurs by chance, the Schatzker classification showed the highest inter-observer kappa, but again the four-column had the highest intra-observer kappa value. The proposed classification is a more inclusive classification for the posteromedial and posterolateral fractures. We suggest, therefore, that it be used in addition to one of the classic classifications in order to better understand the fracture pattern, as it allows more attention to be paid to the posterior columns, it improves the surgical planning and allows the surgical approach to be chosen more accurately.

  5. New workflow for classification of genetic variants' pathogenicity applied to hereditary recurrent fevers by the International Study Group for Systemic Autoinflammatory Diseases (INSAID).

    PubMed

    Van Gijn, Marielle E; Ceccherini, Isabella; Shinar, Yael; Carbo, Ellen C; Slofstra, Mariska; Arostegui, Juan I; Sarrabay, Guillaume; Rowczenio, Dorota; Omoyımnı, Ebun; Balci-Peynircioglu, Banu; Hoffman, Hal M; Milhavet, Florian; Swertz, Morris A; Touitou, Isabelle

    2018-03-29

    Hereditary recurrent fevers (HRFs) are rare inflammatory diseases sharing similar clinical symptoms and effectively treated with anti-inflammatory biological drugs. Accurate diagnosis of HRF relies heavily on genetic testing. This study aimed to obtain an experts' consensus on the clinical significance of gene variants in four well-known HRF genes: MEFV, TNFRSF1A, NLRP3 and MVK. We configured a MOLGENIS web platform to share and analyse pathogenicity classifications of the variants and to manage a consensus-based classification process. Four experts in HRF genetics submitted independent classifications of 858 variants. Classifications were driven to consensus by recruiting four more expert opinions and by targeting discordant classifications in five iterative rounds. Consensus classification was reached for 804/858 variants (94%). None of the unsolved variants (6%) remained with opposite classifications (eg, pathogenic vs benign). New mutational hotspots were found in all genes. We noted a lower pathogenic variant load and a higher fraction of variants with unknown or unsolved clinical significance in the MEFV gene. Applying a consensus-driven process on the pathogenicity assessment of experts yielded rapid classification of almost all variants of four HRF genes. The high-throughput database will profoundly assist clinicians and geneticists in the diagnosis of HRFs. The configured MOLGENIS platform and consensus evolution protocol are usable for assembly of other variant pathogenicity databases. The MOLGENIS software is available for reuse at http://github.com/molgenis/molgenis; the specific HRF configuration is available at http://molgenis.org/said/. The HRF pathogenicity classifications will be published on the INFEVERS database at https://fmf.igh.cnrs.fr/ISSAID/infevers/. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Validation of accelerometer cut points in toddlers with and without cerebral palsy.

    PubMed

    Oftedal, Stina; Bell, Kristie L; Davies, Peter S W; Ware, Robert S; Boyd, Roslyn N

    2014-09-01

    The purpose of this study was to validate uni- and triaxial ActiGraph cut points for sedentary time in toddlers with cerebral palsy (CP) and typically developing children (TDC). Children (n = 103, 61 boys, mean age = 2 yr, SD = 6 months, range = 1 yr 6 months-3 yr) were divided into calibration (n = 65) and validation (n = 38) samples with separate analyses for TDC (n = 28) and ambulant (Gross Motor Function Classification System I-III, n = 51) and nonambulant (Gross Motor Function Classification System IV-V, n = 25) children with CP. An ActiGraph was worn during a videotaped assessment. Behavior was coded as sedentary or nonsedentary. Receiver operating characteristic-area under the curve analysis determined the classification accuracy of accelerometer data. Predictive validity was determined using the Bland-Altman analysis. Classification accuracy for uniaxial data was fair for the ambulatory CP and TDC group but poor for the nonambulatory CP group. Triaxial data showed good classification accuracy for all groups. The uniaxial ambulatory CP and TDC cut points significantly overestimated sedentary time (bias = -10.5%, 95% limits of agreement [LoA] = -30.2% to 9.1%; bias = -17.3%, 95% LoA = -44.3% to 8.3%). The triaxial ambulatory and nonambulatory CP and TDC cut points provided accurate group-level measures of sedentary time (bias = -1.5%, 95% LoA = -20% to 16.8%; bias = 2.1%, 95% LoA = -17.3% to 21.5%; bias = -5.1%, 95% LoA = -27.5% to 16.1%). Triaxial accelerometers provide useful group-level measures of sedentary time in children with CP across the spectrum of functional abilities and TDC. Uniaxial cut points are not recommended.
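    The Bland-Altman summary used above reduces to a bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD of the differences) between the accelerometer-derived and video-coded percentages of sedentary time. The paired values in the sketch are synthetic placeholders.

```python
# Sketch of a Bland-Altman bias and 95% limits-of-agreement computation for
# paired percentage-of-sedentary-time estimates.
import numpy as np

rng = np.random.default_rng(8)
video = rng.uniform(20, 80, size=38)                    # % sedentary (criterion)
accel = video + rng.normal(loc=-1.5, scale=9.0, size=38)

diff = accel - video
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print("bias: %.1f%%, 95%% LoA: %.1f%% to %.1f%%"
      % (bias, bias - loa, bias + loa))
```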

  7. An accurate sleep stages classification system using a new class of optimally time-frequency localized three-band wavelet filter bank.

    PubMed

    Sharma, Manish; Goyal, Deepanshu; Achuth, P V; Acharya, U Rajendra

    2018-07-01

    Sleep-related disorders diminish quality of life. Sleep scoring or sleep staging is the process of classifying various sleep stages, which helps to assess the quality of sleep. The identification of sleep stages using electroencephalogram (EEG) signals is an arduous task. Just by looking at an EEG signal, one cannot determine the sleep stages precisely. Sleep specialists may make errors in identifying sleep stages by visual inspection. To mitigate erroneous identification and to reduce the burden on doctors, a computer-aided EEG-based system can be deployed in hospitals to help identify sleep stages correctly. Several automated systems based on the analysis of polysomnographic (PSG) signals have been proposed. A few sleep stage scoring systems using EEG signals have also been proposed. However, there is still a need for a robust and accurate portable system developed using a large dataset. In this study, we have developed a new single-channel EEG based sleep-stage identification system using a novel set of wavelet-based features extracted from a large EEG dataset. We employed a novel three-band time-frequency localized (TBTFL) wavelet filter bank (FB). The EEG signals are decomposed using three-level wavelet decomposition, yielding seven sub-bands (SBs). This is followed by the computation of discriminating features, namely log-energy (LE), signal fractal dimension (SFD), and signal sample entropy (SSE), from all seven SBs. The extracted features are ranked and fed to the support vector machine (SVM) and other supervised learning classifiers. In this study, we have considered five different classification problems (CPs): two-class (CP-1), three-class (CP-2), four-class (CP-3), five-class (CP-4) and six-class (CP-5). The proposed system yielded accuracies of 98.3%, 93.9%, 92.1%, 91.7%, and 91.5% for CP-1 to CP-5, respectively, using the 10-fold cross validation (CV) technique. Copyright © 2018 Elsevier Ltd. All rights reserved.
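    The per-sub-band feature extraction can be sketched with a standard dyadic wavelet decomposition as a stand-in for the paper's custom three-band filter bank, computing only the log-energy feature. The EEG epoch, sampling rate and wavelet choice below are assumptions.

```python
# Sketch of sub-band log-energy features from a wavelet decomposition of one
# EEG epoch (synthetic noise here); a dyadic PyWavelets decomposition stands
# in for the paper's three-band filter bank.
import numpy as np
import pywt

rng = np.random.default_rng(9)
epoch = rng.normal(size=3000)                  # one 30-s EEG epoch at 100 Hz

coeffs = pywt.wavedec(epoch, "db4", level=3)   # 4 sub-bands (A3, D3, D2, D1)
log_energy = [float(np.log(np.sum(c ** 2))) for c in coeffs]
print("log-energy per sub-band:", np.round(log_energy, 2))
# These per-sub-band features would then be fed to an SVM for stage labels.
```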

  8. Exhibits Recognition System for Combining Online Services and Offline Services

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Jianbo; Zhang, Yuan; Wu, Xiaoyu

    2017-10-01

    In order to achieve more convenient and accurate digital museum navigation, we have developed a real-time, online-to-offline museum exhibits recognition system using an image recognition method based on deep learning. In this paper, the client and server of the system are separated and connected through HTTP. Firstly, using the client app on an Android mobile phone, the user can take pictures and upload them to the server. Secondly, the features of the picture are extracted using the deep learning network on the server. With the help of these features, the pictures the user uploaded are classified with a well-trained SVM. Finally, the classification results are sent to the client and the detailed exhibit introduction corresponding to the classification result is shown in the client app. Experimental results demonstrate that the recognition accuracy is close to 100% and the computing time from image upload to display of the exhibit information is less than 1 s. By means of the exhibit image recognition algorithm, our exhibits recognition system can deliver detailed online exhibit information to the user in the offline exhibition hall, so as to achieve better digital navigation.

  9. The comprehensiveness of the ESHRE/ESGE classification of female genital tract congenital anomalies: a systematic review of cases not classified by the AFS system.

    PubMed

    Di Spiezio Sardo, A; Campo, R; Gordts, S; Spinelli, M; Cosimato, C; Tanos, V; Brucker, S; Li, T C; Gergolet, M; De Angelis, C; Gianaroli, L; Grimbizis, G

    2015-05-01

    How comprehensive is the recently published European Society of Human Reproduction and Embryology (ESHRE)/European Society for Gynaecological Endoscopy (ESGE) classification system of female genital anomalies? The ESHRE/ESGE classification provides a comprehensive description and categorization of almost all of the currently known anomalies that could not be classified properly with the American Fertility Society (AFS) system. Until now, the more accepted classification system, namely that of the AFS, is associated with serious limitations in effective categorization of female genital anomalies. Many cases published in the literature could not be properly classified using the AFS system, yet a clear and accurate classification is a prerequisite for treatment. The CONUTA (CONgenital UTerine Anomalies) ESHRE/ESGE group conducted a systematic review of the literature to examine if those types of anomalies that could not be properly classified with the AFS system could be effectively classified with the use of the new ESHRE/ESGE system. An electronic literature search through Medline, Embase and Cochrane library was carried out from January 1988 to January 2014. Three participants independently screened, selected articles of potential interest and finally extracted data from all the included studies. Any disagreement was discussed and resolved after consultation with a fourth reviewer and the results were assessed independently and approved by all members of the CONUTA group. Among the 143 articles assessed in detail, 120 were finally selected reporting 140 cases that could not properly fit into a specific class of the AFS system. Those 140 cases were clustered in 39 different types of anomalies. The congenital anomaly involved a single organ in 12 (30.8%) out of the 39 types of anomalies, while multiple organs and/or segments of Müllerian ducts (complex anomaly) were involved in 27 (69.2%) types. Uterus was the organ most frequently involved (30/39: 76.9%), followed by cervix (26/39: 66.7%) and vagina (23/39: 59%). In all 39 types, the ESHRE/ESGE classification system provided a comprehensive description of each single or complex anomaly. A precise categorization was reached in 38 out of 39 types studied. Only one case of a bizarre uterine anomaly, with no clear embryological defect, could not be categorized and thus was placed in Class 6 (un-classified) of the ESHRE/ESGE system. The review of the literature was thorough but we cannot rule out the possibility that other defects exist which will also require testing in the new ESHRE/ESGE system. These anomalies, however, must be rare. The comprehensiveness of the ESHRE/ESGE classification adds objective scientific validity to its use. This may, therefore, promote its further dissemination and acceptance, which will have a positive outcome in clinical care and research. None. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.

  10. Diagnosis of Helicobacter pylori-related chronic gastritis, gastric adenoma and early gastric cancer by magnifying endoscopy.

    PubMed

    Soma, Nei

    2016-10-01

    Evaluating the prevalence and severity of gastritis by endoscopy is useful for estimating the risk of gastric cancer (GC). Moreover, understanding the endoscopic appearances of gastritis is important for diagnosing GC, because superficial mucosal lesions mimicking gastritis (gastritis-like lesions) are quite difficult to detect even with optimum preparation and the best technique, and in such cases tissue biopsy is often not very accurate for the diagnosis of gastric epithelial neoplasia. Magnifying endoscopy is a highly accurate technique for the detection of early gastric cancer (EGC). Recent reports have described various novel endoscopic markers which, when visualized by magnifying endoscopy with an image-enhanced system (ME-IEE), can predict specific histopathological findings. Using ME-IEE with the vessels and surface classification system (VSCS) may offer excellent diagnostic performance with high confidence and good reproducibility for endoscopists if performed under consistent conditions, including observation under maximal magnification. The aim of this review was to discuss how to identify high-risk groups for GC by endoscopy, how to detect signs of suspicious lesions effectively by conventional white light imaging (C-WLI) or chromoendoscopy (CE), and how to characterize suspicious lesions using ME-IEE with the criteria and classification of EGC based upon the VSCS. © 2016 Chinese Medical Association Shanghai Branch, Chinese Society of Gastroenterology, Renji Hospital Affiliated to Shanghai Jiaotong University School of Medicine and John Wiley & Sons Australia, Ltd.

  11. An Accurate Direction Finding Scheme Using Virtual Antenna Array via Smartphones.

    PubMed

    Wang, Xiaopu; Xiong, Yan; Huang, Wenchao

    2016-10-29

    With the development of localization technologies, researchers address indoor localization problems using diverse methods and equipment. Most localization techniques require either specialized devices or fingerprints, which are inconvenient for daily use. Therefore, we propose and implement an accurate, efficient and lightweight system for indoor direction finding using common smartphones and loudspeakers. Our method is derived from a key insight: by moving a smartphone in regular patterns, we can effectively emulate the sensitivity and functionality of a uniform antenna array to estimate the angle of arrival of the target signal. Specifically, a user only needs to hold the smartphone still in front of them and then rotate their body through 360° at an approximately constant velocity. After a few measurements, our system can provide accurate directional guidance and lead the user to their destination (ordinary loudspeakers preset in the indoor environment transmitting high-frequency acoustic signals). The major challenges in implementing our system are not only imitating a virtual antenna array with an ordinary smartphone but also overcoming the detection difficulties caused by complex indoor environments. In addition, we leverage the gyroscope of the smartphone to reduce the impact of changes in the user's motion pattern on the accuracy of our system. To mitigate the multipath effect, we leverage multiple signal classification (MUSIC) to calculate the direction of the target signal, and then design and deploy our system in various indoor scenes. Extensive comparative experiments show that our system is reliable under various circumstances.
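    The direction estimate rests on multiple signal classification (MUSIC). The sketch below is a minimal MUSIC implementation for a uniform linear array, shown only to illustrate the noise-subspace idea; the paper instead emulates a virtual array by rotating the phone, and the array geometry, spacing and noise level here are illustrative assumptions.

    ```python
    # Minimal MUSIC sketch for a uniform linear array (illustration of the subspace idea only).
    import numpy as np

    def music_spectrum(snapshots, n_sources, spacing_wavelengths=0.5,
                       angles_deg=np.arange(-90, 90, 0.5)):
        """snapshots: complex array of shape (n_sensors, n_snapshots)."""
        m, n = snapshots.shape
        R = snapshots @ snapshots.conj().T / n                 # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)                   # eigenvalues in ascending order
        En = eigvecs[:, : m - n_sources]                       # noise subspace
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(-2j * np.pi * spacing_wavelengths * np.arange(m) * np.sin(theta))
            spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
        return angles_deg, np.array(spectrum)

    # Toy example: one source at 30 degrees, 8 virtual sensors, 200 noisy snapshots.
    m, true_theta = 8, np.deg2rad(30)
    steering = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(true_theta))
    X = np.outer(steering, np.exp(2j * np.pi * np.random.rand(200))) + 0.1 * (
        np.random.randn(m, 200) + 1j * np.random.randn(m, 200))
    angles, p = music_spectrum(X, n_sources=1)
    print("estimated direction:", angles[np.argmax(p)])
    ```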

  12. A Robust Deep Model for Improved Classification of AD/MCI Patients

    PubMed Central

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-01-01

    Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
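    A minimal sketch of the inverted-dropout idea the paper builds on is given below; it is a generic NumPy illustration, not the authors' multi-task deep network, and the drop probability is an arbitrary choice.

    ```python
    # Minimal inverted-dropout sketch (NumPy). Randomly zeroing hidden units during training
    # discourages weight co-adaptation; rescaling keeps the expected activation unchanged.
    import numpy as np

    def dropout_forward(activations, drop_prob=0.5, training=True):
        if not training or drop_prob == 0.0:
            return activations
        mask = (np.random.rand(*activations.shape) >= drop_prob) / (1.0 - drop_prob)
        return activations * mask

    h = np.random.randn(4, 8)              # a batch of hidden activations (placeholder)
    print(dropout_forward(h, drop_prob=0.5))
    ```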

  13. A novel Multi-Agent Ada-Boost algorithm for predicting protein structural class with the information of protein secondary structure.

    PubMed

    Fan, Ming; Zheng, Bin; Li, Lihua

    2015-10-01

    Knowledge of the structural class of a given protein is important for understanding its folding patterns. Although much effort has been made, predicting the structural class of a protein solely from its sequence remains a challenging problem. Feature extraction and classification are the two main problems in this prediction task. In this research, we extended our earlier work regarding these two aspects. For protein feature extraction, we proposed a scheme that calculates word frequency and word position from sequences of amino acids, reduced amino acids, and secondary structure. For accurate classification of protein structural class, we developed a novel Multi-Agent Ada-Boost (MA-Ada) method by integrating features of a multi-agent system into the Ada-Boost algorithm. Extensive experiments were conducted to test and compare the proposed method on four low-homology benchmark datasets. The results showed classification accuracies of 88.5%, 96.0%, 88.4%, and 85.5%, respectively, which are much better than those of existing methods. The source code and dataset are available on request.

  14. Cloud Type Classification (cldtype) Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, Donna; Shi, Yan; Lim, K-S

    The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal resolution and vertical resolution for cldtype are 1 minute and 30 m, respectively, and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.

  15. A machine learning approach for viral genome classification.

    PubMed

    Remita, Mohamed Amine; Halioui, Ahmed; Malick Diouara, Abou Abdallah; Daigle, Bruno; Kiani, Golrokh; Diallo, Abdoulaye Baniré

    2017-04-11

    Advances in cloning and sequencing technology are yielding a massive number of viral genomes. The classification and annotation of these genomes constitute important assets in the discovery of genomic variability, taxonomic characteristics and disease mechanisms. Existing classification methods are often designed for specific, well-studied families of viruses. Thus, viral comparative genomic studies could benefit from more generic, fast and accurate tools for classifying and typing newly sequenced strains of diverse virus families. Here, we introduce a virus classification platform, CASTOR, based on machine learning methods. CASTOR is inspired by a well-known technique in molecular biology: restriction fragment length polymorphism (RFLP). It simulates, in silico, the restriction digestion of genomic material by different enzymes into fragments. It uses two metrics to construct feature vectors for machine learning algorithms in the classification step. We benchmark CASTOR on the classification of distinct datasets of human papillomaviruses (HPV), hepatitis B viruses (HBV) and human immunodeficiency viruses type 1 (HIV-1). Results reveal true positive rates of 99%, 99% and 98% for HPV Alpha species, HBV genotyping and HIV-1 M subtyping, respectively. Furthermore, CASTOR shows competitive performance compared to well-known HIV-1-specific classifiers (REGA and COMET) on whole genomes and pol fragments. The performance, genericity and robustness of CASTOR could permit novel and accurate large-scale virus studies. The CASTOR web platform provides open-access, collaborative and reproducible machine learning classifiers. CASTOR can be accessed at http://castor.bioinfo.uqam.ca .
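    The sketch below illustrates the in-silico RFLP idea behind CASTOR: each genome is cut at enzyme recognition sites and the fragment-length profile becomes a feature vector for a classifier. The enzyme set, length bins and random-forest classifier are illustrative assumptions, not CASTOR's exact configuration.

    ```python
    # Hedged sketch of in-silico restriction digestion turned into classification features.
    import re
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    ENZYME_SITES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT", "BamHI": "GGATCC"}  # illustrative
    BINS = [0, 200, 500, 1000, 2000, 5000, 10**9]       # fragment-length bins in bp (illustrative)

    def digest(sequence: str, site: str) -> list[int]:
        """Return fragment lengths after cutting at every occurrence of the recognition site."""
        cuts = [m.start() for m in re.finditer(site, sequence.upper())]
        edges = [0] + cuts + [len(sequence)]
        return [b - a for a, b in zip(edges[:-1], edges[1:])]

    def rflp_features(sequence: str) -> np.ndarray:
        feats = []
        for site in ENZYME_SITES.values():
            hist, _ = np.histogram(digest(sequence, site), bins=BINS)
            feats.extend(hist)
        return np.array(feats, dtype=float)

    # Toy usage: genomes and type labels would come from curated reference sets (e.g. HPV types).
    genomes = ["ATGAATTCCGGA" * 50, "ATGGGATCCTTA" * 50]
    labels = ["type_A", "type_B"]
    X = np.stack([rflp_features(g) for g in genomes])
    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    ```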

  16. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    PubMed

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media have attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts, instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict personality from tweets taken from three datasets available in the literature, and achieved approximately 83% prediction accuracy, with some of the personality traits presenting better individual classification rates than others. Copyright © 2014 Elsevier Ltd. All rights reserved.
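    The binary-relevance transformation described above can be sketched as follows, with one binary classifier per Big Five trait. The feature matrix, labels and Naïve Bayes variant are placeholders; the study's actual meta-attribute extraction and semi-supervised training are not reproduced here.

    ```python
    # Minimal binary-relevance sketch: five independent binary classifiers, one per trait.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

    X = np.random.rand(200, 20)                       # meta-attribute vectors for 200 users (placeholder)
    Y = np.random.randint(0, 2, size=(200, 5))        # one binary label per trait (placeholder)

    models = {}
    for i, trait in enumerate(TRAITS):
        models[trait] = GaussianNB().fit(X, Y[:, i])  # one binary classifier per trait

    new_user = np.random.rand(1, 20)
    prediction = {trait: int(m.predict(new_user)[0]) for trait, m in models.items()}
    print(prediction)
    ```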

  17. Identification of an Efficient Gene Expression Panel for Glioblastoma Classification

    PubMed Central

    Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni

    2016-01-01

    We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
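    A hedged sketch of genetic-algorithm feature selection with a random-forest fitness function, in the spirit of GARF, is shown below; the population size, operators and scoring are illustrative choices rather than the published algorithm's exact settings.

    ```python
    # Genetic-algorithm feature selection with a random-forest fitness function (illustrative).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def ga_select(X, y, n_keep=48, pop_size=30, generations=20, mutation_rate=0.01, seed=0):
        rng = np.random.default_rng(seed)
        n_genes = X.shape[1]
        pop = rng.random((pop_size, n_genes)) < (n_keep / n_genes)   # random binary feature masks

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        for _ in range(generations):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_genes)
                child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
                child ^= rng.random(n_genes) < mutation_rate          # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(m) for m in pop])]
        return np.flatnonzero(best)

    # Usage (placeholder names): selected = ga_select(expression_matrix, subtype_labels, n_keep=48)
    ```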

  18. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

    With the rapid development of modern medical imaging technology, medical image classification has become increasingly important for medical diagnosis and treatment. To address the problems of polysemy and synonymy, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. In this paper we carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model. This approach further improves the accuracy of bag-of-words-based classification. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.
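    The sketch below shows a plain visual bag-of-words pipeline (descriptor extraction, codebook learning, word histograms) without the PLSA topic layer; the ORB descriptors and codebook size are illustrative assumptions, not the configuration used in the paper.

    ```python
    # Visual bag-of-words sketch: quantise local descriptors against a learned codebook and
    # represent each image as a normalised histogram of "visual words".
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def orb_descriptors(image_path):
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, desc = cv2.ORB_create().detectAndCompute(img, None)
        return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

    def build_codebook(image_paths, n_words=100):
        all_desc = np.vstack([orb_descriptors(p) for p in image_paths]).astype(np.float32)
        return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

    def bow_histogram(image_path, codebook):
        desc = orb_descriptors(image_path).astype(np.float32)
        if len(desc) == 0:
            return np.zeros(codebook.n_clusters)
        words = codebook.predict(desc)
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / hist.sum()                      # normalised word histogram (the feature vector)
    ```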

  19. Defect detection and classification of machined surfaces under multiple illuminant directions

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Weng, Xin; Swonger, C. W.; Ni, Jun

    2010-08-01

    Continuous improvement of product quality is crucial to the successful and competitive automotive manufacturing industry in the 21st century. The presence of surface porosity located on flat machined surfaces such as cylinder heads/blocks and transmission cases may allow leaks of coolant, oil, or combustion gas between critical mating surfaces, thus causing damage to the engine or transmission. Therefore, 100% inline inspection plays an important role in improving product quality. Although image processing and machine vision techniques have been applied to machined surface inspection and have improved considerably over the past 20 years, in today's automotive industry surface porosity inspection is still done by skilled humans, which is costly, tedious, time consuming and not capable of reliably detecting small defects. In our study, an automated defect detection and classification system for flat machined surfaces has been designed and constructed. In this paper, the importance of the illuminant direction in a machine vision system is first emphasized, and then a surface defect inspection system under multiple directional illuminations is designed and constructed. After that, image processing algorithms were developed to detect and classify five types of 2D or 3D surface defects (pore, 2D blemish, residue dirt, scratch, and gouge). The image processing steps include: (1) image acquisition and contrast enhancement, (2) defect segmentation and feature extraction, and (3) defect classification. An artificial machined surface and an actual automotive part (a cylinder head surface) were tested; as a result, microscopic surface defects can be accurately detected and assigned to a surface defect class. The cycle time of this system is sufficiently fast that implementation of 100% inline inspection is feasible. The field of view of this system is 150 mm × 225 mm, and surfaces larger than the field of view can be stitched together in software.

  20. The value of CT and MRI in the classification and surgical decision-making among spine surgeons in thoracolumbar spinal injuries.

    PubMed

    Rajasekaran, Shanmuganathan; Vaccaro, Alexander R; Kanna, Rishi Mugesh; Schroeder, Gregory D; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Maheshwaran, Anupama; Kandziora, Frank

    2017-05-01

    Although imaging has a major role in the evaluation and management of thoracolumbar spinal trauma by spine surgeons, the exact role of computed tomography (CT) and magnetic resonance imaging (MRI) in addition to radiographs for fracture classification and surgical decision-making is unclear. Spine surgeons (n = 41) from around the world classified 30 thoracolumbar fractures. The cases were presented in a three-step approach: first plain radiographs, followed by CT and MRI images. Surgeons were asked to classify according to the AOSpine classification system and choose management in each of the three steps. Surgeons correctly classified 43.4 % of fractures with plain radiographs alone; after additionally evaluating CT and MRI images, this percentage increased by a further 18.2 % and 2.2 %, respectively. AO type A fractures were identified in 51.7 % of fractures with radiographs, while the number of type B fractures increased after CT and MRI. The number of type C fractures diagnosed was constant across the three steps. Agreement between radiographs and CT was fair for A-type (k = 0.31) and poor for B-type (k = 0.19), but it was excellent between CT and MRI (k > 0.87). CT and MRI had similar sensitivity in identifying fracture subtypes except that MRI had a higher sensitivity (56.5 %) for B2 fractures (p < 0.001). The need for surgical fixation was deemed present in 72 % based on radiographs alone and increased to 81.7 % with CT images (p < 0.0001). The assessment for need of surgery did not change after an MRI (p = 0.77). For accurate classification, radiographs alone were insufficient except for C-type injuries. CT is mandatory for accurately classifying thoracolumbar fractures. Though MRI did confer a modest gain in sensitivity in B2 injuries, the study does not support the need for routine MRI in patients for classification, assessing instability or need for surgery.

  1. Bilateral weighted radiographs are required for accurate classification of acromioclavicular separation: an observational study of 59 cases.

    PubMed

    Ibrahim, E F; Forrest, N P; Forester, A

    2015-10-01

    Misinterpretation of the Rockwood classification system for acromioclavicular joint (ACJ) separations has resulted in a trend towards using unilateral radiographs for grading. Further, the use of weighted views to 'unmask' a grade III injury has fallen out of favour. Recent evidence suggests that many radiographic grade III injuries represent only a partial injury to the stabilising ligaments. This study aimed to determine (1) whether accurate classification is possible on unilateral radiographs and (2) the efficacy of weighted bilateral radiographs in unmasking higher-grade injuries. Complete bilateral non-weighted and weighted sets of radiographs for patients presenting with an acromioclavicular separation over a 10-year period were analysed retrospectively, and they were graded I-VI according to Rockwood's criteria. Comparison was made between grading based on (1) a single antero-posterior (AP) view of the injured side, (2) bilateral non-weighted views and (3) bilateral weighted views. Radiographic measurements for cases that changed grade after weighted views were statistically compared to see if this could have been predicted beforehand. Fifty-nine sets of radiographs on 59 patients (48 male, mean age of 33 years) were included. Compared with unilateral radiographs, non-weighted bilateral comparison films resulted in a grade change for 44 patients (74.5%). Twenty-eight of 56 patients initially graded as I, II or III were upgraded to grade V and two of three initial grade V patients were downgraded to grade III. The addition of a weighted view further upgraded 10 patients to grade V. No grade II injury was changed to grade III and no injury of any severity was downgraded by a weighted view. Grade III injuries upgraded on weighted views had a significantly greater baseline median percentage coracoclavicular distance increase than those that were not upgraded (80.7% vs. 55.4%, p=0.015). However, no cut-off point for this value could be identified to predict an upgrade. The accurate classification of ACJ separation requires weighted bilateral comparative views. Attempts to predict grade on a single AP radiograph result in a gross underestimation of severity. The value of bilateral weighted views is to 'unmask' a grade V injury, and it is recommended as a first-line investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Forest/non-forest stratification in Georgia with Landsat Thematic Mapper data

    Treesearch

    William H. Cooke

    2000-01-01

    Geographically accurate Forest Inventory and Analysis (FIA) data may be useful for training, classification, and accuracy assessment of Landsat Thematic Mapper (TM) data. Minimum expectation for maps derived from Landsat data is accurate discrimination of several land cover classes. Landsat TM costs have decreased dramatically, but acquiring cloud-free scenes at...

  3. A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images.

    PubMed

    Leontidis, Georgios

    2017-11-01

    The human retina is a diverse and important tissue, widely studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, feature extraction, statistical analysis and classification models. Linear mixed models are utilised for the statistical inferences, alongside elastic-net logistic regression, the Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also to have good discriminative potential. The classification systems yield promising results, with area under the curve values ranging from 0.821 to 0.968 across the four investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology.

    PubMed

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-09-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs.

  5. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology

    PubMed Central

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-01-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs. PMID:26517655

  6. Classification of nasolabial folds in Asians and the corresponding surgical approaches: By Shanghai 9th People's Hospital.

    PubMed

    Zhang, Lu; Tang, Meng-Yao; Jin, Rong; Zhang, Ying; Shi, Yao-Ming; Sun, Bao-Shan; Zhang, Yu-Guang

    2015-07-01

    One of the earliest signs of aging appears in the nasolabial fold, which is a special anatomical region that requires many factors for comprehensive assessment. Hence, it is inadequate to rely on a single index to facilitate the classification of nasolabial folds. In clinical practice, we have observed that traditional filling treatments provide little improvement for some patients, which prompted us to seek a more specific and scientific classification standard and assessment system. A total of 900 patients who sought facial rejuvenation treatment at Shanghai 9th People's Hospital were enrolled in this study. We observed the different nasolabial fold traits for different age groups and in different states, and the results were compared with the Wrinkle Severity Rating Scale (WSRS). We summarized the data, presented a classification scheme, and proposed a selection of treatment options. Consideration of the anatomical and histological features of nasolabial folds allowed us to divide nasolabial folds into five types, namely the skin type, fat pad type, muscular type, bone retrusion type, and hybrid type. Because different types of nasolabial folds require different treatments, it is crucial to accurately assess and correctly classify the conditions. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Pulsed terahertz imaging of breast cancer in freshly excised murine tumors

    NASA Astrophysics Data System (ADS)

    Bowman, Tyler; Chavez, Tanny; Khan, Kamrul; Wu, Jingxian; Chakraborty, Avishek; Rajaram, Narasimhan; Bailey, Keith; El-Shenawee, Magda

    2018-02-01

    This paper investigates terahertz (THz) imaging and classification of freshly excised murine xenograft breast cancer tumors. These tumors are grown via injection of E0771 breast adenocarcinoma cells into the flank of mice maintained on high-fat diet. Within 1 h of excision, the tumor and adjacent tissues are imaged using a pulsed THz system in the reflection mode. The THz images are classified using a statistical Bayesian mixture model with unsupervised and supervised approaches. Correlation with digitized pathology images is conducted using classification images assigned by a modal class decision rule. The corresponding receiver operating characteristic curves are obtained based on the classification results. A total of 13 tumor samples obtained from 9 tumors are investigated. The results show good correlation of THz images with pathology results in all samples of cancer and fat tissues. For tumor samples of cancer, fat, and muscle tissues, THz images show reasonable correlation with pathology where the primary challenge lies in the overlapping dielectric properties of cancer and muscle tissues. The use of a supervised regression approach shows improvement in the classification images although not consistently in all tissue regions. Advancing THz imaging of breast tumors from mice and the development of accurate statistical models will ultimately progress the technique for the assessment of human breast tumor margins.

  8. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

    Microscopic hyperspectral images provide a new way to examine blood cells. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by coupling a microscope with a hyperspectral imager and are then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is derived from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM) and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and locates cells more accurately.
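    For reference, a minimal extreme learning machine (ELM) is sketched below: a fixed random hidden layer followed by a closed-form least-squares output layer. The MRF spatial regularisation used in the paper is omitted, and the hidden-layer size and activation are illustrative.

    ```python
    # Minimal ELM classifier: random hidden weights, sigmoid activations, pseudoinverse readout.
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))      # sigmoid hidden activations

        def fit(self, X, y):
            self.classes_, y_idx = np.unique(y, return_inverse=True)
            T = np.eye(len(self.classes_))[y_idx]                     # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.beta = np.linalg.pinv(self._hidden(X)) @ T           # closed-form output weights
            return self

        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
    ```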

  9. The Value of Ensari's Proposal in Evaluating the Mucosal Pathology of Childhood Celiac Disease: Old Classification versus New Version.

    PubMed

    Güreşci, Servet; Hızlı, Samil; Simşek, Gülçin Güler

    2012-09-01

    Small intestinal biopsy remains the gold standard in diagnosing celiac disease (CD); however, the wide spectrum of histopathological states and differential diagnosis of CD is still a diagnostic problem for pathologists. Recently, Ensari reviewed the literature and proposed an update of the histopathological diagnosis and classification for CD. In this study, the histopathological materials of 54 children in whom CD was diagnosed at our hospital were reviewed to compare the previous Marsh and Modified Marsh-Oberhuber classifications with this new proposal. In this study, we show that the Ensari classification is as accurate as the Marsh and Modified Marsh classifications in describing the consecutive states of mucosal damage seen in CD. Ensari's classification is simple, practical and facilitative in diagnosing and subtyping of mucosal pathology of CD.

  10. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System.

    PubMed

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Detailed spatial location information is important for accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used to track personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, but most of them were developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e., indoor, outdoor-static, outdoor-walking, and in-vehicle travel). A sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e., roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Maximum speeds over averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access, were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed a considerable loss in sensitivity for the outdoor-static and outdoor-walking conditions. The random forests classification model can achieve high accuracy for the four major time-activity categories. The model also performed well with just GPS, road and tax parcel data. However, caution is warranted when generalizing a model developed from a small number of subjects to other populations.
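    A hedged sketch of this kind of classifier is shown below: a random forest over per-interval GPS-derived features, with names mirroring the important variables reported above. The toy feature values and the two-fold cross-validation are placeholders; the study used leave-one-fold-out and leave-one-subject-out evaluation on real GPS/accelerometer data.

    ```python
    # Random forest over per-interval GPS-derived features (illustrative placeholder data).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # One row per time interval; the columns are illustrative GPS-derived features.
    data = pd.DataFrame({
        "max_speed_5min_kmh": [0.2, 0.1, 4.8, 5.5, 42.0, 60.3],
        "max_speed_7min_kmh": [0.3, 0.2, 5.1, 5.9, 55.0, 72.1],
        "dist_to_highway_m":  [850, 900, 600, 420, 15, 10],
        "label": ["indoor", "indoor", "outdoor_walking", "outdoor_walking",
                  "in_vehicle", "in_vehicle"],
    })

    X, y = data.drop(columns="label"), data["label"]
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    print(cross_val_score(clf, X, y, cv=2))   # swap in leave-one-subject-out folds in practice
    ```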

  11. Improving galaxy morphologies for SDSS with Deep Learning

    NASA Astrophysics Data System (ADS)

    Domínguez Sánchez, H.; Huertas-Company, M.; Bernardi, M.; Tuccillo, D.; Fischer, J. L.

    2018-05-01

    We present a morphological catalogue for ˜670 000 galaxies in the Sloan Digital Sky Survey in two flavours: T-type, related to the Hubble sequence, and the Galaxy Zoo 2 (GZ2 hereafter) classification scheme. By combining accurate existing visual classification catalogues with machine learning, we provide the largest and most accurate morphological catalogue to date. The classifications are obtained with deep learning algorithms using convolutional neural networks (CNNs). We use two visual classification catalogues, GZ2 and Nair & Abraham (2010), for training CNNs with colour images in order to obtain T-types and a series of GZ2-type questions (disc/features, edge-on galaxies, bar signature, bulge prominence, roundness, and mergers). We also provide an additional probability enabling the separation of pure elliptical (E) galaxies from S0s, where the T-type model is not so efficient. For the T-type, our results show smaller offset and scatter than previous models trained with support vector machines. For the GZ2-type questions, our models have high accuracy (>97 per cent) and precision and recall values (>90 per cent) when applied to a test sample with the same characteristics as the one used for training. The catalogue is publicly released with the paper.

  12. Teacher, parent, and peer reports of early aggression as screening measures for long-term maladaptive outcomes: Who provides the most useful information?

    PubMed Central

    Clemans, Katherine H.; Musci, Rashelle J.; Leoutsakos, Jeannie-Marie S.; Ialongo, Nicholas S.

    2014-01-01

    Objective This study compared the ability of teacher, parent, and peer reports of aggressive behavior in early childhood to accurately classify cases of maladaptive outcomes in late adolescence and early adulthood. Method Weighted kappa analyses determined optimal cut points and relative classification accuracy among teacher, parent, and peer reports of aggression assessed for 691 students (54% male; 84% African American, 13% White) in the fall of first grade. Outcomes included antisocial personality, substance use, incarceration history, risky sexual behavior, and failure to graduate from high school on time. Results Peer reports were the most accurate classifier of all outcomes in the full sample. For most outcomes, the addition of teacher or parent reports did not improve overall classification accuracy once peer reports were accounted for. Additional gender-specific and adjusted kappa analyses supported the superior classification utility of the peer report measure. Conclusion The results suggest that peer reports provided the most useful classification information of the three aggression measures. Implications for targeted intervention efforts which use screening measures to identify at-risk children are discussed. PMID:24512126
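    The weighted-kappa screening analysis can be sketched as follows: agreement between a dichotomised early-aggression score and a later binary outcome is computed over candidate cut points. The scores, outcome labels and cut-point grid below are placeholders, and scikit-learn's cohen_kappa_score (which also supports linear or quadratic weights for ordinal scales) stands in for the study's exact procedure.

    ```python
    # Kappa-based selection of an optimal screening cut point (placeholder data).
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    peer_scores = np.array([0.1, 0.7, 0.4, 0.9, 0.2, 0.8])   # early peer-reported aggression (placeholder)
    outcome     = np.array([0,   1,   0,   1,   0,   1])      # later maladaptive outcome (placeholder)

    def kappa_at_cut(scores, outcome, cut):
        # Dichotomise the aggression score at the cut point, then measure agreement with the outcome.
        return cohen_kappa_score((scores >= cut).astype(int), outcome)

    cuts = np.linspace(peer_scores.min(), peer_scores.max(), 20)
    best = max(cuts, key=lambda c: kappa_at_cut(peer_scores, outcome, c))
    print(f"optimal cut point: {best:.2f}, kappa: {kappa_at_cut(peer_scores, outcome, best):.2f}")
    ```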

  13. Malignancy rates and diagnostic performance of the Bosniak classification for the diagnosis of cystic renal lesions in computed tomography - a systematic review and meta-analysis.

    PubMed

    Sevcenco, Sabina; Spick, Claudio; Helbich, Thomas H; Heinz, Gertraud; Shariat, Shahrokh F; Klingler, Hans C; Rauchenwald, Michael; Baltzer, Pascal A

    2017-06-01

    To systematically review the literature on the Bosniak classification system in CT to determine its diagnostic performance to diagnose malignant cystic lesions and the prevalence of malignancy in Bosniak categories. A predefined database search was performed from 1 January 1986 to 18 January 2016. Two independent reviewers extracted data on malignancy rates in Bosniak categories and several covariates using predefined criteria. Study quality was assessed using QUADAS-2. Meta-analysis included data pooling, subgroup analyses, meta-regression and investigation of publication bias. A total of 35 studies, which included 2,578 lesions, were investigated. Data on observer experience, inter-observer variation and technical CT standards were insufficiently reported. The pooled rate of malignancy increased from Bosniak I (3.2 %, 95 % CI 0-6.8, I² = 5 %) to Bosniak II (6 %, 95 % CI 2.7-9.3, I² = 32 %), IIF (6.7 %, 95 % CI 5-8.4, I² = 0 %), III (55.1 %, 95 % CI 45.7-64.5, I² = 89 %) and IV (91 %, 95 % CI 87.7-94.2, I² = 36). Several study design-related influences on malignancy rates and subsequent diagnostic performance indices were identified. The Bosniak classification is an accurate tool with which to stratify the risk of malignancy in renal cystic lesions. • The Bosniak classification can accurately rule out malignancy. • Specificity remains moderate at 74 % (95 % CI 64-82). • Follow-up examinations should be considered in Bosniak IIF and Bosniak II cysts. • Data on the influence of reader experience and inter-reader variability are insufficient. • Technical CT standards and publication year did not influence diagnostic performance.

  14. VizieR Online Data Catalog: G5 and later stars in a North Galactic Pole region (Upgren 1962)

    NASA Astrophysics Data System (ADS)

    Upgren, A. R., Jr.

    2015-11-01

    The catalog is an objective-prism survey of late-type stars in a region of 396 square degrees surrounding the north galactic pole. The objective-prism spectra employed have a dispersion of 58 nm/mm at H-γ and extend into the ultraviolet region. The catalog contains the magnitudes and spectral classes of 4027 stars of class G5 and later, complete to a limiting photographic magnitude of 13.0. The spectral classification of the stars is based on the Yerkes system. The catalog includes the serial numbers of the stars corresponding to the numbers on the identification charts in Upgren (1984), BD and HD numbers, B magnitudes, spectral classes, and letters designating the subregion and identification chart on which each star is located. This survey was undertaken to determine the space densities at varying distances from the galactic plane. Accurate separation of the surveyed stars of G5 and later into giants and dwarfs was achieved through the use of the UV region as well as conventional methods of classification. The resulting catalog of 4027 stars is probably complete over the region to a limiting photographic magnitude of 13.0. The region covered by the survey is the same as that discussed by Slettebak and Stock (1959) and is in the approximate range RA 11:30 to 13:00, Declination +25 to +50 (B1950.0). The catalog includes all M and Carbon stars previously published by Upgren (1960). For a discussion of the classification criteria, the combining of multiple classifications (each spectral image was classified twice), the determination of magnitudes, and additional details about the catalog, the source reference should be consulted. Corrections, accurate positions, more identifications, and remarks have been added in Nov. 2015 by B. Skiff in the file "positions.dat"; see the "History" section below for details. (3 data files).

  15. A survey of transposable element classification systems--a call for a fundamental update to meet the challenge of their diversity and complexity.

    PubMed

    Piégu, Benoît; Bire, Solenne; Arensburger, Peter; Bigot, Yves

    2015-05-01

    The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs) which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses by having several unrelated origins, (3) there might be at least 8 classes and 12 orders of TEs including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE). Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas.

    PubMed

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-08-01

    One objective of the present study was to compare the performance of the support vector machine (SVM)-based image classification technique with the maximum likelihood classification (MLC) technique for the rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 covering a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy than the MLC technique in classifying a heterogeneous landscape with a limited training dataset. SVM exceeded MLC in handling the difficult challenge of classifying features having near-similar reflectance on the mean signature plot: an improvement of over 11 % was observed in the classification of built-up area and an improvement of 24 % in the classification of surface water using SVM; similarly, the SVM technique improved the overall land use classification accuracy by almost 6 and 3 % for the Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment.

  17. Crowdsourced validation of a machine-learning classification system for autism and ADHD

    PubMed Central

    Duda, M; Haber, N; Daniels, J; Wall, D P

    2017-01-01

    Autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) together affect >10% of the children in the United States, but considerable behavioral overlaps between the two disorders can often complicate differential diagnosis. Currently, there is no screening test designed to differentiate between the two disorders, and with waiting times from initial suspicion to diagnosis upwards of a year, methods to quickly and accurately assess risk for these and other developmental disorders are desperately needed. In a previous study, we found that four machine-learning algorithms were able to accurately (area under the curve (AUC)>0.96) distinguish ASD from ADHD using only a small subset of items from the Social Responsiveness Scale (SRS). Here, we expand upon our prior work by including a novel crowdsourced data set of responses to our predefined top 15 SRS-derived questions from parents of children with ASD (n=248) or ADHD (n=174) to improve our model’s capability to generalize to new, ‘real-world’ data. By mixing these novel survey data with our initial archival sample (n=3417) and performing repeated cross-validation with subsampling, we created a classification algorithm that performs with AUC=0.89±0.01 using only 15 questions. PMID:28509905

  18. The revised burn diagram and its effect on diagnosis-related group coding.

    PubMed

    Turner, D G; Berger, N; Weiland, A P; Jordan, M H

    1996-01-01

    Diagnosis-related group (DRG) codes for burn injuries are defined by thresholds of the percentage of total body surface area and depth of burns, and by whether surgery, debridement, grafting, or both occurred. This prospective study was designed to determine whether periodic revisions of the burn diagram resulted in more accurate assignment of the International Classification of Diseases and DRG codes. The admission burn diagrams were revised after admission and after each surgical procedure. All areas grafted (deep second- and third-degree burns) were diagrammed as "third-degree," following the current convention that both are biologically the same and require grafting. The multiple diagrams from 82 charts were analyzed to determine the disparities in the percentage of total body surface area burn and the percentage of body surface area third-degree burn. The revised diagrams differed from the admission diagrams in 96.5% of the cases. In 77% of the cases, the revised diagram correctly depicted the percentage of body surface area third-degree burn as confirmed intraoperatively. In 7.3% of the cases, diagram revision changed the DRG code. Documenting wound evolution in this manner allows more accurate assignment of the International Classification of Diseases and DRG codes, assuring optimal reimbursement under the prospective payment system.

  19. Creating a behavioural classification module for acceleration data: using a captive surrogate for difficult to observe species.

    PubMed

    Campbell, Hamish A; Gao, Lianli; Bidder, Owen R; Hunter, Jane; Franklin, Craig E

    2013-12-15

    Distinguishing specific behavioural modes from data collected by animal-borne tri-axial accelerometers can be a time-consuming and subjective process. Data synthesis can be further inhibited when the tri-axial acceleration data cannot be paired with the corresponding behavioural mode through direct observation. Here, we explored the use of a tame surrogate (domestic dog) to build a behavioural classification module, and then used that module to accurately identify and quantify behavioural modes within acceleration collected from other individuals/species. Tri-axial acceleration data were recorded from a domestic dog whilst it was commanded to walk, run, sit, stand and lie-down. Through video synchronisation, each tri-axial acceleration sample was annotated with its associated behavioural mode; the feature vectors were extracted and used to build the classification module through the application of support vector machines (SVMs). This behavioural classification module was then used to identify and quantify the same behavioural modes in acceleration collected from a range of other species (alligator, badger, cheetah, dingo, echidna, kangaroo and wombat). Evaluation of the module performance, using a binary classification system, showed there was a high capacity (>90%) for behaviour recognition between individuals of the same species. Furthermore, a positive correlation existed between SVM capacity and the similarity of the individual's spinal length-to-height above the ground ratio (SL:SH) to that of the surrogate. The study describes how to build a behavioural classification module and highlights the value of using a surrogate for studying cryptic, rare or endangered species.
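    A hedged sketch of the surrogate-trained classifier is given below: summary features are computed over fixed windows of tri-axial acceleration and fed to an SVM. The window length, feature set and placeholder data are illustrative, not the exact feature vectors used in the study.

    ```python
    # Windowed tri-axial acceleration features + SVM (illustrative feature set and data).
    import numpy as np
    from sklearn.svm import SVC

    def window_features(xyz, window=50):
        """xyz: array of shape (n_samples, 3). Returns one feature row per non-overlapping window."""
        feats = []
        for start in range(0, len(xyz) - window + 1, window):
            w = xyz[start:start + window]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                         [np.linalg.norm(w, axis=1).mean()]]))  # mean acceleration magnitude
        return np.array(feats)

    # Train on annotated surrogate (dog) data, then apply to another individual or species.
    dog_acc = np.random.randn(500, 3)                                   # placeholder tri-axial recording
    dog_labels = np.repeat(["walk", "run", "sit", "stand", "lie"], 2)   # one label per 50-sample window
    clf = SVC(kernel="rbf").fit(window_features(dog_acc), dog_labels)
    other_animal = np.random.randn(200, 3)                              # placeholder recording to classify
    print(clf.predict(window_features(other_animal)))
    ```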

  20. Hyperspectral Imaging Analysis for the Classification of Soil Types and the Determination of Soil Total Nitrogen

    PubMed Central

    Jia, Shengyao; Li, Hongyang; Wang, Yanjie; Tong, Renyuan; Li, Qing

    2017-01-01

    Soil is an important environment for crop growth. Quick and accurate access to soil nutrient content information is a prerequisite for scientific fertilization. In this work, hyperspectral imaging (HSI) technology was applied for the classification of soil types and the measurement of soil total nitrogen (TN) content. A total of 183 soil samples collected from Shangyu City (People’s Republic of China) were scanned by a near-infrared hyperspectral imaging system with a wavelength range of 874–1734 nm. The soil samples belonged to three major soil types typical of this area, including paddy soil, red soil and seashore saline soil. The successive projections algorithm (SPA) was utilized to select effective wavelengths from the full spectrum. Texture features (energy, contrast, homogeneity and entropy) were extracted from the gray-scale images at the effective wavelengths. The support vector machine (SVM) and partial least squares regression (PLSR) methods were used to establish classification and prediction models, respectively. The results showed that by using the combined data sets of effective wavelengths and texture features for modelling, an optimal correct classification rate of 91.8% could be achieved. The soil samples were first classified, and then local models were established for soil TN according to soil type, which achieved better prediction results than the general models. The overall results indicated that hyperspectral imaging technology could be used for soil type classification and soil TN determination, and that data fusion combining spectral and image texture information showed advantages for the classification of soil types. PMID:28974005
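    The texture-feature step can be sketched as follows using grey-level co-occurrence matrices from scikit-image; the quantisation level, offsets and the entropy computation are illustrative assumptions rather than the paper's exact parameters.

    ```python
    # GLCM texture features (contrast, homogeneity, energy, entropy) for one band image.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(band_image, levels=32):
        """band_image: 2-D array scaled to [0, 1] for one effective wavelength."""
        quantised = np.clip((band_image * (levels - 1)).astype(np.uint8), 0, levels - 1)
        glcm = graycomatrix(quantised, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        feats = [graycoprops(glcm, prop).mean() for prop in ("contrast", "homogeneity", "energy")]
        p = glcm.mean(axis=(2, 3))                              # average over distances and angles
        entropy = -np.sum(p * np.log2(p + 1e-12))               # GLCM entropy
        return np.array(feats + [entropy])

    band = np.random.rand(64, 64)                               # placeholder band image
    print(glcm_features(band))
    ```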

  1. Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data

    NASA Astrophysics Data System (ADS)

    Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan

    2014-10-01

    The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in North-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region, which include: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and Kappa of 0.93; a 6% improvement in comparison with linear-polarization only classification. However, the time of acquisition is crucial. The larger biomass crops of canola and soybean were most accurately mapped, whereas the identification of oat and wheat were more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage based on the BBCH scale, and the HV backscatter and entropy.
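    For reference, the Cloude-Pottier parameters (entropy H, mean alpha angle and anisotropy A) can be computed per pixel from the eigendecomposition of the 3x3 polarimetric coherency matrix, as sketched below; forming and multi-looking the coherency matrices from the RADARSAT-2 data is assumed to have been done beforehand.

    ```python
    # Cloude-Pottier H / alpha / A from a single-pixel coherency matrix (illustrative sketch).
    import numpy as np

    def cloude_pottier(T):
        """T: complex Hermitian coherency matrix of shape (3, 3) for one pixel."""
        eigvals, eigvecs = np.linalg.eigh(T)
        eigvals = np.clip(eigvals[::-1].real, 1e-12, None)        # sort descending, keep positive
        eigvecs = eigvecs[:, ::-1]
        p = eigvals / eigvals.sum()                               # pseudo-probabilities
        H = -np.sum(p * np.log(p) / np.log(3))                    # polarimetric entropy (base 3)
        alpha = np.sum(p * np.degrees(np.arccos(np.abs(eigvecs[0, :]))))  # mean alpha angle (deg)
        A = (eigvals[1] - eigvals[2]) / (eigvals[1] + eigvals[2]) # anisotropy
        return H, alpha, A

    # Toy example with a random Hermitian positive semi-definite matrix standing in for T.
    M = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
    print(cloude_pottier(M @ M.conj().T))
    ```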

  2. An assessment of the effectiveness of a random forest classifier for land-cover classification

    NASA Astrophysics Data System (ADS)

    Rodriguez-Galiano, V. F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J. P.

    2012-01-01

    Land cover monitoring using remotely sensed data requires robust classification methods which allow for the accurate mapping of complex land cover and land use categories. Random forest (RF) is a powerful machine learning classifier that is relatively unknown in land remote sensing and has not been evaluated thoroughly by the remote sensing community compared to more conventional pattern recognition techniques. Key advantages of RF include its non-parametric nature, high classification accuracy, and capability to determine variable importance. However, the split rules for classification are unknown, and therefore RF can be considered a black-box type of classifier. RF provides an algorithm for estimating missing values and the flexibility to perform several types of data analysis, including regression, classification, survival analysis, and unsupervised learning. In this paper, the performance of the RF classifier for land cover classification of a complex area is explored. Evaluation was based on several criteria: mapping accuracy, and sensitivity to data set size and noise. Landsat-5 Thematic Mapper data captured in European spring and summer were used with auxiliary variables derived from a digital terrain model to classify 14 different land categories in the south of Spain. Results show that the RF algorithm yields accurate land cover classifications, with 92% overall accuracy and a kappa index of 0.92. RF is robust to training data reduction and noise, because significant differences in kappa values were only observed for data reduction and noise addition values greater than 50% and 20%, respectively. Additionally, the variables that RF identified as most important for classifying land cover coincided with expectations. A McNemar test indicates an overall better performance of the random forest model over a single decision tree at the 0.00001 significance level.

  3. Mechanistic Physiologically Based Pharmacokinetic Modeling of the Dissolution and Food Effect of a Biopharmaceutics Classification System IV Compound-The Venetoclax Story.

    PubMed

    Emami Riedmaier, Arian; Lindley, David J; Hall, Jeffrey A; Castleberry, Steven; Slade, Russell T; Stuart, Patricia; Carr, Robert A; Borchardt, Thomas B; Bow, Daniel A J; Nijsen, Marjoleen

    2018-01-01

    Venetoclax, a selective B-cell lymphoma-2 inhibitor, is a biopharmaceutics classification system class IV compound. The aim of this study was to develop a physiologically based pharmacokinetic (PBPK) model to mechanistically describe the absorption and disposition of an amorphous solid dispersion formulation of venetoclax in humans. A mechanistic PBPK model was developed incorporating measured amorphous solubility, dissolution, metabolism, and plasma protein binding. A middle-out approach was used to define permeability. Model predictions of oral venetoclax pharmacokinetics were verified against clinical studies of fed and fasted healthy volunteers, and against clinical drug interaction studies with a strong CYP3A inhibitor (ketoconazole) and inducer (rifampicin). Model verification demonstrated accurate prediction of the observed food effect following a low-fat diet. Ratios of predicted versus observed Cmax and area under the curve of venetoclax were within 0.8- to 1.25-fold of observed ratios for strong CYP3A inhibitor and inducer interactions, indicating that the venetoclax elimination pathway was correctly specified. The verified venetoclax PBPK model is one of the first examples mechanistically capturing the absorption, food effect, and exposure of an amorphous solid dispersion formulated compound. This model allows evaluation of untested drug-drug interactions, especially those primarily occurring in the intestine, and paves the way for future modeling of biopharmaceutics classification system IV compounds. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  4. Classification of hospital admissions into emergency and elective care: a machine learning approach.

    PubMed

    Krämer, Jonas; Schreyögg, Jonas; Busse, Reinhard

    2017-11-25

    Rising admissions from emergency departments (EDs) to hospitals are a primary concern for many healthcare systems. The issue of how to differentiate urgent admissions from non-urgent or even elective admissions is crucial. We aim to develop a model for classifying inpatient admissions based on a patient's primary diagnosis as either emergency care or elective care and predicting urgency as a numerical value. We use supervised machine learning techniques and train the model with physician-expert judgments. Our model is accurate (96%) and has a high area under the ROC curve (>.99). We provide the first comprehensive classification and urgency categorization for inpatient emergency and elective care. This model assigns urgency values to every relevant diagnosis in the ICD catalog, and these values are easily applicable to existing hospital data. Our findings may provide a basis for policy makers to create incentives for hospitals to reduce the number of inappropriate ED admissions.

  5. Weak scratch detection and defect classification methods for a large-aperture optical element

    NASA Astrophysics Data System (ADS)

    Tao, Xian; Xu, De; Zhang, Zheng-Tao; Zhang, Feng; Liu, Xi-Long; Zhang, Da-Peng

    2017-03-01

    Surface defects on optics cause failure of optical components and heavy losses to the optical system. Therefore, surface defects on optics must be carefully inspected. This paper proposes a coarse-to-fine detection strategy for weak scratches in complicated dark-field images. First, all possible scratches are detected based on bionic vision. Then, each possible scratch is precisely positioned and connected into a complete scratch using the line segment detector (LSD) and a priori knowledge. Finally, multiple scratches of various types can be detected in dark-field images. To classify defects and pollutants, a classification method based on GIST features is proposed. This paper uses many real dark-field images as experimental images. The results show that this method can detect multiple types of weak scratches in complex images and that defects can be correctly distinguished from interference. This method satisfies the real-time and accuracy requirements of surface defect detection.

  6. The limb movement analysis of rehabilitation exercises using wearable inertial sensors.

    PubMed

    Bingquan Huang; Giggins, Oonagh; Kechadi, Tahar; Caulfield, Brian

    2016-08-01

    Because home-based exercise programs are performed without the supervision of a therapist, inertial sensor based feedback systems which can accurately assess movement repetitions are urgently required. Owing to the synchronicity and degrees of freedom of limb motion, the signal of one movement may resemble that of another, or be mixed with other, imprecisely defined movements. Therefore, data and feature selection are important for movement analysis. This paper explores data and feature selection for the limb movement analysis of rehabilitation exercises. The results highlight that classification accuracy is very sensitive to the mounting location of the sensors. The results show that the use of 2 or 3 sensor units, the combination of acceleration and gyroscope data, and feature sets that combine the statistical feature set with another type of feature can significantly improve classification accuracy. The results also illustrate that acceleration data is more effective than gyroscope data for most of the movement analysis.

  7. Computed tomographic atlas for the new international lymph node map for lung cancer: A radiation oncologist perspective.

    PubMed

    Lynch, Rod; Pitson, Graham; Ball, David; Claude, Line; Sarrut, David

    2013-01-01

    To develop a reproducible definition for each mediastinal lymph node station based on the new TNM classification for lung cancer. This paper proposes an atlas using the new international lymph node map used in the seventh edition of the TNM classification for lung cancer. Four radiation oncologists and 1 diagnostic radiologist were involved in the project to put forward a reproducible radiologic description for the lung lymph node stations. The International Association for the Study of Lung Cancer lymph node definitions for stations 1 to 11 have been described and illustrated on axial computed tomographic scan images using a certified radiotherapy planning system. This atlas will assist both diagnostic radiologists and radiation oncologists in accurately defining the lymph node stations on computed tomographic scan in patients diagnosed with lung cancer. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  8. CNS embryonal tumours: WHO 2016 and beyond.

    PubMed

    Pickles, J C; Hawkins, C; Pietsch, T; Jacques, T S

    2018-02-01

    Embryonal tumours of the central nervous system (CNS) present a significant clinical challenge. Many of these neoplasms affect young children, have a very high mortality, and therapeutic strategies are often aggressive with poor long-term outcomes. There is a great need to accurately diagnose embryonal tumours, predict their outcome and adapt therapy to the individual patient's risk. For the first time, in 2016 the WHO classification took into account molecular characteristics for the diagnosis of CNS tumours. This integration of histological features with genetic information has significantly changed the diagnostic work-up and reporting of tumours of the CNS. However, this remains challenging in embryonal tumours because of their previously unaccounted-for tumour heterogeneity. We describe the recent revisions made to the 4th edition of the WHO classification of CNS tumours and review the main changes, while highlighting some of the more common diagnostic testing strategies. © 2017 British Neuropathological Society.

  9. Land use survey and mapping and water resources investigation in Korea

    NASA Technical Reports Server (NTRS)

    Choi, J. H.; Kim, W. I.; Son, D. S. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Satellite imagery is applicable to land use classification for small-scale land use mapping at scales smaller than 1:250,000. Land use mapping by satellite is more efficient and more cost-effective than land use mapping from conventional medium altitude aerial photographs. Six categories of level 1 land use classification are recognizable from MSS imagery. A hydrogeomorphological study of the Han River basin indicates that band 7 is useful for recognizing the soil and the weathered parts of bedrock. The morphological change of the main river is accurately recognized, and the drainage system in the area observed is easily classified because of the relatively simple rock type. Although direct hydrological characteristics are not obtained from the MSS imagery, indirect information, such as the permeability of the soil and the vegetation cover, is helpful in interpreting the hydrological aspects.

  10. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    PubMed

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined the primary features (diagnosis and cause of death) and secondary features (eg, symptoms) for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports in three sequential evaluations of 100 reports each. Moreover, we evaluated the contribution of VaeTM to case classification; an information retrieval-based approach was used for the identification of anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were standard text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated sensitivity and specificity for the classification of anaphylaxis cases based on the above three approaches. VaeTM performed best in extracting diagnosis, second level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the dedicated text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.

  11. Update on diabetes classification.

    PubMed

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Mechanization of Library Procedures in the Medium-Sized Medical Library: XIV. Correlations between National Library of Medicine Classification Numbers and MeSH Headings *

    PubMed Central

    Fenske, Ruth E.

    1972-01-01

    The purpose of this study was to determine the amount of correlation between National Library of Medicine classification numbers and MeSH headings in a body of cataloging which had already been done and then to find out which of two alternative methods of utilizing the correlation would be best. There was a correlation of 44.5% between classification numbers and subject headings in the data base studied, cataloging data covering 8,137 books. The results indicate that a subject heading index showing classification numbers would be the preferred method of utilization, because it would be more accurate than the alternative considered, an arrangement by classification numbers which would be consulted to obtain subject headings. PMID:16017607

  13. The medication reconciliation process and classification of discrepancies: a systematic review.

    PubMed

    Almanasreh, Enas; Moles, Rebekah; Chen, Timothy F

    2016-09-01

    Medication reconciliation is a part of the medication management process and facilitates improved patient safety during care transitions. The aims of the study were to evaluate how medication reconciliation has been conducted and how medication discrepancies have been classified. We searched MEDLINE, EMBASE, CINAHL, PubMed, International Pharmaceutical Abstracts (IPA), and Web of Science (WOS), in accordance with the PRISMA statement up to April 2016. Studies were eligible for inclusion if they evaluated the types of medication discrepancy found through the medication reconciliation process and contained a classification system for discrepancies. Data were extracted by one author based on a predefined table, and 10% of included studies were verified by two authors. Ninety-five studies met the inclusion criteria. Approximately one-third of included studies (n = 35, 36.8%) utilized a 'gold' standard medication list. The majority of studies (n = 57, 60%) used an empirical classification system and the number of classification terms ranged from 2 to 50 terms. Whilst we identified three taxonomies, only eight studies utilized these tools to categorize discrepancies, and 11.6% of included studies used different patient safety related terms rather than discrepancy to describe the disagreement between the medication lists. We suggest that clear and consistent information on prevalence, types, causes and contributory factors of medication discrepancy are required to develop suitable strategies to reduce the risk of adverse consequences on patient safety. Therefore, to obtain that information, we need a well-designed taxonomy to be able to accurately measure, report and classify medication discrepancies in clinical practice. © 2016 The British Pharmacological Society.

  14. Automatic classification of patients with idiopathic Parkinson's disease and progressive supranuclear palsy using diffusion MRI datasets

    NASA Astrophysics Data System (ADS)

    Talai, Sahand; Boelmans, Kai; Sedlacik, Jan; Forkert, Nils D.

    2017-03-01

    Parkinsonian syndromes encompass a spectrum of neurodegenerative diseases, which can be classified into various subtypes. The differentiation of these subtypes is typically conducted based on clinical criteria. Due to the overlap of symptoms across syndromes, accurate differential diagnosis based on clinical guidelines remains a challenge, with failure rates of up to 25%. The aim of this study is to present an image-based classification method for patients with Parkinson's disease (PD) and patients with progressive supranuclear palsy (PSP), an atypical variant of PD. Therefore, apparent diffusion coefficient (ADC) parameter maps were calculated based on diffusion-tensor magnetic resonance imaging (MRI) datasets. Mean ADC values were determined in 82 brain regions using an atlas-based approach. The extracted mean ADC values for each patient were then used as features for classification using a linear kernel support vector machine classifier. To increase the classification accuracy, feature selection was performed, which resulted in the top 17 attributes being used as the final input features. A leave-one-out cross validation based on 56 PD and 21 PSP subjects revealed that the proposed method is capable of differentiating PD and PSP patients with an accuracy of 94.8%. In conclusion, the classification of PD and PSP patients based on ADC features obtained from diffusion MRI datasets is a promising new approach for the differentiation of Parkinsonian syndromes in the broader context of decision support systems.
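
    A scikit-learn sketch of the pipeline this record outlines (feature selection, linear-kernel SVM, leave-one-out cross-validation); the synthetic ADC matrix stands in for the atlas-based regional means, and SelectKBest is an assumed stand-in for the study's unspecified feature-selection method.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: mean ADC values in 82 atlas regions per subject; y: 0 = PD, 1 = PSP (synthetic here).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(77, 82))
    y = np.array([0] * 56 + [1] * 21)

    # Keep the 17 highest-scoring regions and classify with a linear-kernel SVM,
    # evaluated by leave-one-out cross-validation as in the study.
    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=17), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", scores.mean())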

  15. Nursing Home Levels of Care: Problems and Alternatives

    PubMed Central

    Bishop, Christine E.; Plough, Alonzo L.; Willemain, Thomas R.

    1980-01-01

    Providers and recipients of nursing home care under Medicaid are currently classified into two levels of care to facilitate appropriate placement, care, and reimbursement. The inherent imprecision of the two level system leads to problems of increased cost to Medicaid, lowered quality of care, and inadequate access to care for Medicaid recipients. However, a more refined system is likely to encounter difficulties in carrying out the functions performed by the broad two-level system, including assessment of residents, prescription of needed services, and implementation of service plans. The service type-service intensity classification proposed here can work in combination with a three-part reimbursement rate to encourage more accurate matching of resident needs, services, and Medicaid payment, while avoiding disruption of care. PMID:10309329

  16. MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping

    2015-02-01

    In research on complex electrical engineering, the emergence of phasor measurement units (PMUs) is a landmark event. The establishment and application of wide area measurement systems (WAMS) in power systems has had a widespread and profound influence on the safe and stable operation of complicated power systems. In this paper, taking full advantage of the wide area synchronous phasor measurement information provided by PMUs, we carry out precise fault localization based on the principle of maximum a posteriori (MAP) probability. Large numbers of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even in the presence of white Gaussian noise, the results of the MAP classification remain consistent with the actual situation.

  17. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method has been proposed that utilises the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets and compared it with two conventional classification techniques, support vector machines and adaptive boosting. The results demonstrate that our proposed method performs more accurate classification across the various cancer datasets than those conventional classification techniques.
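
    A hedged sketch of the ensemble-of-sparse-classifiers idea: the paper's specific sparse linear solution technique is not reproduced here, so an L1-penalised logistic regression inside scikit-learn's bootstrap-aggregating wrapper stands in, and the data are synthetic.

    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # X: high-dimensional features (few samples, many features); y: binary cancer labels.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(80, 2000))
    y = rng.integers(0, 2, size=80)

    # Each base learner is a sparse (L1-penalised) linear model; bagging aggregates them.
    sparse_base = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    # Note: `estimator=` in recent scikit-learn; older versions use `base_estimator=`.
    ensemble = BaggingClassifier(estimator=sparse_base, n_estimators=50, random_state=0)
    print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())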

  18. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately and applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method obtains satisfactory classification results in comparison to other state-of-the-art methods while using smaller social dimensions. PMID:27049849
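
    A rough sketch of the overall idea under simplifying assumptions: each node's connection behaviours are summarised as a count vector, scikit-learn's LDA stands in for the paper's generative model to produce latent dimensions, and a one-vs-rest classifier handles the multi-label step. The behaviour matrix and labels below are synthetic.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    # Rows are nodes, columns are interaction targets; entries count connection behaviours.
    rng = np.random.default_rng(3)
    behaviour_counts = rng.poisson(0.3, size=(300, 100))
    labels = rng.integers(0, 2, size=(300, 5))          # multi-label indicator matrix

    # LDA summarises each node as a mixture over latent "social dimensions",
    # which then feed a one-vs-rest multi-label classifier.
    lda = LatentDirichletAllocation(n_components=20, random_state=0)
    social_dims = lda.fit_transform(behaviour_counts)

    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(social_dims, labels)
    print(clf.predict(social_dims[:3]))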

  19. LANDSAT applications to wetlands classification in the upper Mississippi River Valley. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Werth, L. F. (Principal Investigator)

    1980-01-01

    A 25% improvement in average classification accuracy was realized by processing double-date rather than single-date data. Under the spectrally and spatially complex site conditions characterizing the geographical area used, further improvement in wetland classification accuracy is apparently precluded by the spectral and spatial resolution restrictions of the LANDSAT MSS. Full-scene analysis of scanning densitometer data extracted from small scale infrared photography failed to permit discrimination of many wetland and nonwetland cover types. When classification of the photographic data was limited to wetland areas only, a much more detailed and accurate classification could be made. The integration of conventional image interpretation (to simply delineate wetland boundaries) and machine-assisted classification (to discriminate among cover types present within the wetland areas) appears to warrant further research to study the feasibility and cost of extending this methodology over a large area using LANDSAT and/or small scale photography.

  20. The Value of Ensari’s Proposal in Evaluating the Mucosal Pathology of Childhood Celiac Disease: Old Classification versus New Version

    PubMed Central

    Güreşci, Servet; Hızlı, Şamil; Şimşek, Gülçin Güler

    2012-01-01

    Objective: Small intestinal biopsy remains the gold standard in diagnosing celiac disease (CD); however, the wide spectrum of histopathological states and the differential diagnosis of CD are still a diagnostic problem for pathologists. Recently, Ensari reviewed the literature and proposed an update of the histopathological diagnosis and classification of CD. Materials and Methods: In this study, the histopathological materials of 54 children in whom CD was diagnosed at our hospital were reviewed to compare the previous Marsh and Modified Marsh-Oberhuber classifications with this new proposal. Results: We show that the Ensari classification is as accurate as the Marsh and Modified Marsh classifications in describing the consecutive states of mucosal damage seen in CD. Conclusions: Ensari’s classification is simple, practical and facilitative in diagnosing and subtyping the mucosal pathology of CD. PMID:25207015

  1. Branched-chain amino acids to tyrosine ratio (BTR) predicts intrahepatic distant recurrence and survival for early hepatocellular carcinoma.

    PubMed

    Ishikawa, Toru; Kubota, Tomoyuki; Horigome, Ryoko; Kimura, Naruhiro; Honda, Hiroki; Iwanaga, Akito; Seki, Keiichi; Honma, Terasu; Yoshida, Toshiaki

    2013-01-01

    The Child-Pugh classification system is the most widely used system for assessing hepatic functional reserve in HCC treatment. In the Child-Pugh classification system, serum albumin levels are used to accurately assess the status of protein metabolism and nutrition. To date, little attention has been given to amino acid metabolism. In the present study, we investigated whether the branched-chain amino acids to tyrosine ratio (BTR), as an indicator of amino acid metabolism, can serve as both a prognostic factor for early HCC and a predictive factor for recurrence. We conducted a cohort study of 50 patients with stage I/II HCC enrolled between May 2002 and December 2010 and examined whether BTR can serve as both a prognostic factor and a predictive factor for HCC recurrence. Overall survival rates were significantly higher in patients with a high baseline BTR than in those with a low BTR. Multivariate analysis showed that both BTR and serum albumin were prognostic factors, and that BTR was the best predictive factor for recurrence. BTR was thus a prognostic factor for early HCC, the most predictive factor for intrahepatic distant recurrence, and a contributing factor for survival.

  2. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.

  3. Sitting Posture Monitoring System Based on a Low-Cost Load Cell Using Machine Learning

    PubMed Central

    Roh, Jongryun; Park, Hyeong-jun; Lee, Kwang Jin; Hyeong, Joonho; Kim, Sayup

    2018-01-01

    Sitting posture monitoring systems (SPMSs) help assess the posture of a seated person in real-time and improve sitting posture. To date, reported SPMS studies have required many sensors mounted on the backrest plate and seat plate of a chair. The present study, therefore, developed a system that measures a total of six sitting postures, including a posture that applies a load to the backrest plate, with four load cells mounted only on the seat plate. Various machine learning algorithms were applied to the body weight ratios measured by the developed SPMS to identify the method that most accurately classified the actual sitting posture of the seated person. After classifying the sitting postures using several classifiers, average and maximum classification rates of 97.20% and 97.94%, respectively, were obtained from nine subjects with a support vector machine using the radial basis function kernel; the results obtained by this classifier showed a statistically significant difference from the results of multiple classifications using other classifiers. The proposed SPMS was able to classify six sitting postures, including the posture with loading on the backrest, and showed the possibility of classifying sitting posture even with a reduced number of sensors. PMID:29329261
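
    A minimal sketch of the best-performing configuration named in this record, an RBF-kernel SVM on load-cell body-weight ratios; the synthetic ratios, class count and hyperparameters are illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: body-weight ratios from the four seat-plate load cells; y: one of six postures.
    rng = np.random.default_rng(4)
    X = rng.dirichlet(np.ones(4), size=600)      # four ratios summing to 1 per sample
    y = rng.integers(0, 6, size=600)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())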

  4. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    PubMed

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods, which combine hand-crafted image feature descriptors with various classifiers, are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images as training data, or directly used as a black box to extract deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waiting times for training a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model, which shows highly reliable and accurate performance that has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
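
    A generic transfer-learning sketch of the domain-transfer idea, not the authors' architecture: an ImageNet-pretrained ResNet-18 stands in for the unspecified backbone, the class count and optimiser settings are assumptions, and the weights argument syntax varies with the torchvision version.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone and replace the classifier head,
    # then fine-tune end-to-end on the biomedical images (domain transfer).
    num_classes = 4                                   # hypothetical number of classes
    model = models.resnet18(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of 224x224 RGB images.
    images = torch.randn(8, 3, 224, 224)
    targets = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    print(float(loss))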

  5. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.

  6. Adiabatic Quantum Anomaly Detection and Machine Learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen; Lidar, Daniel

    2012-02-01

    We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.

  7. Classification of human coronary atherosclerotic plaques using ex vivo high-resolution multicontrast-weighted MRI compared with histopathology.

    PubMed

    Li, Tao; Li, Xin; Zhao, Xihai; Zhou, Weihua; Cai, Zulong; Yang, Li; Guo, Aitao; Zhao, Shaohong

    2012-05-01

    The objective of our study was to evaluate the feasibility of ex vivo high-resolution multicontrast-weighted MRI to accurately classify human coronary atherosclerotic plaques according to the American Heart Association classification. Thirteen human cadaver heart specimens were imaged using high-resolution multicontrast-weighted MR technique (T1-weighted, proton density-weighted, and T2-weighted). All MR images were matched with histopathologic sections according to the landmark of the bifurcation of the left main coronary artery. The sensitivity and specificity of MRI for the classification of plaques were determined, and Cohen's kappa analysis was applied to evaluate the agreement between MRI and histopathology in the classification of atherosclerotic plaques. One hundred eleven MR cross-sectional images obtained perpendicular to the long axis of the proximal left anterior descending artery were successfully matched with the histopathologic sections. For the classification of plaques, the sensitivity and specificity of MRI were as follows: type I-II (near normal), 60% and 100%; type III (focal lipid pool), 80% and 100%; type IV-V (lipid, necrosis, fibrosis), 96.2% and 88.2%; type VI (hemorrhage), 100% and 99.0%; type VII (calcification), 93% and 100%; and type VIII (fibrosis without lipid core), 100% and 99.1%, respectively. Isointensity, which indicates lipid composition on histopathology, was detected on MRI in 48.8% of calcified plaques. Agreement between MRI and histopathology for plaque classification was 0.86 (p < 0.001). Ex vivo high-resolution multicontrast-weighted MRI can accurately classify advanced atherosclerotic plaques in human coronary arteries.

  8. SkICAT: A cataloging and analysis tool for wide field imaging surveys

    NASA Technical Reports Server (NTRS)

    Weir, N.; Fayyad, U. M.; Djorgovski, S. G.; Roden, J.

    1992-01-01

    We describe an integrated system, SkICAT (Sky Image Cataloging and Analysis Tool), for the automated reduction and analysis of the Palomar Observatory-ST ScI Digitized Sky Survey. The Survey will consist of the complete digitization of the photographic Second Palomar Observatory Sky Survey (POSS-II) in three bands, comprising nearly three Terabytes of pixel data. SkICAT applies a combination of existing packages, including FOCAS for basic image detection and measurement and SAS for database management, as well as custom software, to the task of managing this wealth of data. One of the most novel aspects of the system is its method of object classification. Using state-of-the-art machine learning classification techniques (GID3* and O-BTree), we have developed a powerful method for automatically distinguishing point sources from non-point sources and artifacts, achieving comparably accurate discrimination a full magnitude fainter than in previous Schmidt plate surveys. The learning algorithms produce decision trees for classification by examining instances of objects classified by eye on both plate and higher quality CCD data. The same techniques will be applied to perform higher-level object classification (e.g., of galaxy morphology) in the near future. Another key feature of the system is the facility to integrate the catalogs from multiple plates (and portions thereof) to construct a single catalog of uniform calibration and quality down to the faintest limits of the survey. SkICAT also provides a variety of data analysis and exploration tools for the scientific utilization of the resulting catalogs. We include initial results of applying this system to measure the counts and distribution of galaxies in two bands down to Bj of approximately 21 mag over an approximately 70 square degree multi-plate field from POSS-II. SkICAT is constructed in a modular and general fashion and should be readily adaptable to other large-scale imaging surveys.

  9. Real-time ultrasound image classification for spine anesthesia using local directional Hadamard features.

    PubMed

    Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N

    2015-06-01

    Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
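
    A hedged sketch of the Hadamard-feature step described in this record: the fast Walsh-Hadamard transform below uses natural (Hadamard) ordering rather than the sequency ordering used in the paper, and the patch size, number of retained coefficients and MLP classifier are stand-ins for details the abstract does not specify.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def fwht(a):
        """Fast Walsh-Hadamard transform of a 1-D array whose length is a power of two
        (natural ordering; the paper uses sequency ordering)."""
        a = np.asarray(a, dtype=float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), h * 2):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a

    # Hypothetical pipeline: flatten local ultrasound patches, keep low-order Hadamard
    # coefficients as features, and train a small neural network classifier.
    rng = np.random.default_rng(5)
    patches = rng.normal(size=(400, 64))                          # 400 patches of 64 pixels
    features = np.array([np.abs(fwht(p))[:16] for p in patches])  # low-order coefficients
    labels = rng.integers(0, 2, size=400)                         # target plane vs. not

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(features, labels)
    print("training accuracy:", clf.score(features, labels))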

  10. Steganalysis feature improvement using expectation maximization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is treated as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with embedding percentages between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
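
    A minimal sketch of Expectation-Maximization with Gaussian mixture models applied to image features, assuming synthetic feature vectors and only two components for brevity (the paper's setup uses clean images plus stego images at several embedding rates).

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical steganalysis features for clean images and for stego images.
    rng = np.random.default_rng(6)
    clean = rng.normal(0.0, 1.0, size=(300, 10))
    stego = rng.normal(0.8, 1.2, size=(300, 10))
    X = np.vstack([clean, stego])

    # EM fits the mixture; each component plays the role of one class (clean, or stego
    # at some embedding rate), supporting both anomaly and multi-class detection.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(X)
    cluster = gmm.predict(X)
    print("cluster sizes:", np.bincount(cluster))
    print("log-likelihood of a new sample:", gmm.score_samples(rng.normal(size=(1, 10))))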

  11. Higher-order neural network software for distortion invariant object recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.

  12. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of the proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy retinas and of unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. The proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only a short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.
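
    A sketch of the second and third stages only, under stated assumptions: the first variational mode is assumed to be supplied by an existing VMD implementation (a random array stands in here), and simple first-order statistics stand in for the four texture descriptors, which the abstract does not name.

    import numpy as np
    from scipy import stats
    from sklearn.svm import SVC

    def texture_descriptors(mode):
        """First-order texture statistics of an image mode (stand-ins for the
        paper's four descriptors): mean, standard deviation, skewness, entropy."""
        v = mode.ravel().astype(float)
        hist, _ = np.histogram(v, bins=64)
        p = hist / hist.sum()
        p = p[p > 0]
        return np.array([v.mean(), v.std(), stats.skew(v), -(p * np.log(p)).sum()])

    # Random "first modes" stand in for VMD output so the sketch runs end to end.
    rng = np.random.default_rng(7)
    modes = [rng.normal(size=(64, 64)) for _ in range(100)]
    X = np.array([texture_descriptors(m) for m in modes])
    y = rng.integers(0, 2, size=100)          # 0 = healthy retina, 1 = hemorrhages

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))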

  13. RAIRS2 a new expert system for diagnosing tuberculosis with real-world tournament selection mechanism inside artificial immune recognition system.

    PubMed

    Saybani, Mahmoud Reza; Shamshirband, Shahaboddin; Golzari, Shahram; Wah, Teh Ying; Saeed, Aghabozorgi; Mat Kiah, Miss Laiha; Balas, Valentina Emilia

    2016-03-01

    Tuberculosis is a major global health problem that has been ranked as the second leading cause of death from an infectious disease worldwide, after the human immunodeficiency virus. Diagnosis based on cultured specimens is the reference standard; however, results take weeks to obtain. Slow and insensitive diagnostic methods have hampered the global control of tuberculosis, and scientists are looking for early detection strategies, which remain the foundation of tuberculosis control. Consequently, there is a need to develop an expert system that helps medical professionals to accurately diagnose the disease. The objective of this study is to diagnose tuberculosis using a machine learning method. The artificial immune recognition system (AIRS) has been used successfully for diagnosing various diseases. However, little effort has been undertaken to improve its classification accuracy. In order to increase the classification accuracy, this study introduces a new hybrid system that incorporates a real-world tournament selection mechanism into AIRS. This mechanism is used to control the population size of the model and to overcome the existing selection pressure. Patient epicrisis reports obtained from the Pasteur laboratory in northern Iran were used as the benchmark data set. The sample consisted of 175 records, of which 114 (65%) were positive for TB and the remaining 61 (35%) were negative. The classification performance was measured through tenfold cross-validation, root-mean-square error (RMSE), sensitivity, and specificity. With an accuracy of 100%, an RMSE of 0, a sensitivity of 100%, and a specificity of 100%, the proposed method was able to successfully classify tuberculosis cases. In addition, the proposed method is comparable with the top classifiers used in this research.
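
    For readers unfamiliar with the selection mechanism named in this record, a plain tournament-selection sketch follows; the paper's "real-world tournament selection" variant may differ in detail, and the cell names and stimulation values are purely illustrative.

    import random

    def tournament_select(population, fitness, k=3):
        """Sample k candidates at random and return the fittest one
        (the basic mechanism used to control population size and selection pressure)."""
        contestants = random.sample(range(len(population)), k)
        winner = max(contestants, key=lambda i: fitness[i])
        return population[winner]

    # toy usage: memory cells scored by hypothetical stimulation values
    cells = ["cell_a", "cell_b", "cell_c", "cell_d", "cell_e"]
    stimulation = [0.2, 0.9, 0.4, 0.7, 0.1]
    random.seed(0)
    print([tournament_select(cells, stimulation) for _ in range(5)])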

  14. Invasive endocervical adenocarcinoma: proposal for a new pattern-based classification system with significant clinical implications: a multi-institutional study.

    PubMed

    Diaz De Vivar, Andrea; Roma, Andres A; Park, Kay J; Alvarado-Cabrero, Isabel; Rasty, Golnar; Chanona-Vilchis, Jose G; Mikami, Yoshiki; Hong, Sung R; Arville, Brent; Teramoto, Norihiro; Ali-Fehmi, Rouba; Rutgers, Joanne K L; Tabassum, Farah; Barbuto, Denise; Aguilera-Barrantes, Irene; Shaye-Brown, Alexandra; Daya, Dean; Silva, Elvio G

    2013-11-01

    The management of endocervical adenocarcinoma is largely based on tumor size and depth of invasion (DOI); however, DOI is difficult to measure accurately. The surgical treatment includes resection of regional lymph nodes, even though most lymph nodes are negative and lymphadenectomies can cause significant morbidity. We have investigated alternative parameters to better identify patients at risk of node metastases. Cases of invasive endocervical adenocarcinoma from 12 institutions were reviewed, and clinical/pathologic features assessed: patients' age, tumor size, DOI, differentiation, lymph-vascular invasion, lymph node metastases, recurrences, and stage. Cases were classified according to a new pattern-based system into Pattern A (well-demarcated glands), B (early destructive stromal invasion arising from well-demarcated glands), and C (diffuse destructive invasion). In total, 352 cases (FIGO Stages I-IV) were identified. Patients' age ranged from 20 to 83 years (mean 45), DOI ranged from 0.2 to 27 mm (mean 6.73), and lymph-vascular invasion was present in 141 cases. Forty-nine (13.9%) demonstrated lymph node metastases. Using this new system, 73 patients (20.7%) with Pattern A tumors (all Stage I) were identified. None had lymph node metastases and/or recurrences. Ninety patients (25.6%) had Pattern B tumors, of which 4 (4.4%) had positive nodes; whereas 189 (53.7%) had Pattern C tumors, of which 45 (23.8%) had metastatic nodes. The proposed classification system can spare 20.7% of patients (Pattern A) of unnecessary lymphadenectomy. Patients with Pattern B rarely present with positive nodes. An aggressive approach is justified in patients with Pattern C. This classification system is simple, easy to apply, and clinically significant.

  15. [Solitary fibrous tumors and hemangiopericytomas of the meninges: Immunophenotype and histoprognosis in a series of 17 cases].

    PubMed

    Savary, Caroline; Rousselet, Marie-Christine; Michalak, Sophie; Fournier, Henri-Dominique; Taris, Michaël; Loussouarn, Delphine; Rousseau, Audrey

    2016-08-01

    The 2007 World Health Organization (WHO) classification of tumors of the central nervous system distinguishes meningeal hemangiopericytomas (HPC) from solitary fibrous tumors (SFT). In the WHO classification of tumors of soft tissue and bone, these neoplasms are no longer separate entities since the discovery in 2013 of a common oncogenic event, i.e. the NAB2-STAT6 gene fusion. A shared histoprognostic grading system, called the "Marseille grading system", was recently proposed, based on hypercellularity, mitotic count and necrosis. We evaluated the immunophenotype and histoprognosis in a retrospective cohort of intracranial HPC and SFT. Fifteen initial tumors and 2 recurrences were evaluated by immunohistochemistry for STAT6, CD34, EMA, progesterone receptors and Ki67. The prognostic value of the WHO and Marseille grading systems was tested on 12 patients with clinical follow-up. The initial tumors comprised 11 HPC and 4 SFT. STAT6 and CD34 were expressed in 16/17 tumors, EMA and progesterone receptors in 2 and 5 cases, respectively. The Ki67 labelling index was 6.25% in HPC and 3% in SFT. Half of the tumors recurred between 2 and 9 years after initial diagnosis (mean time 5 years). In this small cohort, no statistically significant difference in the risk of recurrence was associated with either grade (WHO or Marseille). The diagnosis of HPC and SFT is facilitated by the almost constant immuno-expression of STAT6, which justifies their common classification. The high rate of recurrence implies very long-term follow-up because the current grading systems do not accurately predict the individual risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  16. Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

    NASA Astrophysics Data System (ADS)

    Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung

    2015-03-01

    The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used segmentation by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all the inference values on the output score of the fuzzy system, we use a revised weighted average method, in which all the rectangular regions defined by the inference values are considered when calculating the output score. Fourth, the classification of eye openness or closure is successfully performed by the proposed fuzzy-based method on low-resolution eye images captured in an environment where people watch TV at a distance. By using the fuzzy logic system, our method does not require an additional training procedure, irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.

  17. The comprehensiveness of the ESHRE/ESGE classification of female genital tract congenital anomalies: a systematic review of cases not classified by the AFS system

    PubMed Central

    Di Spiezio Sardo, A.; Campo, R.; Gordts, S.; Spinelli, M.; Cosimato, C.; Tanos, V.; Brucker, S.; Li, T. C.; Gergolet, M.; De Angelis, C.; Gianaroli, L.; Grimbizis, G.

    2015-01-01

    STUDY QUESTION How comprehensive is the recently published European Society of Human Reproduction and Embryology (ESHRE)/European Society for Gynaecological Endoscopy (ESGE) classification system of female genital anomalies? SUMMARY ANSWER The ESHRE/ESGE classification provides a comprehensive description and categorization of almost all of the currently known anomalies that could not be classified properly with the American Fertility Society (AFS) system. WHAT IS KNOWN ALREADY Until now, the more accepted classification system, namely that of the AFS, is associated with serious limitations in effective categorization of female genital anomalies. Many cases published in the literature could not be properly classified using the AFS system, yet a clear and accurate classification is a prerequisite for treatment. STUDY DESIGN, SIZE AND DURATION The CONUTA (CONgenital UTerine Anomalies) ESHRE/ESGE group conducted a systematic review of the literature to examine if those types of anomalies that could not be properly classified with the AFS system could be effectively classified with the use of the new ESHRE/ESGE system. An electronic literature search through Medline, Embase and Cochrane library was carried out from January 1988 to January 2014. Three participants independently screened, selected articles of potential interest and finally extracted data from all the included studies. Any disagreement was discussed and resolved after consultation with a fourth reviewer and the results were assessed independently and approved by all members of the CONUTA group. PARTICIPANTS/MATERIALS, SETTING, METHODS Among the 143 articles assessed in detail, 120 were finally selected reporting 140 cases that could not properly fit into a specific class of the AFS system. Those 140 cases were clustered in 39 different types of anomalies. MAIN RESULTS AND THE ROLE OF CHANCE The congenital anomaly involved a single organ in 12 (30.8%) out of the 39 types of anomalies, while multiple organs and/or segments of Müllerian ducts (complex anomaly) were involved in 27 (69.2%) types. Uterus was the organ most frequently involved (30/39: 76.9%), followed by cervix (26/39: 66.7%) and vagina (23/39: 59%). In all 39 types, the ESHRE/ESGE classification system provided a comprehensive description of each single or complex anomaly. A precise categorization was reached in 38 out of 39 types studied. Only one case of a bizarre uterine anomaly, with no clear embryological defect, could not be categorized and thus was placed in Class 6 (un-classified) of the ESHRE/ESGE system. LIMITATIONS, REASONS FOR CAUTION The review of the literature was thorough but we cannot rule out the possibility that other defects exist which will also require testing in the new ESHRE/ESGE system. These anomalies, however, must be rare. WIDER IMPLICATIONS OF THE FINDINGS The comprehensiveness of the ESHRE/ESGE classification adds objective scientific validity to its use. This may, therefore, promote its further dissemination and acceptance, which will have a positive outcome in clinical care and research. STUDY FUNDING/COMPETING INTEREST(S) None. PMID:25788565

  18. Can glenoid wear be accurately assessed using x-ray imaging? Evaluating agreement of x-ray and magnetic resonance imaging (MRI) Walch classification.

    PubMed

    Kopka, Michaela; Fourman, Mitchell; Soni, Ashish; Cordle, Andrew C; Lin, Albert

    2017-09-01

    The Walch classification is the most recognized means of assessing glenoid wear in preoperative planning for shoulder arthroplasty. This classification relies on advanced imaging, which is more expensive and less practical than plain radiographs. The purpose of this study was to determine whether the Walch classification could be accurately applied to x-ray images compared with magnetic resonance imaging (MRI) as the gold standard. We hypothesized that x-ray images cannot adequately replace advanced imaging in the evaluation of glenoid wear. Preoperative axillary x-ray images and MRI scans of 50 patients assessed for shoulder arthroplasty were independently reviewed by 5 raters. Glenoid wear was individually classified according to the Walch classification using each imaging modality. The raters then collectively reviewed the MRI scans and assigned a consensus classification to serve as the gold standard. The κ coefficient was used to determine interobserver agreement for x-ray images and independent MRI reads, as well as the agreement between x-ray images and consensus MRI. The inter-rater agreement for x-ray images and MRIs was "moderate" (κ = 0.42 and κ = 0.47, respectively) for the 5-category Walch classification (A1, A2, B1, B2, C) and "moderate" (κ = 0.54 and κ = 0.59, respectively) for the 3-category Walch classification (A, B, C). The agreement between x-ray images and consensus MRI was much lower: "fair-to-moderate" (κ = 0.21-0.51) for the 5-category and "moderate" (κ = 0.36-0.60) for the 3-category Walch classification. The inter-rater agreement between x-ray images and consensus MRI is "fair-to-moderate." This is lower than the previously reported reliability of the Walch classification using computed tomography scans. Accordingly, x-ray images are inferior to advanced imaging when assessing glenoid wear. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
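
    The agreement statistic used throughout this record is Cohen's kappa; a minimal sketch of its computation follows, with hypothetical rater grades rather than the study's data.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical 3-category Walch grades (A/B/C) assigned to the same shoulders
    # by one rater reading x-ray images and one reading MRI.
    xray_grades = ["A", "A", "B", "C", "B", "A", "C", "B", "B", "A"]
    mri_grades  = ["A", "B", "B", "C", "B", "A", "B", "B", "C", "A"]

    kappa = cohen_kappa_score(xray_grades, mri_grades)
    print(f"kappa = {kappa:.2f}")   # values around 0.41-0.60 are conventionally read as "moderate"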

  19. Annotation and prediction of stress and workload from physiological and inertial signals.

    PubMed

    Ghosh, Arindam; Danieli, Morena; Riccardi, Giuseppe

    2015-08-01

    Continuous daily stress and high workload can have negative effects on individuals' physical and mental well-being. It has been shown that physiological signals may support the prediction of stress and workload. However, previous research is limited by the low diversity of signals contributing to such predictive tasks and by controlled experimental designs. In this paper we present 1) a pipeline for continuous and real-life acquisition of physiological and inertial signals, 2) a mobile agent application for on-the-go event annotation, and 3) an end-to-end signal processing and classification system for stress and workload from diverse signal streams. We study physiological signals such as Galvanic Skin Response (GSR), Skin Temperature (ST), Inter Beat Interval (IBI) and Blood Volume Pulse (BVP) collected using a non-invasive wearable device, and inertial signals collected from accelerometer and gyroscope sensors. We combine them with subjects' inputs (e.g. event tagging) acquired using the agent application, and their emotion regulation scores. In our experiments we explore signal combination and selection techniques for stress and workload prediction from subjects whose signals have been recorded continuously during their daily life. The end-to-end classification system covers feature extraction, signal artifact removal, and classification. We show that a combination of physiological, inertial and user event signals provides accurate prediction of stress for real-life users and signals.
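
    As a loose illustration of the kind of end-to-end pipeline the abstract describes, the sketch below windows several signal streams into summary features and cross-validates a classifier; the signal names, window length, sampling rate and choice of a random forest are assumptions, not the paper's configuration.

```python
# Sketch of a multi-signal feature pipeline for stress prediction.
# Signal names, window length and classifier are illustrative assumptions,
# not the configuration used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(streams, fs=4, win_s=60):
    """Slice each 1-D stream (GSR, ST, IBI, BVP, accelerometer...) into windows
    and summarize every window with mean, std, min and max."""
    win = fs * win_s
    n_win = min(len(s) for s in streams) // win
    feats = []
    for i in range(n_win):
        row = []
        for s in streams:
            seg = s[i * win:(i + 1) * win]
            row += [seg.mean(), seg.std(), seg.min(), seg.max()]
        feats.append(row)
    return np.array(feats)

# Hypothetical one-hour recordings: four physiological streams plus accelerometer magnitude
rng = np.random.default_rng(0)
streams = [rng.normal(size=4 * 3600) for _ in range(5)]
X = window_features(streams)
y = rng.integers(0, 2, size=len(X))        # stress / no-stress label per window

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```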

  20. A binary method for simple and accurate two-dimensional cursor control from EEG with minimal subject training.

    PubMed

    Kayagil, Turan A; Bai, Ou; Henriquez, Craig S; Lin, Peter; Furlani, Stephen J; Vorbach, Sherry; Hallett, Mark

    2009-05-06

    Brain-computer interfaces (BCI) use electroencephalography (EEG) to interpret user intention and control an output device accordingly. We describe a novel BCI method to use a signal from five EEG channels (comprising one primary channel with four additional channels used to calculate its Laplacian derivation) to provide two-dimensional (2-D) control of a cursor on a computer screen, with simple threshold-based binary classification of band power readings taken over pre-defined time windows during subject hand movement. We tested the paradigm with four healthy subjects, none of whom had prior BCI experience. Each subject played a game wherein he or she attempted to move a cursor to a target within a grid while avoiding a trap. We also present supplementary results including one healthy subject using motor imagery, one primary lateral sclerosis (PLS) patient, and one healthy subject using a single EEG channel without Laplacian derivation. For the four healthy subjects using real hand movement, the system provided accurate cursor control with little or no required user training. The average accuracy of the cursor movement was 86.1% (SD 9.8%), which is significantly better than chance (p = 0.0015). The best subject achieved a control accuracy of 96%, with only one incorrect bit classification out of 47. The supplementary results showed that control can be achieved under the respective experimental conditions, but with reduced accuracy. The binary method provides naïve subjects with real-time control of a cursor in 2-D using dichotomous classification of synchronous EEG band power readings from a small number of channels during hand movement. The primary strengths of our method are simplicity of hardware and software, and high accuracy when used by untrained subjects.
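
    The decision rule described above is a threshold on band power computed over a pre-defined window. A minimal sketch of such a rule follows, assuming a Welch power spectral density, an 8-30 Hz band and an arbitrary calibration threshold; none of these values are taken from the paper.

```python
# Sketch of threshold-based binary classification of EEG band power
# (band limits, window length and threshold are illustrative assumptions).
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` within [f_lo, f_hi] Hz using Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def classify_bit(laplacian_eeg, fs=256, threshold=1.0e-11):
    """Return 1 if band power over the decision window exceeds the calibrated
    threshold, else 0 (one bit of the 2-D cursor command)."""
    power = band_power(laplacian_eeg, fs, 8.0, 30.0)
    return int(power > threshold)

# Hypothetical one-second decision window from the Laplacian-derived channel
rng = np.random.default_rng(1)
window = rng.normal(scale=1e-5, size=256)
print(classify_bit(window, fs=256))
```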

  1. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
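
    The 3x3-window variant of the network input can be sketched as follows; the band count, class count, hidden-layer size and use of scikit-learn's MLP are illustrative assumptions rather than the original backpropagation setup.

```python
# Sketch: classifying multispectral pixels from 3x3 local windows with a
# small neural network (band count, hidden size and classes are assumptions).
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_windows(image, labels, size=3):
    """Flatten each size x size neighbourhood (all bands) into one feature row."""
    r = size // 2
    rows, cols, bands = image.shape
    X, y = [], []
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            patch = image[i - r:i + r + 1, j - r:j + r + 1, :]
            X.append(patch.ravel())
            y.append(labels[i, j])
    return np.array(X), np.array(y)

# Hypothetical 7-band scene with 4 land-cover classes
rng = np.random.default_rng(2)
scene = rng.random((64, 64, 7))
truth = rng.integers(0, 4, size=(64, 64))

X, y = extract_windows(scene, truth)
# One hidden node per class, mirroring the layer-size finding in the abstract
net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=300, random_state=0)
net.fit(X, y)
print(net.score(X, y))
```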

  2. An Accurate Direction Finding Scheme Using Virtual Antenna Array via Smartphones

    PubMed Central

    Wang, Xiaopu; Xiong, Yan; Huang, Wenchao

    2016-01-01

    With the development of localization technologies, researchers solve the indoor localization problems using diverse methods and equipment. Most localization techniques require either specialized devices or fingerprints, which are inconvenient for daily use. Therefore, we propose and implement an accurate, efficient and lightweight system for indoor direction finding using common smartphones and loudspeakers. Our method is derived from a key insight: by moving a smartphone in regular patterns, we can effectively emulate the sensitivity and functionality of a Uniform Antenna Array to estimate the angle of arrival of the target signal. Specifically, a user only needs to hold his smartphone still in front of him and then rotate his body through 360∘ with the smartphone at an approximately constant velocity. Our system can then provide accurate directional guidance and lead the user to the destination (ordinary loudspeakers preset in the indoor environment that transmit high-frequency acoustic signals) after a few measurements. The major challenges in implementing our system are not only emulating a virtual antenna array with ordinary smartphones but also overcoming the detection difficulties caused by the complex indoor environment. In addition, we leverage the gyroscope of the smartphone to reduce the impact of changes in the user’s motion pattern on the accuracy of our system. To mitigate the multipath effect, we leverage multiple signal classification (MUSIC) to calculate the direction of the target signal, and then design and deploy our system in various indoor scenes. Extensive comparative experiments show that our system is reliable under various circumstances. PMID:27801866

  3. DARPA counter-sniper program: Phase 1 Acoustic Systems Demonstration results

    NASA Astrophysics Data System (ADS)

    Carapezza, Edward M.; Law, David B.; Csanadi, Christina J.

    1997-02-01

    During October 1995 through May 1996, the Defense Advanced Research Projects Agency sponsored the development of prototype systems that exploit acoustic muzzle blast and ballistic shock wave signatures to accurately predict the location of gunfire events and associated shooter locations using either single or multiple volumetric arrays. The output of these acoustic systems is an estimate of the shooter location and a classification estimate of the caliber of the shooter's weapon. A portable display and control unit provides both graphical and alphanumeric shooter location related information integrated on a two-dimensional digital map of the defended area. The final Phase I Acoustic Systems Demonstration field tests were completed in May. These tests were held at the USMC Base Camp Pendleton Military Operations Urban Training (MOUT) facility. The tests were structured to provide challenging gunfire related scenarios with significant reverberation and multi-path conditions. Special shot geometries and false alarms were included in these tests to probe potential system vulnerabilities and to determine the performance and robustness of the systems. Five prototypes developed by U.S. companies and one Israeli developed prototype were tested. This analysis quantifies the spatial resolution estimation capability (azimuth, elevation and range) of these prototypes and describes their ability to accurately classify the type of bullet fired in a challenging urban-like setting.

  4. Grading dermatologic adverse events of cancer treatments: the Common Terminology Criteria for Adverse Events Version 4.0.

    PubMed

    Chen, Alice P; Setser, Ann; Anadkat, Milan J; Cotliar, Jonathan; Olsen, Elise A; Garden, Benjamin C; Lacouture, Mario E

    2012-11-01

    Dermatologic adverse events to cancer therapies have become more prevalent and may lead to dose modifications or discontinuation of life-saving or life-prolonging treatments. This has resulted in a new collaboration between oncologists and dermatologists, which requires accurate cataloging and grading of side effects. The Common Terminology Criteria for Adverse Events Version 4.0 is a descriptive terminology and grading system that can be used for uniform reporting of adverse events. A proper understanding of this standardized classification system is essential for dermatologists to properly communicate with all physicians caring for patients with cancer. Copyright © 2012 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  5. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    PubMed

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for the transformed and raw-scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purpose.
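
    The abstract compares models using R², MAPE, RMSE and predictive ratios. The sketch below computes those metrics for a placeholder linear cost model on synthetic data; the data, the model and the feature layout are assumptions meant only to show the metric calculations.

```python
# Sketch: the comparison metrics named in the abstract (R^2, MAPE, RMSE and a
# predictive ratio) computed for a placeholder cost model on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
n = 5000
X = rng.random((n, 10))                  # stand-ins for age, sex and comorbidity-group indicators
cost = 2000 + X @ rng.uniform(500, 5000, 10) + rng.gamma(2.0, 1500, n)

model = LinearRegression().fit(X, cost)
pred = model.predict(X)

r2 = r2_score(cost, pred)
rmse = np.sqrt(mean_squared_error(cost, pred))
mape = np.mean(np.abs((cost - pred) / cost)) * 100
predictive_ratio = pred.sum() / cost.sum()   # ~1.0 means costs are neither over- nor under-predicted

print(f"R2={r2:.2f}  RMSE={rmse:.0f}  MAPE={mape:.1f}%  PR={predictive_ratio:.2f}")
```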

  6. An Assessment of Worldview-2 Imagery for the Classification Of a Mixed Deciduous Forest

    NASA Astrophysics Data System (ADS)

    Carter, Nahid

    Remote sensing provides a variety of methods for classifying forest communities and can be a valuable tool for assessing the impact of invasive species. The emerald ash borer (Agrilus planipennis) infestation of ash trees (Fraxinus) in the United States has resulted in the mortality of large stands of ash throughout the Northeast. This study assessed the suitability of multi-temporal Worldview-2 multispectral satellite imagery for classifying a mixed deciduous forest in Upstate New York. Training sites were collected using a Global Positioning System (GPS) receiver, with each training site consisting of a single tree of a corresponding class. Six classes were collected: Ash, Maple, Oak, Beech, Evergreen, and Other. Three classification schemes were investigated on four data sets: a six-class classification (6C), a two-class classification consisting of ash and all other classes combined (2C), and a five-class classification in which the ash and maple classes were merged (5C). The four data sets comprised Worldview-2 multispectral data collected in June 2010 (J-WV2) and September 2010 (S-WV2), a layer-stacked data set combining J-WV2 and S-WV2 (LS-WV2), and a reduced data set (RD-WV2) created through statistical analysis of the processed and unprocessed imagery to reduce the dimensionality of the data and identify key bands. Overall accuracy varied considerably depending on the classification type, but results indicated that ash was confused with maple in a majority of the classifications. Ash was most accurately identified using the 2C classification with the RD-WV2 data set (81.48%). Merging the ash and maple classes yielded an accuracy of 89.41%. Future work should focus on separating the ash and maple classes by using data sources such as hyperspectral imagery, LiDAR, or extensive forest surveys.

  7. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork.

    PubMed

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-04-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Meanwhile, hyperspectral microscopic images of the samples were acquired by the HMI system and processed through the following steps for further analysis. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back propagation artificial neural network (BP-ANN) model achieved the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate pork freshness accurately at the microscopic level, which plays an important role in animal food quality control.
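
    A rough sketch of the processing chain described above (PCA on the hyperspectral cube, GLCM texture features, then a neural-network classifier) is given below; an MLP stands in for the BP-ANN, the FDA step is omitted, and all image sizes and parameters are assumptions.

```python
# Sketch of the processing chain described above: PCA on the hyperspectral cube,
# GLCM texture features from the leading component, then a neural-network
# classifier (an MLP stands in for the BP-ANN; all sizes are assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image

def glcm_features(cube):
    """Project an (H, W, bands) cube onto its first principal component and
    summarize it with four GLCM texture properties."""
    h, w, b = cube.shape
    pc = PCA(n_components=1).fit_transform(cube.reshape(-1, b))[:, 0].reshape(h, w)
    img = np.uint8(255 * (pc - pc.min()) / (np.ptp(pc) + 1e-9))
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Hypothetical sample cubes labelled with three freshness levels
rng = np.random.default_rng(4)
X = np.array([glcm_features(rng.random((32, 32, 20))) for _ in range(30)])
y = rng.integers(0, 3, size=30)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(X, y)
print(clf.score(X, y))
```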

  8. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

    PubMed Central

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-01-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Meanwhile, hyperspectral microscopic images of the samples were acquired by the HMI system and processed through the following steps for further analysis. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back propagation artificial neural network (BP-ANN) model achieved the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate pork freshness accurately at the microscopic level, which plays an important role in animal food quality control. PMID:29805285

  9. Remembering Left–Right Orientation of Pictures

    PubMed Central

    Bartlett, James C.; Gernsbacher, Morton Ann; Till, Robert E.

    2015-01-01

    In a study of recognition memory for pictures, we observed an asymmetry in classifying test items as “same” versus “different” in left–right orientation: Identical copies of previously viewed items were classified more accurately than left–right reversals of those items. Response bias could not explain this asymmetry, and, moreover, correct “same” and “different” classifications were independently manipulable: Whereas repetition of input pictures (one vs. two presentations) affected primarily correct “same” classifications, retention interval (3 hr vs. 1 week) affected primarily correct “different” classifications. In addition, repetition but not retention interval affected judgments that previously seen pictures (both identical and reversed) were “old”. These and additional findings supported a dual-process hypothesis that links “same” classifications to high familiarity, and “different” classifications to conscious sampling of images of previously viewed pictures. PMID:2949051

  10. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  11. Empirical Testing of an Algorithm for Defining Somatization in Children

    PubMed Central

    Eisman, Howard D.; Fogel, Joshua; Lazarovich, Regina; Pustilnik, Inna

    2007-01-01

    Introduction A previous article proposed an algorithm for defining somatization in children by classifying them into three categories: well, medically ill, and somatizer; the authors suggested further empirical validation of the algorithm (Postilnik et al., 2006). We use the Child Behavior Checklist (CBCL) to provide this empirical validation. Method Parents of children seen in pediatric clinics completed the CBCL (n=126). The physicians of these children completed specially-designed questionnaires. The sample comprised 62 boys and 64 girls (age range 2 to 15 years). Classification categories included: well (n=53), medically ill (n=55), and somatizer (n=18). Analysis of variance (ANOVA) was used for statistical comparisons. Discriminant function analysis was conducted with the CBCL subscales. Results There were significant differences between the classification categories for the somatic complaints (p<0.001), social problems (p=0.004), thought problems (p=0.01), attention problems (p=0.006), and internalizing (p=0.003) subscales, and also for the total (p=0.001) and total-t (p=0.001) scales of the CBCL. Discriminant function analysis showed that 78% of somatizers and 66% of well children were accurately classified, while only 35% of the medically ill were accurately classified. Conclusion The somatization classification algorithm proposed by Postilnik et al. (2006) shows promise for classification of children and adolescents with somatic symptoms. PMID:18421368

  12. Data mining in forecasting PVT correlations of crude oil systems based on Type1 fuzzy logic inference systems

    NASA Astrophysics Data System (ADS)

    El-Sebakhy, Emad A.

    2009-09-01

    Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many empirical approaches for predicting various PVT properties based on empirical correlations and statistical regression models. In the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements of neural networks opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate than local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The approach has featured in a wide range of medical, power control system, and business journals, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that neuro-fuzzy systems are accurate and reliable and outperform most of the existing forecasting techniques. Future work could apply neuro-fuzzy systems to clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.

  13. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information and thus reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms and to evaluate the generated thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem from the Canary Islands (Spain) was chosen, Teide National Park, and Worldview-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied using pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with an object-based approach to the image fused with the Weighted Wavelet `à trous' through Fractal Dimension Maps technique. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and the small size of the species. It is therefore important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  14. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization.

    PubMed

    Yuan, Yuan; Lin, Jianzhe; Wang, Qi

    2016-12-01

    Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. However, high data correlation brings difficulty to reliable classification, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to account for the spatial coherency of HSI, which also limits classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method mainly focuses on multitask joint sparse representation (MJSR) and a stepwise Markov random field (MRF) framework, which constitute the two main contributions of this work. First, the MJSR not only reduces the spectral redundancy, but also retains the necessary correlation in the spectral domain during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances the classification accuracy and robustness. With respect to several universal quality evaluation indexes, the experimental results on the Indian Pines and Pavia University datasets demonstrate the superiority of our method compared with state-of-the-art competitors.

  15. Classification of Dual-Wavelength Airborne Laser Scanning Point Cloud Based on the Radiometric Properties of the Objects

    NASA Astrophysics Data System (ADS)

    Pilarska, M.

    2018-05-01

    Airborne laser scanning (ALS) is a well-known and widely used technology. One of its principal advantages is fast and accurate data registration. In recent years, ALS has been continuously developed. One of the latest achievements is multispectral ALS, which consists of simultaneously acquiring data at more than one laser wavelength. In this article, the results of dual-wavelength ALS data classification are presented. The data were acquired with the RIEGL VQ-1560i sensor, which is equipped with two laser scanners operating at different wavelengths: 532 nm and 1064 nm. Two classification approaches are presented in the article: a classification based on geometric relationships between points, and a classification that relies mostly on the radiometric properties of the registered objects. The overall accuracy of the geometric classification was 86%, whereas for the radiometric classification it was 81%. As a result, it can be concluded that the radiometric features provided by multispectral ALS have the potential to be successfully used in ALS point cloud classification.

  16. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, which are important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
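
    The fixed-length "spectrum" representation amounts to a k-mer count vector per sequence. A minimal sketch is shown below with an assumed k, toy barcodes and a nearest-neighbour classifier; it illustrates the representation only, not the authors' algorithms.

```python
# Sketch of a fixed-length k-mer spectrum representation for barcode sequences
# (k, the toy sequences and the nearest-neighbour classifier are assumptions).
from itertools import product
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def spectrum(seq, k=4):
    """Count every overlapping k-mer of `seq` into a fixed-length 4**k vector."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                 # skip ambiguous bases such as 'N'
            vec[index[kmer]] += 1
    return vec / max(vec.sum(), 1)        # normalise so sequence length cancels

# Toy barcodes for two hypothetical species
barcodes = ["ACGTACGTGGCCAACGT" * 3, "ACGTACGTGGCCTACGT" * 3,
            "TTGGCCAATTGGCAGCA" * 3, "TTGGCCAATTGGCTGCA" * 3]
labels = ["species_A", "species_A", "species_B", "species_B"]

X = np.array([spectrum(b) for b in barcodes])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([spectrum("ACGTACGTGGCCAACGT" * 3)]))
```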

  17. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun

    2018-02-01

    Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
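
    The classification stage can be sketched as follows: per-point radiometric and geometric features fed to a random forest. The features below (reflectance plus eigenvalue-based linearity and planarity from a fixed-radius neighbourhood) are assumptions, and the paper's adaptive-radius search is not reproduced.

```python
# Sketch of the classification stage only: per-point radiometric and geometric
# features fed to a random forest to separate foliage from wood.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def point_features(xyz, reflectance, radius=0.5):
    """Reflectance plus eigenvalue-based linearity/planarity of each point's
    fixed-radius neighbourhood."""
    tree = cKDTree(xyz)
    feats = []
    for i, p in enumerate(xyz):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:                       # too few neighbours for a shape estimate
            feats.append([reflectance[i], 0.0, 0.0])
            continue
        nbrs = xyz[idx] - xyz[idx].mean(axis=0)
        w = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1] + 1e-9
        feats.append([reflectance[i],
                      (w[0] - w[1]) / w[0],    # linearity (high for branches and stems)
                      (w[1] - w[2]) / w[0]])   # planarity
    return np.array(feats)

# Hypothetical labelled point cloud: 0 = foliage, 1 = wood
rng = np.random.default_rng(7)
xyz = rng.random((2000, 3)) * 5
reflectance = rng.random(2000)
labels = rng.integers(0, 2, 2000)

X = point_features(xyz, reflectance)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```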

  18. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    PubMed

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
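
    The ranking step of SVM-RFE is available directly in scikit-learn; a minimal sketch follows. The overlapping-ratio criterion of SVM-RFE-OA is not reproduced here, and the number of selected features is fixed arbitrarily for illustration.

```python
# Minimal SVM-RFE sketch using scikit-learn's RFE with a linear SVM.
# The overlapping-ratio stopping criterion of SVM-RFE-OA is not reproduced;
# the number of selected features is fixed purely for illustration.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                           random_state=0)

selector = RFE(estimator=SVC(kernel="linear"),
               n_features_to_select=10,   # assumed; SVM-RFE-OA would pick this from the data
               step=0.1)                  # drop 10% of the remaining features per iteration
selector.fit(X, y)

print("Selected feature indices:", selector.get_support(indices=True))
print("Ranking of first 10 features:", selector.ranking_[:10])
```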

  19. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering

    PubMed Central

    Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu

    2009-01-01

    Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we devised a new entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods, including a recently developed ensemble clustering algorithm, in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied over a range of cluster numbers tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
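
    The core of the approach is accumulating co-memberships over k-means runs with varying cluster numbers. The sketch below builds such a consensus matrix and cuts it with hierarchical clustering; the run schedule and the final linkage step are assumptions, not the MULTI-K algorithm itself.

```python
# Sketch of cluster-number-varying ensemble k-means: accumulate a co-membership
# matrix over many runs, then cut it with hierarchical clustering.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=60, centers=3, n_features=50, random_state=0)
n = len(X)

co = np.zeros((n, n))
runs = 0
for k in range(2, 8):                      # vary the number of clusters
    for seed in range(10):                 # several random restarts per k
        labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
        runs += 1
co /= runs                                 # co[i, j] = fraction of runs placing i and j together

# Older scikit-learn versions use affinity="precomputed" instead of metric=
final = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                linkage="average").fit_predict(1.0 - co)
print(np.bincount(final))
```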

  20. Mapping Sub-Antarctic Cushion Plants Using Random Forests to Combine Very High Resolution Satellite Imagery and Terrain Modelling

    PubMed Central

    Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.

    2013-01-01

    Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805

  1. Configuration of electro-optic fire source detection system

    NASA Astrophysics Data System (ADS)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

    The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and fast "sensor to shooter cycle" capabilities on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and to innocent bystanders. The modular system design makes it possible to meet each customer's specific requirements and provides excellent future growth and upgrade potential. The design and build of a fire source detection system are governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) long-range, fast and accurate fire source detection capability; II) detection and classification capability for different threats; III) threat investigation capability; IV) fire source data distribution capability (location, direction, video image, voice); V) man-portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) an Electro-Optical Unit, including a FLIR camera, a CCD camera, a laser range finder and a marker; II) an Electronic Unit, including the system computer and electronics; III) a Controller Station Unit, including the HMI of the system. This article discusses the definition and optimization processes of the system's components, and also shows how SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.

  2. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    NASA Astrophysics Data System (ADS)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectrum of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.
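
    The reported amplitude ratios suggest a very simple decision rule, sketched below; the midpoint threshold and the assignment of wavelengths to the numerator and denominator are assumptions, and the sketch does not reproduce the paper's multivariate or Bayesian classifiers.

```python
# Sketch of the two-wavelength amplitude-ratio rule implied by the abstract:
# pigmented lesions cluster near a 1.4:1 ratio and vascular lesions near 4.0:1.
# The decision threshold is a simple midpoint assumption.
def classify_lesion(amp_422_nm, amp_530_nm, threshold=2.7):
    """Return 'pigmented' or 'vascular' from peak photoacoustic amplitudes."""
    ratio = amp_422_nm / amp_530_nm
    return "vascular" if ratio >= threshold else "pigmented"

# Hypothetical peak amplitudes (arbitrary units)
print(classify_lesion(1.4, 1.0))   # -> pigmented
print(classify_lesion(4.1, 1.0))   # -> vascular
```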

  3. Forecasting Daily Volume and Acuity of Patients in the Emergency Department.

    PubMed

    Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
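
    A minimal sketch of fitting a seasonal ARIMA to a daily visit series and scoring a 7-day forecast by MAPE is shown below using statsmodels; the synthetic series and the (p,d,q)(P,D,Q,s) orders are placeholders, not the models selected in the study.

```python
# Sketch: fit a seasonal ARIMA to a daily ED-visit series and score a 7-day
# forecast with MAPE. Orders and data are placeholders only.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
days = 365
weekly = 20 * np.sin(2 * np.pi * np.arange(days) / 7)   # weekly seasonality
visits = 300 + weekly + rng.normal(0, 10, days)

train, test = visits[:-7], visits[-7:]
result = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
forecast = result.forecast(steps=7)

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"7-day MAPE: {mape:.1f}%")
```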

  4. Forecasting Daily Volume and Acuity of Patients in the Emergency Department

    PubMed Central

    Fogliatto, Flavio S.; Neyeloff, Jeruza; Kuchenbecker, Ricardo S.; Schaan, Beatriz D.

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification. PMID:27725842

  5. Real-time classification of auditory sentences using evoked cortical activity in humans

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  6. Remote sensing of Earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, Jin AU; Shin, Robert T.; Nghiem, Son V.; Yueh, Herng-Aung; Han, Hsiu C.; Lim, Harold H.; Arnold, David V.

    1990-01-01

    Remote sensing of earth terrain is examined. The layered random medium model is used to investigate the fully polarimetric scattering of electromagnetic waves from vegetation. The model is used to interpret the measured data for vegetation fields such as rice, wheat, or soybean over water or soil. Accurate calibration of polarimetric radar systems is essential for the polarimetric remote sensing of earth terrain. A polarimetric calibration algorithm using three arbitrary in-scene reflectors is developed. In the interpretation of active and passive microwave remote sensing data from the earth terrain, the random medium model was shown to be quite successful. A multivariate K-distribution is proposed to model the statistics of fully polarimetric radar returns from earth terrain. In the terrain cover classification using the synthetic aperture radar (SAR) images, the applications of the K-distribution model will provide better performance than the conventional Gaussian classifiers. The layered random medium model is used to study the polarimetric response of sea ice. Supervised and unsupervised classification procedures are also developed and applied to synthetic aperture radar polarimetric images in order to identify their various earth terrain components for more than two classes. These classification procedures were applied to San Francisco Bay and Traverse City SAR images.

  7. Hyperspectral image analysis for rapid and accurate discrimination of bacterial infections: A benchmark study.

    PubMed

    Arrigoni, Simone; Turra, Giovanni; Signoroni, Alberto

    2017-09-01

    With the rapid diffusion of Full Laboratory Automation systems, Clinical Microbiology is currently experiencing a new digital revolution. The ability to capture and process large amounts of visual data from microbiological specimen processing enables the definition of completely new objectives. These include the direct identification of pathogens growing on culturing plates, with expected improvements in rapid definition of the right treatment for patients affected by bacterial infections. In this framework, the synergies between light spectroscopy and image analysis, offered by hyperspectral imaging, are of prominent interest. This leads us to assess the feasibility of a reliable and rapid discrimination of pathogens through the classification of their spectral signatures extracted from hyperspectral image acquisitions of bacteria colonies growing on blood agar plates. We designed and implemented the whole data acquisition and processing pipeline and performed a comprehensive comparison among 40 combinations of different data preprocessing and classification techniques. High discrimination performance has been achieved also thanks to improved colony segmentation and spectral signature extraction. Experimental results reveal the high accuracy and suitability of the proposed approach, driving the selection of most suitable and scalable classification pipelines and stimulating clinical validations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An Improved Cloud Classification Algorithm for China’s FY-2C Multi-Channel Images Using Artificial Neural Network

    PubMed Central

    Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang

    2009-01-01

    The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China’s first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely-used Artificial Neural Network (ANN) methods are analyzed, together with a comparison of two other methods, Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from imagery of three FY-2C channels (IR1, 10.3–11.3 μm; IR2, 11.5–12.5 μm and WV 6.3–7.6 μm). The results show that: (1) ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud patch-level classification, by more accurately identifying cloud types such as cumulonimbus, cirrus and clouds at high latitude. The findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products. PMID:22346714

  9. An Improved Cloud Classification Algorithm for China's FY-2C Multi-Channel Images Using Artificial Neural Network.

    PubMed

    Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang

    2009-01-01

    The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China's first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely-used Artificial Neural Network (ANN) methods are analyzed, together with a comparison of two other methods, Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from imagery of three FY-2C channels (IR1, 10.3-11.3 μm; IR2, 11.5-12.5 μm and WV 6.3-7.6 μm). The results show that: (1) ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud patch-level classification, by more accurately identifying cloud types such as cumulonimbus, cirrus and clouds at high latitude. The findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products.

  10. Biomarkers in rheumatic diseases: how can they facilitate diagnosis and assessment of disease activity?

    PubMed

    Mohan, Chandra; Assassi, Shervin

    2015-11-26

    Serological and proteomic biomarkers can help clinicians diagnose rheumatic diseases earlier and assess disease activity more accurately. These markers have been incorporated into the recently revised classification criteria of several diseases to enable early diagnosis and timely initiation of treatment. Furthermore, they also facilitate more accurate subclassification and more focused monitoring for the detection of certain disease manifestations, such as lung and renal involvement. These biomarkers can also make the assessment of disease activity and treatment response more reliable. Simultaneously, several new serological and proteomic biomarkers have become available in the routine clinical setting--for example, a protein biomarker panel for rheumatoid arthritis and a myositis antibody panel for dermatomyositis and polymyositis. This review will focus on commercially available antibody and proteomic biomarkers in rheumatoid arthritis, systemic lupus erythematosus, systemic sclerosis (scleroderma), dermatomyositis and polymyositis, and axial spondyloarthritis (including ankylosing spondylitis). It will discuss how these markers can facilitate early diagnosis as well as more accurate subclassification and assessment of disease activity in the clinical setting. The ultimate goal of current and future biomarkers in rheumatic diseases is to enable early detection of these diseases and their clinical manifestations, and to provide effective monitoring and treatment regimens that are tailored to each patient's needs and prognosis. © BMJ Publishing Group Ltd 2015.

  11. An integrated method for cancer classification and rule extraction from microarray data

    PubMed Central

    Huang, Liang-Tsung

    2009-01-01

    Different microarray techniques have recently been used successfully to investigate useful information for cancer diagnosis at the gene expression level, owing to their ability to measure thousands of gene expression levels in a massively parallel way. One important issue is to improve the classification performance of microarray data; ideally, influential genes and even interpretable rules would be explored at the same time to offer biological insight. Introducing concepts of system design from software engineering, this paper presents an integrated and effective method (named X-AI) for accurate cancer classification and the acquisition of knowledge from DNA microarray data. This method includes a feature selector to systematically extract the relatively important genes so as to reduce the dimension and retain as much of the class discriminatory information as possible. Next, diagonal quadratic discriminant analysis (DQDA) was combined to classify tumors, and generalized rule induction (GRI) was integrated to establish association rules, which can give an understanding of the relationships between cancer classes and related genes. Two non-redundant datasets of acute leukemia were used to validate the proposed X-AI, showing significantly high accuracy for discriminating different classes. I also present the ability of X-AI to extract relevant genes, as well as to develop interpretable rules. Further, a web server has been established for cancer classification and it is freely available at . PMID:19272192

  12. Link prediction boosted psychiatry disorder classification for functional connectivity network

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    Functional connectivity network (FCN) is an effective tool in psychiatric disorder classification, and represents the cross-correlation of regional blood oxygenation level dependent signals. However, FCNs are often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls with incomplete FCNs, we first `repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three datasets of psychiatric disorders, including Alzheimer's Disease, Schizophrenia and Attention Deficit Hyperactivity Disorder. The experimental results show that our method not only significantly improves the classification accuracy, but also efficiently reconstructs the incomplete FCN.
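
    As a rough sketch of the feature-extraction and boosting steps, the code below computes per-node clustering coefficients of thresholded connectivity networks and feeds them to AdaBoost; the link-prediction repair step is omitted, and the threshold, network size and labels are illustrative assumptions.

```python
# Sketch: per-node clustering coefficients of thresholded functional
# connectivity networks as features for a boosted classifier.
import numpy as np
import networkx as nx
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def clustering_features(corr, threshold=0.2):
    """Binarize a correlation matrix and return the node clustering coefficients."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    graph = nx.from_numpy_array(adj)
    cc = nx.clustering(graph)
    return [cc[i] for i in range(len(corr))]

# Hypothetical cohort: 40 subjects, 30 brain regions each
rng = np.random.default_rng(6)
subjects = []
for _ in range(40):
    signals = rng.normal(size=(120, 30))           # time points x regions
    subjects.append(clustering_features(np.corrcoef(signals.T)))
X, y = np.array(subjects), rng.integers(0, 2, 40)  # patient / control labels

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```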

  13. A liver cirrhosis classification on B-mode ultrasound images by the use of higher order local autocorrelation features

    NASA Astrophysics Data System (ADS)

    Sasaki, Kenya; Mitani, Yoshihiro; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-02-01

    In this paper, we propose the use of higher order local autocorrelation (HLAC) features to classify liver cirrhosis on region-of-interest (ROI) images extracted from B-mode ultrasound images. In a previous study, we tried to classify liver cirrhosis using a Gabor filter based approach. However, our preliminary experimental results showed that the classification performance of the Gabor feature was poor. To classify liver cirrhosis accurately, we therefore examined the use of HLAC features. The experimental results show the effectiveness of HLAC features compared with the Gabor feature. Furthermore, by using a binary image produced by an adaptive thresholding method, the classification performance of the HLAC features improved further.

  14. Addition of Histology to the Paris Classification of Pediatric Crohn Disease Alters Classification of Disease Location.

    PubMed

    Fernandes, Melissa A; Verstraete, Sofia G; Garnett, Elizabeth A; Heyman, Melvin B

    2016-02-01

    The aim of the study was to investigate the value of microscopic findings in the classification of pediatric Crohn disease (CD) by determining whether classification of disease changes significantly with inclusion of histologic findings. Sixty patients were randomly selected from a cohort of patients studied at the Pediatric Inflammatory Bowel Disease Clinic at the University of California, San Francisco Benioff Children's Hospital. Two physicians independently reviewed the electronic health records of the included patients to determine the Paris classification for each patient by adhering to present guidelines and then by including microscopic findings. Macroscopic and combined disease location classifications were discordant in 34 patients (56.6%), with no statistically significant differences between groups. Interobserver agreement was higher in the combined classification (κ = 0.73, 95% confidence interval 0.65-0.82) as opposed to when classification was limited to macroscopic findings (κ = 0.53, 95% confidence interval 0.40-0.58). When evaluating the proximal upper gastrointestinal tract (Paris L4a), the interobserver agreement was better in the macroscopic compared with the combined classification. Disease extent classifications differed significantly when comparing isolated macroscopic findings (Paris classification) with the combined scheme that included microscopy. Further studies are needed to determine which scheme provides a more accurate representation of disease extent.

  15. Area estimation of crops by digital analysis of Landsat data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Hixson, M. M.; Davis, B. J.

    1978-01-01

    The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.

  16. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    PubMed Central

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and their individual neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
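
    For orientation only, the sketch below implements plain fuzzy C-means on pixel intensities with a simple neighbourhood-averaging term blended into the memberships. The enhanced spatial function in the paper is more elaborate; the 3x3 window, the mixing exponent, and the iteration count here are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_fcm(image, n_clusters=3, m=2.0, n_iter=50, spatial_weight=1.0):
        """Fuzzy C-means on pixel intensities with a basic spatial term:
        memberships are blended with their local neighbourhood average,
        which suppresses isolated noisy labels."""
        x = image.astype(float).ravel()
        rng = np.random.default_rng(0)
        u = rng.random((n_clusters, x.size))
        u /= u.sum(axis=0)
        for _ in range(n_iter):
            um = u ** m
            centers = um @ x / um.sum(axis=1)                 # cluster centroids
            d = np.abs(x[None, :] - centers[:, None]) + 1e-8  # intensity distances
            u = 1.0 / d ** (2.0 / (m - 1.0))                  # standard FCM update
            u /= u.sum(axis=0)
            # Spatial function: average each membership map over a 3x3 window.
            h = np.stack([uniform_filter(ui.reshape(image.shape), size=3).ravel()
                          for ui in u])
            u = u * h ** spatial_weight
            u /= u.sum(axis=0)
        return u.reshape(n_clusters, *image.shape), centers
    ```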

  17. SECURE INTERNET OF THINGS-BASED CLOUD FRAMEWORK TO CONTROL ZIKA VIRUS OUTBREAK.

    PubMed

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2017-01-01

    Zika virus (ZikaV) is currently one of the most important emerging viruses in the world which has caused outbreaks and epidemics and has also been associated with severe clinical manifestations and congenital malformations. Traditional approaches to combat the ZikaV outbreak are not effective for detection and control. The aim of this study is to propose a cloud-based system to prevent and control the spread of Zika virus disease using integration of mobile phones and Internet of Things (IoT). A Naive Bayesian Network (NBN) is used to diagnose the possibly infected users, and Google Maps Web service is used to provide the geographic positioning system (GPS)-based risk assessment to prevent the outbreak. It is used to represent each ZikaV infected user, mosquito-dense sites, and breeding sites on the Google map that helps the government healthcare authorities to control such risk-prone areas effectively and efficiently. The performance and accuracy of the proposed system are evaluated using dataset for 2 million users. Our system provides high accuracy for initial diagnosis of different users according to their symptoms and appropriate GPS-based risk assessment. The cloud-based proposed system contributed to the accurate NBN-based classification of infected users and accurate identification of risk-prone areas using Google Maps.
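
    The exact NBN used by the system is not specified here; the fragment below is a minimal Bernoulli naive Bayes sketch over binary symptom indicators, with hypothetical symptom columns and toy training labels, to show the diagnose-then-flag step that would feed the GPS-based risk map.

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # Hypothetical binary symptom indicators per user; the column meanings
    # (fever, rash, joint pain, conjunctivitis, headache) are illustrative only.
    symptoms = np.array([
        [1, 1, 1, 1, 0],
        [0, 0, 1, 0, 1],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 0, 1],
    ])
    labels = np.array([1, 0, 1, 0])   # 1 = possibly ZikaV-infected, 0 = unlikely

    model = BernoulliNB().fit(symptoms, labels)
    new_user = np.array([[1, 1, 0, 1, 0]])
    # The posterior probability would drive the GPS-based risk flagging.
    print(model.predict_proba(new_user))
    ```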

  18. Comparative of signal processing techniques for micro-Doppler signature extraction with automotive radar systems

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hervas, Berta; Maile, Michael; Flores, Benjamin C.

    2014-05-01

    In recent years, the automotive industry has experienced an evolution toward more powerful driver assistance systems that provide enhanced vehicle safety. These systems typically operate in the optical and microwave regions of the electromagnetic spectrum and have demonstrated high efficiency in collision and risk avoidance. Microwave radar systems are particularly relevant due to their operational robustness under adverse weather or illumination conditions. Our objective is to study different signal processing techniques suitable for extraction of accurate micro-Doppler signatures of slow moving objects in dense urban environments. Selection of the appropriate signal processing technique is crucial for the extraction of accurate micro-Doppler signatures that will lead to better results in a radar classifier system. For this purpose, we perform simulations of typical radar detection responses in common driving situations and conduct the analysis with several signal processing algorithms, including the short-time Fourier transform, the continuous wavelet transform, and kernel-based analysis methods. We take into account factors such as the relative movement between the host vehicle and the target, and the non-stationary nature of the target's movement. A comparison of results reveals that the short-time Fourier transform would be the best approach for detection and tracking purposes, while the continuous wavelet transform would be best suited for classification purposes.
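
    A minimal example of the short-time Fourier transform route, using SciPy on a simulated phase-modulated echo that mimics a micro-Doppler component; the pulse repetition frequency, modulation parameters, and window settings are assumptions made for the sketch.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    fs = 8000.0                          # assumed pulse repetition frequency, Hz
    t = np.arange(0, 1.0, 1 / fs)
    # Bulk Doppler at 200 Hz with a 2 Hz sinusoidal micro-Doppler modulation,
    # a crude stand-in for limb or wheel motion, plus complex noise.
    echo = np.exp(1j * (2 * np.pi * 200 * t + 10 * np.sin(2 * np.pi * 2 * t)))
    echo += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

    f, tt, sxx = spectrogram(echo, fs=fs, window="hann", nperseg=256,
                             noverlap=192, return_onesided=False)
    # np.abs(sxx) over (f, tt) is the time-frequency map from which the
    # micro-Doppler signature would be read off and fed to a classifier.
    ```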

  19. Classification of Ancient Mammal Individuals Using Dental Pulp MALDI-TOF MS Peptide Profiling

    PubMed Central

    Tran, Thi-Nguyen-Ny; Aboudharam, Gérard; Gardeisen, Armelle; Davoust, Bernard; Bocquet-Appel, Jean-Pierre; Flaudrops, Christophe; Belghazi, Maya; Raoult, Didier; Drancourt, Michel

    2011-01-01

    Background The classification of ancient animal corpses at the species level remains a challenging task for forensic scientists and anthropologists. Severe damage and mixed, tiny pieces originating from several skeletons may render morphological classification virtually impossible. Standard approaches are based on sequencing mitochondrial and nuclear targets. Methodology/Principal Findings We present a method that can accurately classify mammalian species using dental pulp and mass spectrometry peptide profiling. Our work was organized into three successive steps. First, after extracting proteins from the dental pulp collected from 37 modern individuals representing 13 mammalian species, trypsin-digested peptides were used for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry analysis. The resulting peptide profiles accurately classified every individual at the species level, in agreement with the parallel cytochrome b gene sequencing gold standard. Second, using a database of 279 modern spectra, we blindly classified 33 of the 37 teeth collected from the 37 modern individuals (89.1%). Third, we classified 10 of 18 teeth (56%) collected from 15 ancient individuals representing five mammal species, including human, from five burial sites dating back 8,500 years. Further comparison with an upgraded database comprising ancient specimen profiles yielded 100% classification in ancient teeth. Peptide sequencing yielded 4 and 16 different non-keratin proteins, including collagen (alpha-1 type I and alpha-2 type I), in ancient and modern human dental pulp, respectively. Conclusions/Significance Mass spectrometry peptide profiling of the dental pulp is a new approach that can be added to the arsenal of species classification tools for forensics and anthropology as a complementary method to DNA sequencing. The dental pulp is a new source of collagen and other proteins for the species classification of modern and ancient mammal individuals. PMID:21364886

  20. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
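
    The conventional SRC step that mSRC generalizes can be sketched as follows: sparse-code a patch against each class dictionary and assign it to the class with the smallest reconstruction residual. Orthogonal matching pursuit from scikit-learn stands in for whichever sparse solver the authors used; dictionary shapes and the sparsity level are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(patch, dictionaries, n_nonzero=5):
        """Assign `patch` (a flattened image patch) to the class whose dictionary
        yields the minimum residual energy under a sparse code."""
        residuals = []
        for D in dictionaries:                    # D has shape (patch_dim, n_atoms)
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
            omp.fit(D, patch)
            residuals.append(np.linalg.norm(patch - omp.predict(D)))
        return int(np.argmin(residuals)), residuals
    ```

    mSRC, as described above, would repeat the sparse-coding step several times per dictionary and pool the resulting residual energies before the final assignment.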

  1. Examining the effectiveness of discriminant function analysis and cluster analysis in species identification of male field crickets based on their calling songs.

    PubMed

    Jaiswara, Ranjana; Nandi, Diptarup; Balakrishnan, Rohini

    2013-01-01

    Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding the appropriate usage of these methods in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach, we evaluated, for both methods, the optimal number of species and the calling song characteristics that lead to the most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. Accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximum for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals. Our results also show that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
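
    A compact way to reproduce this kind of comparison in Python: linear discriminant analysis (the usual implementation of DFA) scored by cross-validation against unsupervised k-means clustering scored by agreement with the true species labels. The feature names in the comment and the fold count are assumptions, not the study's exact protocol.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans
    from sklearn.model_selection import cross_val_score
    from sklearn.metrics import adjusted_rand_score

    def compare_methods(X, y, n_species):
        """X: acoustic features per call (e.g. carrier frequency, chirp duration,
        syllable period; hypothetical columns). y: species labels."""
        # Supervised route: DFA needs a priori species labels.
        dfa_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        # Unsupervised route: cluster analysis needs no labels; agreement with
        # the true species is summarised by the adjusted Rand index.
        clusters = KMeans(n_clusters=n_species, n_init=10, random_state=0).fit_predict(X)
        return dfa_acc, adjusted_rand_score(y, clusters)
    ```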

  2. Effective Feature Selection for Classification of Promoter Sequences.

    PubMed

    K, Kouser; P G, Lavanya; Rangarajan, Lalitha; K, Acharya Kshitish

    2016-01-01

    Exploring novel computational methods in making sense of biological data has not only been a necessity, but also productive. A part of this trend is the search for more efficient in silico methods/tools for analysis of promoters, which are parts of DNA sequences that are involved in regulation of expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short-regions called motifs. In fact, the regulatory nature of the promoters seems to be largely driven by the selective presence and/or the arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way of functional analysis of promoters and to discover the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features for exploring the possibility of accurately classifying promoter sequences using some of the popular classification techniques. The classification results on the complete feature set are low, perhaps due to the huge number of features. We propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some of the popular feature transformation methods such as PCA and SVD. Also, the methods proposed are as accurate as MRMR (feature selection method) but much faster than MRMR. Such methods could be useful to categorize new promoters and explore regulatory mechanisms of gene expressions in complex eukaryotic species.

  3. Automatic detection of malaria parasite in blood images using two parameters.

    PubMed

    Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong

    2015-01-01

    Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. Diagnosis with a microscope requires much labor and time from a skilled expert, and the results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, automated classification techniques using various parameters have been studied. In this study, by measuring classification performance as two parameters were varied, we determined the parameter values that best distinguish normal from Plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and the threshold value used in binarization. The best classification performance was obtained with a malaria threshold of 72, the value with the lowest error rate when the cell threshold was fixed at 128, for detecting Plasmodium-infected red blood cells.
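
    A minimal sketch of the two-threshold scheme, assuming darker pixels correspond to stained parasites: project RGB pixels onto their first principal component, binarise with the cell threshold (128) and the malaria threshold (72) quoted above, and flag a cell when the parasite region is large enough. The min_area cutoff is illustrative, not a value from the paper.

    ```python
    import numpy as np

    def pca_grayscale(rgb):
        """Project RGB pixels onto their first principal component to reduce
        stain-to-stain colour variation before thresholding."""
        pixels = rgb.reshape(-1, 3).astype(float)
        centred = pixels - pixels.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        gray = centred @ vt[0]
        gray = (gray - gray.min()) / (np.ptp(gray) + 1e-8) * 255.0
        return gray.reshape(rgb.shape[:2])

    def classify_cell(rgb, cell_thresh=128, malaria_thresh=72, min_area=20):
        """Flag a red blood cell image as infected when enough pixels fall
        below the malaria threshold inside the cell mask."""
        gray = pca_grayscale(rgb)
        cell_mask = gray < cell_thresh            # binarise the cell region
        parasite = (gray < malaria_thresh) & cell_mask
        return parasite.sum() >= min_area
    ```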

  4. Module modified acute physiology and chronic health evaluation II: predicting the mortality of neuro-critical disease.

    PubMed

    Su, Yingying; Wang, Miao; Liu, Yifei; Ye, Hong; Gao, Daiquan; Chen, Weibi; Zhang, Yunzhou; Zhang, Yan

    2014-12-01

    This study aimed to construct and assess a module-modified acute physiology and chronic health evaluation (MM-APACHE) II model, based on the disease-category-modified acute physiology and chronic health evaluation (DCM-APACHE) II model, for predicting mortality more accurately in neuro-intensive care units (N-ICUs). In total, 1686 patients entered this prospective study. Acute physiology and chronic health evaluation (APACHE) II scores of all patients on admission and the worst 24-, 48-, and 72-hour scores were obtained. Neurological diagnosis on admission was classified into five categories: cerebral infarction, intracranial hemorrhage, neurological infection, spinal neuromuscular (SNM) disease, and other neurological diseases. The APACHE II scores of cerebral infarction, intracranial hemorrhage, and neurological infection patients were used for building the MM-APACHE II model. There were 1386 cases of cerebral infarction, intracranial hemorrhage, and neurological infection. Logistic regression showed that the 72-hour APACHE II score (Wald = 173.04, P < 0.001) and disease classification (Wald = 12.51, P = 0.02) were important in forecasting hospital mortality. The MM-APACHE II model, built on the variables of the 72-hour APACHE II score and disease category, had good discrimination (area under the receiver operating characteristic curve, AU-ROC = 0.830) and calibration (χ2 = 12.518, P = 0.20), and was better than the Knaus APACHE II model (AU-ROC = 0.778). The APACHE II severity of disease classification system cannot provide an accurate prognosis for all kinds of diseases. The MM-APACHE II model can accurately predict hospital mortality for cerebral infarction, intracranial hemorrhage, and neurological infection patients in the N-ICU.

  5. A review of intelligent systems for heart sound signal analysis.

    PubMed

    Nabih-Ali, Mohammed; El-Dahshan, El-Sayed A; Yahia, Ashraf S

    2017-10-01

    Intelligent computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis. CAD systems could provide physicians with a suggestion about the diagnosis of heart diseases. The objective of this paper is to review recently published preprocessing, feature extraction and classification techniques and their state of the art in phonocardiogram (PCG) signal analysis. The published literature reviewed in this paper shows the potential of machine learning techniques as a design tool in PCG CAD systems and reveals that CAD systems for PCG signal analysis are still an open problem. Related studies are compared in terms of their datasets, feature extraction techniques and the classifiers they used. Current achievements and limitations in developing CAD systems for PCG signal analysis using machine learning techniques are presented and discussed. In the light of this review, a number of future research directions for PCG signal analysis are provided.

  6. Creating a Canonical Scientific and Technical Information Classification System for NCSTRL+

    NASA Technical Reports Server (NTRS)

    Tiffany, Melissa E.; Nelson, Michael L.

    1998-01-01

    The purpose of this paper is to describe the new subject classification system for the NCSTRL+ project. NCSTRL+ is a canonical digital library (DL) based on the Networked Computer Science Technical Report Library (NCSTRL). The current NCSTRL+ classification system uses the NASA Scientific and Technical Information (STI) subject classifications, which have a bias towards the aerospace, aeronautics, and engineering disciplines. Examination of other scientific and technical information classification systems showed similar discipline-centric weaknesses. Traditional, library-oriented classification systems represented all disciplines, but were too generalized to serve the needs of a scientifically and technically oriented digital library. The lack of a suitable existing classification system led to the creation of a lightweight, balanced, general classification system that allows the mapping of more specialized classification schemes into the new framework. We developed this classification system to give equal weight to all STI disciplines, while keeping it compact and lightweight.

  7. Java Web Start based software for automated quantitative nuclear analysis of prostate cancer and benign prostate hyperplasia.

    PubMed

    Singh, Swaroop S; Kim, Desok; Mohler, James L

    2005-05-11

    Androgen acts via androgen receptor (AR) and accurate measurement of the levels of AR protein expression is critical for prostate research. The expression of AR in paired specimens of benign prostate and prostate cancer from 20 African and 20 Caucasian Americans was compared to demonstrate an application of this system. A set of 200 immunopositive and 200 immunonegative nuclei were collected from the images using a macro developed in Image Pro Plus. Linear Discriminant and Logistic Regression analyses were performed on the data to generate classification coefficients. Classification coefficients render the automated image analysis software independent of the type of immunostaining or image acquisition system used. The image analysis software performs local segmentation and uses nuclear shape and size to detect prostatic epithelial nuclei. AR expression is described by (a) percentage of immunopositive nuclei; (b) percentage of immunopositive nuclear area; and (c) intensity of AR expression among immunopositive nuclei or areas. The percent positive nuclei and percent nuclear area were similar by race in both benign prostate hyperplasia and prostate cancer. In prostate cancer epithelial nuclei, African Americans exhibited 38% higher levels of AR immunostaining than Caucasian Americans (two sided Student's t-tests; P < 0.05). Intensity of AR immunostaining was similar between races in benign prostate. The differences measured in the intensity of AR expression in prostate cancer were consistent with previous studies. Classification coefficients are required due to non-standardized immunostaining and image collection methods across medical institutions and research laboratories and helps customize the software for the specimen under study. The availability of a free, automated system creates new opportunities for testing, evaluation and use of this image analysis system by many research groups who study nuclear protein expression.

  8. Grading the neuroendocrine tumors of the lung: an evidence-based proposal.

    PubMed

    Rindi, G; Klersy, C; Inzani, F; Fellegara, G; Ampollini, L; Ardizzoni, A; Campanini, N; Carbognani, P; De Pas, T M; Galetta, D; Granone, P L; Righi, L; Rusca, M; Spaggiari, L; Tiseo, M; Viale, G; Volante, M; Papotti, M; Pelosi, G

    2014-02-01

    Lung neuroendocrine tumors are catalogued in four categories by the World Health Organization (WHO 2004) classification. Its reproducibility and prognostic efficacy were disputed. The WHO 2010 classification of digestive neuroendocrine neoplasms is based on Ki67 proliferation assessment and proved prognostically effective. This study aims at comparing these two classifications and at defining a prognostic grading system for lung neuroendocrine tumors. The study included 399 patients who underwent surgery and had at least 1 year of follow-up between 1989 and 2011. Data on 21 variables were collected, and the performance of the grading systems and their components was compared by Cox regression and multivariable analyses. All statistical tests were two-sided. At Cox analysis, WHO 2004 stratified patients into three major groups with statistically significant survival differences (typical carcinoid vs atypical carcinoid (AC), P=0.021; AC vs large-cell/small-cell lung neuroendocrine carcinomas, P<0.001). Optimal discrimination in three groups was observed for Ki67% (Ki67% cutoffs: G1 <4, G2 4-<25, G3 ≥25; G1 vs G2, P=0.021; and G2 vs G3, P≤0.001), mitotic count (G1 ≤2, G2 >2-47, G3 >47; G1 vs G2, P≤0.001; and G2 vs G3, P≤0.001), and presence of necrosis (G1 absent, G2 <10% of sample, G3 >10% of sample; G1 vs G2, P≤0.001; and G2 vs G3, P≤0.001) in univariable and multivariable analyses. The combination of these three variables resulted in a simple and effective grading system. A three-tier grading system based on Ki67 index, mitotic count, and necrosis, with cutoffs specifically generated for lung neuroendocrine tumors, is prognostically effective and accurate.

  9. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.

  10. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    PubMed

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, provided that the reduced setup yields results as accurate as those of conventional systems. This paper investigates the possibility of exploiting the multisource nature of electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74, indicating substantial agreement between automatic and manual scoring.
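
    A sketch of the feature-and-classifier stage under stated assumptions: CEEMDAN as provided by the PyEMD package (pip name EMD-signal, assumed available) decomposes each EOG epoch, simple per-IMF statistics form the feature vector, and a random forest with Cohen's kappa scoring stands in for the paper's exact feature set and evaluation protocol.

    ```python
    import numpy as np
    from PyEMD import CEEMDAN                    # assumed dependency (pip: EMD-signal)
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split

    N_IMFS = 6                                   # fixed so feature vectors align

    def epoch_features(eog_epoch):
        """Decompose one 30 s EOG epoch and keep per-IMF variance and mean
        absolute amplitude as a simple feature vector."""
        imfs = CEEMDAN()(eog_epoch)
        if len(imfs) < N_IMFS:                   # pad with zero rows if needed
            imfs = np.vstack([imfs, np.zeros((N_IMFS - len(imfs), imfs.shape[1]))])
        imfs = imfs[:N_IMFS]
        return np.concatenate([imfs.var(axis=1), np.abs(imfs).mean(axis=1)])

    def train_and_score(epochs, stages):
        X = np.array([epoch_features(e) for e in epochs])
        X_tr, X_te, y_tr, y_te = train_test_split(X, stages, test_size=0.3,
                                                  random_state=0)
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        return rf.score(X_te, y_te), cohen_kappa_score(y_te, rf.predict(X_te))
    ```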

  11. Performance evaluation of MLP and RBF feed forward neural network for the recognition of off-line handwritten characters

    NASA Astrophysics Data System (ADS)

    Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta

    2010-02-01

    In this paper we propose a system for the classification of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. The radial basis function (RBF) network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) by using RBF transfer functions instead of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and consistently with RBF transfer functions. With the change in the weight-update mechanism and the feature-based preprocessing module, the proposed system achieves good recognition performance.

  12. Swarm-wavelet based extreme learning machine for finger movement classification on transradial amputees.

    PubMed

    Anam, Khairul; Al-Jumaily, Adel

    2014-01-01

    The use of a small number of surface electromyography (EMG) channels on a transradial amputee in a myoelectric controller is a big challenge. This paper proposes a pattern recognition system using an extreme learning machine (ELM) optimized by particle swarm optimization (PSO). The PSO is mutated with a wavelet function to avoid becoming trapped in local minima. The proposed system is used to classify eleven imagined finger motions in five amputees using only two EMG channels. The optimal performance of wavelet-PSO was compared to a grid-search method and standard PSO. The experimental results show that the proposed system is the most accurate among the tested classifiers: it could classify the 11 finger motions with an average accuracy of about 94% across the five amputees.

  13. Making Mosquito Taxonomy Useful: A Stable Classification of Tribe Aedini that Balances Utility with Current Knowledge of Evolutionary Relationships.

    PubMed

    Wilkerson, Richard C; Linton, Yvonne-Marie; Fonseca, Dina M; Schultz, Ted R; Price, Dana C; Strickman, Daniel A

    2015-01-01

    The tribe Aedini (Family Culicidae) contains approximately one-quarter of the known species of mosquitoes, including vectors of deadly or debilitating disease agents. This tribe contains the genus Aedes, which is one of the three most familiar genera of mosquitoes. During the past decade, Aedini has been the focus of a series of extensive morphology-based phylogenetic studies published by Reinert, Harbach, and Kitching (RH&K). Those authors created 74 new, elevated or resurrected genera from what had been the single genus Aedes, almost tripling the number of genera in the entire family Culicidae. The proposed classification is based on subjective assessments of the "number and nature of the characters that support the branches" subtending particular monophyletic groups in the results of cladistic analyses of a large set of morphological characters of representative species. To gauge the stability of RH&K's generic groupings we reanalyzed their data with unweighted parsimony jackknife and maximum-parsimony analyses, with and without ordering 14 of the characters as in RH&K. We found that their phylogeny was largely weakly supported and their taxonomic rankings failed priority and other useful taxon-naming criteria. Consequently, we propose simplified aedine generic designations that 1) restore a classification system that is useful for the operational community; 2) enhance the ability of taxonomists to accurately place new species into genera; 3) maintain the progress toward a natural classification based on monophyletic groups of species; and 4) correct the current classification system that is subject to instability as new species are described and existing species more thoroughly defined. We do not challenge the phylogenetic hypotheses generated by the above-mentioned series of morphological studies. However, we reduce the ranks of the genera and subgenera of RH&K to subgenera or informal species groups, respectively, to preserve stability as new data become available.

  14. Making Mosquito Taxonomy Useful: A Stable Classification of Tribe Aedini that Balances Utility with Current Knowledge of Evolutionary Relationships

    PubMed Central

    Wilkerson, Richard C.; Linton, Yvonne-Marie; Fonseca, Dina M.; Schultz, Ted R.; Price, Dana C.; Strickman, Daniel A.

    2015-01-01

    The tribe Aedini (Family Culicidae) contains approximately one-quarter of the known species of mosquitoes, including vectors of deadly or debilitating disease agents. This tribe contains the genus Aedes, which is one of the three most familiar genera of mosquitoes. During the past decade, Aedini has been the focus of a series of extensive morphology-based phylogenetic studies published by Reinert, Harbach, and Kitching (RH&K). Those authors created 74 new, elevated or resurrected genera from what had been the single genus Aedes, almost tripling the number of genera in the entire family Culicidae. The proposed classification is based on subjective assessments of the “number and nature of the characters that support the branches” subtending particular monophyletic groups in the results of cladistic analyses of a large set of morphological characters of representative species. To gauge the stability of RH&K’s generic groupings we reanalyzed their data with unweighted parsimony jackknife and maximum-parsimony analyses, with and without ordering 14 of the characters as in RH&K. We found that their phylogeny was largely weakly supported and their taxonomic rankings failed priority and other useful taxon-naming criteria. Consequently, we propose simplified aedine generic designations that 1) restore a classification system that is useful for the operational community; 2) enhance the ability of taxonomists to accurately place new species into genera; 3) maintain the progress toward a natural classification based on monophyletic groups of species; and 4) correct the current classification system that is subject to instability as new species are described and existing species more thoroughly defined. We do not challenge the phylogenetic hypotheses generated by the above-mentioned series of morphological studies. However, we reduce the ranks of the genera and subgenera of RH&K to subgenera or informal species groups, respectively, to preserve stability as new data become available. PMID:26226613

  15. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
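
    The repeated 10-fold comparison can be written compactly with scikit-learn; the sketch below covers four of the eight listed algorithms and uses reshuffled stratified folds on an MSI feature matrix X with tissue labels y. Default hyperparameters are an assumption, not the study's settings.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    def compare_algorithms(X, y, n_repeats=100):
        """Mean 10-fold CV accuracy per algorithm, repeated with reshuffled
        folds, mirroring the repeated-testing protocol described above."""
        models = {
            "KNN": KNeighborsClassifier(),
            "DT": DecisionTreeClassifier(),
            "LDA": LinearDiscriminantAnalysis(),
            "QDA": QuadraticDiscriminantAnalysis(),
        }
        results = {}
        for name, model in models.items():
            scores = []
            for rep in range(n_repeats):
                cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
                scores.append(cross_val_score(model, X, y, cv=cv).mean())
            results[name] = float(np.mean(scores))
        return results
    ```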

  16. Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C

    2017-07-01

    To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies of the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6%, respectively. The overall classification accuracy of the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. © 2017 American Association of Physicists in Medicine.

  17. [Research of electroencephalography representational emotion recognition based on deep belief networks].

    PubMed

    Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei

    2018-04-01

    In recent years, with the rapid development of machine learning techniques, deep learning algorithms have been widely used in one-dimensional physiological signal processing. In this paper, we used a deep belief network (DBN) model, implemented in an open-source deep learning framework, to identify emotional states (positive, negative and neutral) from electroencephalography (EEG) signals, and compared the results of the DBN with a support vector machine (SVM). The EEG signals were collected from subjects under different emotional stimuli, and DBN and SVM were used to classify the signals with different characteristics and in different frequency bands. We found that the average accuracy of the differential entropy (DE) feature with the DBN is 89.12%±6.54%, which is a better performance than previous research based on the same data set. At the same time, the classification results of the DBN are better than those of the traditional SVM (average classification accuracy of 84.2%±9.24%), and its accuracy and stability show a better trend. In three experiments at different time points, a single subject achieved consistent classification results using the DBN (mean standard deviation of 1.44%), and the experimental results show that the system has steady performance and good repeatability. According to our research, the DE characteristic gives a better classification result than the other characteristics. Furthermore, the Beta band and the Gamma band have higher classification accuracy in the emotion recognition model. In summary, the performance of the classifiers is improved by using the deep learning algorithm, which provides a reference for establishing a more accurate emotion recognition system. Meanwhile, the recognition results can be traced to find the brain regions and frequency bands that are related to the emotions, which can help us understand the emotional mechanism better. This study has high academic value and practical significance, and further investigation still needs to be done.
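
    Under a Gaussian assumption, the differential entropy of a band-limited EEG segment reduces to 0.5 * log(2 * pi * e * variance), which is why DE is cheap to compute per band. The sketch below uses assumed band edges and sampling rate; it illustrates the feature only, not the authors' full pipeline.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
             "beta": (14, 31), "gamma": (31, 50)}   # typical band edges (assumed)

    def differential_entropy(x):
        """For a Gaussian signal, DE = 0.5 * log(2 * pi * e * variance)."""
        return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

    def de_features(eeg, fs=200.0):
        """DE of one EEG channel in each frequency band."""
        feats = {}
        for name, (lo, hi) in BANDS.items():
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            feats[name] = differential_entropy(filtfilt(b, a, eeg))
        return feats
    ```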

  18. Optimal Day-Ahead Scheduling of a Hybrid Electric Grid Using Weather Forecasts

    DTIC Science & Technology

    2013-12-01

    Keywords: day-ahead scheduling, weather forecast, wind power, photovoltaic power. ...cost can be reached by accurately anticipating the future renewable power productions. This thesis suggests the use of weather forecasts to establish day-ahead scheduling.

  19. Deep neural network convolution (NNC) for three-class classification of diffuse lung disease opacities in high-resolution CT (HRCT): consolidation, ground-glass opacity (GGO), and normal opacity

    NASA Astrophysics Data System (ADS)

    Hashimoto, Noriaki; Suzuki, Kenji; Liu, Junchi; Hirano, Yasushi; MacMahon, Heber; Kido, Shoji

    2018-02-01

    Consolidation and ground-glass opacity (GGO) are two major types of opacities associated with diffuse lung diseases. Accurate detection and classification of such opacities are crucially important in the diagnosis of lung diseases, but the process is subjective and suffers from interobserver variability. Our study purpose was to develop a deep neural network convolution (NNC) system for distinguishing among consolidation, GGO, and normal lung tissue in high-resolution CT (HRCT). We developed an ensemble of two deep NNC models, each of which was composed of neural network regression (NNR) with an input layer, a convolution layer, a fully connected hidden layer, and a fully connected output layer followed by a thresholding layer. The output layer of each NNC provided a map of the likelihood of being each corresponding lung opacity of interest. The two NNC models in the ensemble were connected in a class-selection layer. We trained our NNC ensemble with pairs of input 2D axial slices and "teaching" probability maps for the corresponding lung opacity, which were obtained by combining three radiologists' annotations. We randomly selected 10 and 40 slices from HRCT scans of 172 patients for each class as a training and test set, respectively. Our NNC ensemble achieved an area under the receiver-operating-characteristic (ROC) curve (AUC) of 0.981 and 0.958 in the distinction of consolidation and GGO, respectively, from normal opacity, yielding a classification accuracy of 93.3% among the 3 classes. Thus, our deep-NNC-based system for classifying diffuse lung diseases achieved high accuracy for the classification of consolidation, GGO, and normal opacity.

  20. Local pulmonary structure classification for computer-aided nodule detection

    NASA Astrophysics Data System (ADS)

    Bahlmann, Claus; Li, Xianlin; Okada, Kazunori

    2006-03-01

    We propose a new method of classifying the local structure types, such as nodules, vessels, and junctions, in thoracic CT scans. This classification is important in the context of computer aided detection (CAD) of lung nodules. The proposed method can be used as a post-process component of any lung CAD system. In such a scenario, the classification results provide an effective means of removing false positives caused by vessels and junctions, thus improving overall performance. As its main advantage, the proposed solution transforms the complex problem of classifying various 3D topological structures into a much simpler 2D data clustering problem, to which more generic and flexible solutions are available in the literature, and which is better suited for visualization. Given a nodule candidate, our solution first robustly fits an anisotropic Gaussian to the data. The resulting Gaussian center and spread parameters are used to affine-normalize the data domain so as to warp the fitted anisotropic ellipsoid into a fixed-size isotropic sphere. We propose an automatic method to extract a 3D spherical manifold, containing the appropriate bounding surface of the target structure. Scale selection is performed by a data driven entropy minimization approach. The manifold is analyzed for high intensity clusters, corresponding to protruding structures. Techniques involve EM clustering with automatic mode-number estimation, directional statistics, and hierarchical clustering with a modified Bhattacharyya distance. The estimated number of high intensity clusters explicitly determines the type of pulmonary structure: nodule (0), attached nodule (1), vessel (2), junction (>3). We show accurate classification results for selected examples in thoracic CT scans. This local procedure is more flexible and efficient than the current state of the art and will help to improve the accuracy of general lung CAD systems.

  1. Identification of extremely premature infants at high risk of rehospitalization.

    PubMed

    Ambalavanan, Namasivayam; Carlo, Waldemar A; McDonald, Scott A; Yao, Qing; Das, Abhik; Higgins, Rosemary D

    2011-11-01

    Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002-2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%-42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge.

  2. Identification of Extremely Premature Infants at High Risk of Rehospitalization

    PubMed Central

    Carlo, Waldemar A.; McDonald, Scott A.; Yao, Qing; Das, Abhik; Higgins, Rosemary D.

    2011-01-01

    OBJECTIVE: Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. METHODS: Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002–2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. RESULTS: A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%–42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. CONCLUSIONS: The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge. PMID:22007016

  3. Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.

    PubMed

    Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John

    2018-03-01

    Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.

  4. Design of monitoring system for mail-sorting based on the Profibus S7 series PLC

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Jia, S. H.; Wang, Y. H.; Liu, H.; Tang, G. C.

    2017-01-01

    With the rapid development of postal express delivery, the workload of mail sorting is increasing, but automatic mail-sorting technology is not yet mature. In view of this, the system uses a Siemens S7-300 PLC as the master station controller and Siemens S7-200/400 PLCs as slave station controllers, and achieves mail-sorting classification and monitoring through the MCGS human-machine interface configuration software, PROFIBUS-DP communication, RFID technology and a mechanical sorting manipulator. Mail items are distinguished for sorting by scanning the RFID electronic bar code (fixed code) attached to each item; the corresponding controller processes the acquired information and transmits the processed information to the sorting manipulator over PROFIBUS-DP. The system can realize accurate and efficient mail sorting, which will promote the development of mail-sorting technology.

  5. Wildlife management by habitat units: A preliminary plan of action

    NASA Technical Reports Server (NTRS)

    Frentress, C. D.; Frye, R. G.

    1975-01-01

    Procedures for yielding vegetation type maps were developed using LANDSAT data and a computer assisted classification analysis (LARSYS) to assist in managing populations of wildlife species by defined area units. Ground cover in Travis County, Texas was classified on two occasions using a modified version of the unsupervised approach to classification. The first classification produced a total of 17 classes. Examination revealed that further grouping was justified. A second analysis produced 10 classes which were displayed on printouts which were later color-coded. The final classification was 82 percent accurate. While the classification map appeared to satisfactorily depict the existing vegetation, two classes were determined to contain significant error. The major sources of error could have been eliminated by stratifying cluster sites more closely among previously mapped soil associations that are identified with particular plant associations and by precisely defining class nomenclature using established criteria early in the analysis.

  6. Uav-Based Crops Classification with Joint Features from Orthoimage and Dsm Data

    NASA Astrophysics Data System (ADS)

    Liu, B.; Shi, Y.; Duan, Y.; Wu, W.

    2018-04-01

    Accurate crop classification remains a challenging task because the same crop can exhibit different spectra and different crops can share the same spectrum. Recently, UAV-based remote sensing has gained popularity, not only for its high spatial and temporal resolution, but also for its ability to obtain spectral and spatial data at the same time. This paper focuses on how to take full advantage of spatial and spectral features to improve crop classification accuracy, based on a UAV platform equipped with a general digital camera. Texture and spatial features extracted from the RGB orthoimage and the digital surface model of the monitoring area are analysed and integrated within an SVM classification framework. Extensive experimental results indicate that the overall classification accuracy improves dramatically, from 72.9 % to 94.5 %, when the spatial features are included, which verifies the feasibility and effectiveness of the proposed method.
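
    A minimal sketch of the feature-fusion idea, not the authors' exact pipeline: RGB statistics are concatenated with DSM-derived height and texture values and fed to an SVM, so the accuracy gain from adding spatial features can be checked. All feature values here are random placeholders.

```python
# Sketch of spectral vs spectral+spatial SVM classification on placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

n_samples = 500
rgb_features = np.random.rand(n_samples, 3)       # mean R, G, B per segment
height = np.random.rand(n_samples, 1)             # mean DSM height per segment
texture = np.random.rand(n_samples, 1)            # e.g. local variance of the DSM
labels = np.random.randint(0, 4, n_samples)       # 4 hypothetical crop types

spectral_only = rgb_features
joint = np.hstack([rgb_features, height, texture])

for name, X in [("spectral only", spectral_only), ("spectral + spatial", joint)]:
    acc = cross_val_score(SVC(kernel="rbf", C=10, gamma="scale"), X, labels, cv=5)
    print(f"{name}: {acc.mean():.3f}")
```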

  7. [Definition and classification of pulmonary arterial hypertension].

    PubMed

    Nakanishi, Norifumi

    2008-11-01

    Pulmonary hypertension (PH) is a disorder that may occur either in the setting of a variety of underlying medical conditions or as a disease that uniquely affects the pulmonary vasculature. Because an accurate diagnosis of PH in a patient is essential to establish effective treatment, a classification of PH has been helpful. The first classification, established at a WHO symposium in 1973, classified PH into groups based on the known cause and defined primary pulmonary hypertension (PPH) as a separate entity of unknown cause. In 1998, the second World Symposium on PPH was held in Evian. The Evian classification introduced the concept of conditions that directly affect the pulmonary vasculature (i.e., PAH), which included PPH. In 2003, the third World Symposium on PAH convened in Venice. In the Venice classification, the term 'PPH' was abandoned in favor of 'idiopathic' within the group of diseases known as 'PAH'.

  8. The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces

    PubMed Central

    Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.

    2005-01-01

    We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167

  9. Detection of Hypertension Retinopathy Using Deep Learning and Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Triwijoyo, B. K.; Pradipto, Y. D.

    2017-01-01

    Hypertensive retinopathy (HR) is a disturbance of the retina caused by high blood pressure, in which systemic arterial changes occur in the blood vessels of the retina. Most heart attacks occur in patients whose high blood pressure symptoms went undiagnosed. Symptoms of hypertensive retinopathy include arteriolar narrowing, retinal haemorrhage and cotton wool spots. For these reasons, early diagnosis of the symptoms of hypertensive retinopathy is very urgent so that prevention and treatment can be more accurate. This research aims to develop a system for early detection of the hypertensive retinopathy stage. The proposed method determines combined features, the artery-to-vein diameter ratio (AVR) and changes in position relative to the optic disk (OD) in retinal images, for the classification of hypertensive retinopathy using Deep Neural Networks (DNN) and a Boltzmann Machines approach. We chose this approach because, based on previous research, DNN models are more accurate in image pattern recognition, whereas Boltzmann machines were selected because they allow fast iteration in the neural network learning process. The expected results of this research are a prototype system for early detection of the hypertensive retinopathy stage and an analysis of the effectiveness and accuracy of the proposed methods.
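
    The abstract does not specify the network architecture; as an illustrative stand-in for the DNN plus Boltzmann machine idea, the sketch below stacks scikit-learn's BernoulliRBM feature learner in front of a simple classifier. The inputs are random placeholders rather than AVR/optic-disk features.

```python
# Illustrative sketch only: a restricted Boltzmann machine used as an
# unsupervised feature learner in front of a simple classifier. The real
# system would use AVR/optic-disk features extracted from fundus images;
# here the inputs are random placeholders scaled to [0, 1].
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = np.random.rand(200, 64)            # placeholder feature vectors in [0, 1]
y = np.random.randint(0, 2, 200)       # 0 = normal, 1 = hypertensive retinopathy

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                         random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```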

  10. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System

    PubMed Central

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Background Detailed spatial location information is important in accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used in tracking personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, but most of them were developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Methods Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e. indoor, outdoor-static, outdoor-walking, and in-vehicle travel). Sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e. roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Results Maximum speeds in averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access, were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed a considerable loss in sensitivity for the outdoor static and outdoor walking conditions. Conclusions The random forests classification model can achieve high accuracy for the four major time-activity categories. The model also performed well with just GPS, road and tax parcel data. However, caution is warranted when generalizing the model developed from a small number of subjects to other populations. PMID:26919723
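
    A sketch of the described setup with synthetic data: a random forest over GPS-derived variables such as windowed maximum speed and distance to the nearest limited-access highway, evaluated with leave-one-subject-out cross-validation. All values and the number of samples are placeholders.

```python
# Random forest time-activity classification on synthetic GPS-derived features,
# evaluated leave-one-subject-out as in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n = 1300
X = np.column_stack([
    rng.uniform(0, 30, n),     # max speed in a 7-min window (m/s)
    rng.uniform(0, 30, n),     # max speed in a 5-min window (m/s)
    rng.uniform(0, 5000, n),   # distance to primary limited-access highway (m)
])
y = rng.integers(0, 4, n)      # indoor, outdoor-static, outdoor-walking, in-vehicle
subject = rng.integers(0, 13, n)   # 13 participants

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, groups=subject, cv=LeaveOneGroupOut())
print("leave-one-subject-out accuracy:", scores.mean())
```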

  11. A clinical decision-making mechanism for context-aware and patient-specific remote monitoring systems using the correlations of multiple vital signs.

    PubMed

    Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim

    2017-02-01

    In home-based context-aware monitoring, real-time data on a patient's multiple vital signs (e.g. heart rate, blood pressure) are continuously generated from wearable sensors. The changes in such vital parameters are highly correlated. They are also patient-centric and can be either recurrent or fluctuating. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital sign values and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a completely new approach: a patient-specific vital sign prediction system that exploits the correlations among the vitals. The developed technique can guide healthcare professionals to make accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms, along with three fundamental machine learning algorithms, are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures, such as Hamming score, F1-micro average, and accuracy, are used to interpret the prediction performance of patient-specific situation classifications. We achieved Hamming scores of 90-95% across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with single-label classifiers and with models that do not consider the correlations among the vitals. The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are an effective way to estimate future values of those vitals early. In context-aware remote monitoring, this process can greatly help doctors make quick diagnostic decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
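
    A minimal sketch of the multi-label formulation, with synthetic features and labels standing in for the statistical features of the six vital signs: one binary abnormality label per vital sign is predicted jointly, and performance is summarised with a Hamming score (taken here as one minus the Hamming loss).

```python
# Multi-label prediction of per-vital-sign abnormality flags on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(850, 12))          # statistical features of recent vitals
Y = rng.integers(0, 2, size=(850, 6))   # abnormality flags for 6 vital signs

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=200,
                                                     random_state=0))
model.fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)
print("Hamming score:", 1 - hamming_loss(Y_te, Y_hat))
```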

  12. A web-based land cover classification system based on ontology model of different classification systems

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Chen, X.

    2016-12-01

    Land cover classification systems used with remote sensing image data have been developed to meet the need for depicting land cover in scientific investigations and policy decisions. However, accuracy assessments of many data sets demonstrate that, compared with the real land surface, each thematic map produced under a specific land cover classification system contains some unavoidable flaws and unintended deviations. This work proposes a web-based land cover classification system, an integrated prototype, based on an ontology model of various classification systems, each of which is assigned the same weight in the final determination of land cover type. Ontology, a formal explication of specific concepts and relations, is employed in this prototype to build connections among the different systems and resolve naming conflicts. The process is initialized by measuring the semantic similarity between terminologies in the systems and the search key to produce a set of matching classes, and continues by searching the predefined relations among concepts of all classification systems to generate classification maps with the user-specified land cover type highlighted, based on probabilities calculated from votes cast by data sets that adopt different classification systems. The present system is verified and validated by comparing its classification results with those of the most common systems. Owing to the full consideration and meaningful expression of each classification system using ontology, and the convenience the web provides, this system, as a preliminary model, offers a flexible and extensible architecture for classification system integration and data fusion, thereby providing a strong foundation for future work.

  13. Hydrologic Landscape Regionalisation Using Deductive Classification and Random Forests

    PubMed Central

    Brown, Stuart C.; Lester, Rebecca E.; Versace, Vincent L.; Fawcett, Jonathon; Laurenson, Laurie

    2014-01-01

    Landscape classification and hydrological regionalisation studies are being increasingly used in ecohydrology to aid in the management and research of aquatic resources. We present a methodology for classifying hydrologic landscapes based on spatial environmental variables by employing non-parametric statistics and hybrid image classification. Our approach differed from previous classifications which have required the use of an a priori spatial unit (e.g. a catchment) which necessarily results in the loss of variability that is known to exist within those units. The use of a simple statistical approach to identify an appropriate number of classes eliminated the need for large amounts of post-hoc testing with different numbers of groups, or the selection and justification of an arbitrary number. Using statistical clustering, we identified 23 distinct groups within our training dataset. The use of a hybrid classification employing random forests extended this statistical clustering to an area of approximately 228,000 km² of south-eastern Australia without the need to rely on catchments, landscape units or stream sections. This extension resulted in a highly accurate regionalisation at both 30-m and 2.5-km resolution, and a less-accurate 10-km classification that would be more appropriate for use at a continental scale. A smaller case study, of an area covering 27,000 km², demonstrated that the method preserved the intra- and inter-catchment variability that is known to exist in local hydrology, based on previous research. Preliminary analysis linking the regionalisation to streamflow indices is promising, suggesting that the method could be used to predict streamflow behaviour in ungauged catchments. Our work therefore simplifies current classification frameworks that are becoming more popular in ecohydrology, while better retaining small-scale variability in hydrology, thus enabling future attempts to explain and visualise broad-scale hydrologic trends at the scale of catchments and continents. PMID:25396410
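
    A compact sketch of the hybrid scheme with synthetic data: statistical clustering of a training sample of grid cells, followed by a random forest trained on the cluster labels to extend the regionalisation to all cells. The variables, sample sizes and use of k-means are assumptions.

```python
# Hybrid regionalisation sketch: cluster a sample of grid cells on their
# environmental variables, then train a random forest on those cluster labels
# so the regionalisation can be extended to every cell without predefined
# catchments or landscape units.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
train_cells = rng.normal(size=(5000, 8))      # e.g. climate, terrain, soil variables
all_cells = rng.normal(size=(200000, 8))      # full study-area grid

cluster_labels = KMeans(n_clusters=23, random_state=0).fit_predict(train_cells)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(train_cells, cluster_labels)
regionalisation = rf.predict(all_cells)       # hydrologic landscape class per cell
```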

  14. Hydrologic landscape regionalisation using deductive classification and random forests.

    PubMed

    Brown, Stuart C; Lester, Rebecca E; Versace, Vincent L; Fawcett, Jonathon; Laurenson, Laurie

    2014-01-01

    Landscape classification and hydrological regionalisation studies are being increasingly used in ecohydrology to aid in the management and research of aquatic resources. We present a methodology for classifying hydrologic landscapes based on spatial environmental variables by employing non-parametric statistics and hybrid image classification. Our approach differed from previous classifications which have required the use of an a priori spatial unit (e.g. a catchment) which necessarily results in the loss of variability that is known to exist within those units. The use of a simple statistical approach to identify an appropriate number of classes eliminated the need for large amounts of post-hoc testing with different numbers of groups, or the selection and justification of an arbitrary number. Using statistical clustering, we identified 23 distinct groups within our training dataset. The use of a hybrid classification employing random forests extended this statistical clustering to an area of approximately 228,000 km² of south-eastern Australia without the need to rely on catchments, landscape units or stream sections. This extension resulted in a highly accurate regionalisation at both 30-m and 2.5-km resolution, and a less-accurate 10-km classification that would be more appropriate for use at a continental scale. A smaller case study, of an area covering 27,000 km², demonstrated that the method preserved the intra- and inter-catchment variability that is known to exist in local hydrology, based on previous research. Preliminary analysis linking the regionalisation to streamflow indices is promising, suggesting that the method could be used to predict streamflow behaviour in ungauged catchments. Our work therefore simplifies current classification frameworks that are becoming more popular in ecohydrology, while better retaining small-scale variability in hydrology, thus enabling future attempts to explain and visualise broad-scale hydrologic trends at the scale of catchments and continents.

  15. Effects of uncertainty and variability on population declines and IUCN Red List classifications.

    PubMed

    Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M

    2018-01-22

    The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and through uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of IUCN Red List classifications based on projected population declines. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with the true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassification, but the distribution of errors differed: matrix models led to greater overestimation of extinction risk than underestimation; process error tended to contribute to misclassification to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than those of less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on the data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
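
    A worked sketch, not the authors' simulation code: a scalar population model with lognormal process and measurement error, and a criterion A style categorisation of the observed decline against the standard 30/50/80 % reduction thresholds. All parameter values are illustrative.

```python
# Scalar population model with process and measurement error, followed by a
# criterion A style category from the observed proportional decline.
import numpy as np

def simulate_decline(n0=1000, r=-0.05, years=15, process_sd=0.1, obs_sd=0.1, seed=3):
    rng = np.random.default_rng(seed)
    n, observed = n0, []
    for _ in range(years):
        n *= np.exp(r + rng.normal(0, process_sd))           # true stochastic growth
        observed.append(n * np.exp(rng.normal(0, obs_sd)))    # count with measurement error
    return np.array(observed)

def red_list_category(decline_fraction):
    if decline_fraction >= 0.80:
        return "Critically Endangered"
    if decline_fraction >= 0.50:
        return "Endangered"
    if decline_fraction >= 0.30:
        return "Vulnerable"
    return "Least Concern / Near Threatened"

obs = simulate_decline()
decline = 1 - obs[-1] / obs[0]
print(f"observed decline: {decline:.0%} -> {red_list_category(decline)}")
```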

  16. A science-based paradigm for the classification of synthetic vitreous fibers.

    PubMed

    McConnell, E E

    2000-08-01

    Synthetic vitreous fibers (SVFs) are a broad class of inorganic vitreous silicates used in a large number of applications including thermal and acoustical insulation and filtration. Historically, they have been grouped into somewhat artificial broad categories, e.g., glass, rock (stone), slag, or ceramic fibers, based on the origin of the raw materials or the manufacturing process used to produce them. In turn, these broad categories have been used to classify SVFs according to their potential health effects, e.g., by the International Agency for Research on Cancer and the International Programme for Chemical Safety in 1988, based on the health information available at that time. During the past 10-15 years extensive new information has been developed on the health aspects of these fibers in humans, in experimental animals, and with in vitro test systems. Various chronic inhalation studies and intraperitoneal injection studies in rodents have clearly shown that within a given category of SVFs there can be a vast diversity of biological responses due to the different fiber compositions within that category. This information has been further buttressed by an in-depth knowledge of differences in the biopersistence of the various types of fibers in the lung after short-term exposure and their in vitro dissolution rates in fluids that mimic those found in the lung. This evolving body of information, which complements and explains the results of chronic animal studies, clearly shows that these "broad" categories are somewhat archaic and oversimplistic and do not represent current science. This new understanding of the relation between fiber composition, solubility, and biological activity requires a new classification system to more accurately reflect the potential health consequences of exposure to these materials. It is proposed that a new classification system be developed based on the results of short-term in vivo studies in combination with in vitro solubility studies. Indeed, the European Union has incorporated some of this knowledge, e.g., persistence in the lung, into its recent Directive on fiber classification. Copyright 2000 Academic Press.

  17. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and rapid way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that involve building modelling. In this study, the aim is the automatic generation of 3D building models from airborne LiDAR data. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which contains partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building models were generated automatically using the results of the automatic point-based classification. The results obtained for the study area verified that 3D building models can be generated successfully and automatically from raw LiDAR point cloud data.
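
    The actual hierarchical rules run inside TerraScan and are not reproduced here; the sketch below only illustrates the flavour of point-based rule classification, splitting points by height above ground and a planarity-like attribute. All thresholds and attribute names are invented.

```python
# Simplified illustration of point-based hierarchical rules for a LiDAR cloud.
import numpy as np

def classify_points(height_above_ground, planarity,
                    low_veg_max=0.5, building_min_height=2.5,
                    planarity_threshold=0.8):
    classes = np.full(height_above_ground.shape, "unclassified", dtype=object)
    classes[height_above_ground <= low_veg_max] = "ground/low vegetation"
    elevated = height_above_ground > building_min_height
    classes[elevated & (planarity >= planarity_threshold)] = "building"
    classes[elevated & (planarity < planarity_threshold)] = "high vegetation"
    classes[(height_above_ground > low_veg_max) & ~elevated] = "medium vegetation"
    return classes

rng = np.random.default_rng(4)
labels = classify_points(rng.uniform(0, 15, 10), rng.uniform(0, 1, 10))
print(labels)
```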

  18. Aided diagnosis methods of breast cancer based on machine learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Wang, Nian; Cui, Xiaoyu

    2017-08-01

    In the field of medicine, quickly and accurately determining whether a patient's tumor is malignant or benign is the key to treatment. In this paper, K-Nearest Neighbor, Linear Discriminant Analysis and Logistic Regression were applied to predict the classification of thyroid, Her-2, PR, ER, Ki67, metastasis and lymph-node status in breast cancer, in order to recognize benign and malignant breast tumors and achieve the purpose of aided diagnosis of breast cancer. The results showed that the highest classification accuracy of LDA was 88.56%, while the classification performance of KNN and Logistic Regression was better than that of LDA, with the best accuracy reaching 96.30%.
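
    A sketch of the classifier comparison. The study used its own clinical markers (ER, PR, Her-2, Ki67, etc.), which are not available here, so scikit-learn's Wisconsin breast cancer dataset stands in to make the comparison runnable end to end.

```python
# Cross-validated comparison of KNN, LDA and logistic regression on a
# stand-in breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "LDA": LinearDiscriminantAnalysis(),
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: {acc:.4f}")
```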

  19. Association Between Severity and the Determinant-Based Classification, Atlanta 2012 and Atlanta 1992, in Acute Pancreatitis

    PubMed Central

    Chen, Yuhui; Ke, Lu; Tong, Zhihui; Li, Weiqin; Li, Jieshou

    2015-01-01

    Abstract Recently, the determinant-based classification (DBC) and the Atlanta 2012 have been proposed to provide a basis for study and treatment of acute pancreatitis (AP). The present study aimed to evaluate the association between severity and the DBC, the Atlanta 2012 and the Atlanta 1992, in AP. Patients admitted to our center with AP from January 2007 to July 2013 were reviewed retrospectively. Patients were assigned to severity categories for all the 3 classification systems. The primary outcomes include long-term clinical prognosis (mortality and length-of-hospital stay), major complications (intraabdominal hemorrhage, multiple-organ dysfunction, single organ failure [OF], and sepsis) and clinical interventions (surgical drainage, continuous renal replace therapy [CRRT] lasting time, and mechanical ventilation [MV] lasting time). The classification systems were validated and compared in terms of these abovementioned primary outcomes. A total of 395 patients were enrolled in this retrospective study with an overall 8.86% in-hospital mortality. Intraabdominal hemorrhage was present in 27 (6.84%) of the patients, multiple-organ dysfunction in 73(18.48%), single OF in 67 (16.96%), and sepsis in 73(18.48%). For each classification system, different categories regarding severity were associated with statistically different clinical mortality, major complications, and clinical interventions (P < 0.05). However, the Atlanta 2012 and the DBC performed better than the Atlanta 1992, and they were comparable in predicting mortality (area under curve [AUC] 0.899 and 0.955 vs 0.585, P < 0.05); intraabdominal hemorrhage (AUC 0.930 and 0.961 vs 0.583, P < 0.05), multiple-organ dysfunction (AUC 0.858 and 0.881 vs 0.595, P < 0.05), sepsis (AUC 0.826 and 0.879 vs 0.590, P < 0.05), and surgical drainage (AUC 0.900 and 0.847 vs 0.606, P < 0.05). For continuous variables, the Atlanta 2012 and the DBC were also better than the Atlanta 1992, and they were similar in predicting CRRT lasting time (Somer D 0.379 and 0.360 vs 0.210, P < 0.05) and MV lasting time (Somer D 0.344 and 0.336 vs 0.186, P < 0.05). All the 3 classification systems accurately classify the severity of AP. However, the Atlanta 2012 and the DBC performed better than the Atlanta 1992, and they were comparable in predicting long-term clinical prognosis, major complications, and clinical interventions. PMID:25837754

  20. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
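
    A sketch of the geometry-plus-color fusion idea with synthetic data: per-point geometric descriptors are concatenated with RGB values, and an off-the-shelf classifier is trained on each feature set so the effect of adding color can be compared. The descriptors here are random placeholders rather than features computed from real neighbourhoods.

```python
# Per-point classification with geometric features alone vs geometry + color.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_points = 20000
geometry = rng.random((n_points, 5))        # placeholder geometric descriptors
color = rng.random((n_points, 3))           # normalized RGB per point
labels = rng.integers(0, 5, n_points)       # e.g. ground, building, vegetation, ...

for name, X in [("geometry only", geometry),
                ("geometry + color", np.hstack([geometry, color]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```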
