Sample records for standard classification methods

  1. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  2. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  3. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  4. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  5. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  6. Effective classification of the prevalence of Schistosoma mansoni.

    PubMed

    Mitchell, Shira A; Pagano, Marcello

    2012-12-01

    To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides) that account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, so that regions are correctly classified as moderate rather than low prevalence and thus receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
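
The pooled decision rule in the abstract above can be sketched numerically. Assuming perfect specificity and a fixed per-slide sensitivity (the paper's changing-sensitivity adjustment is omitted), the probability that a pool of k slides reads positive, and the chance a region is classified above a cut-off, follow from the binomial distribution. The function names and every parameter value here are illustrative, not the paper's fitted values:

```python
from math import comb

def pool_positive_prob(p, k, sens):
    # P(a pool of k slides reads positive), assuming perfect specificity:
    # at least one infected slide in the pool, detected with probability `sens`
    return sens * (1.0 - (1.0 - p) ** k)

def prob_classified_high(p, k, n_pools, cutoff, sens):
    # P(number of positive pools >= cutoff), with the pool count
    # distributed Binomial(n_pools, q)
    q = pool_positive_prob(p, k, sens)
    return sum(comb(n_pools, i) * q ** i * (1 - q) ** (n_pools - i)
               for i in range(cutoff, n_pools + 1))
```

Higher true prevalence makes crossing the cut-off more likely, which is the property the decision rules exploit.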

  7. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected with an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555

  8. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected with an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
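
Both tongue-color records above credit SMOTE with improving accuracy on imbalanced color classes. The core SMOTE idea is to synthesize minority samples by interpolating each sampled minority point toward one of its k nearest minority neighbours. The sketch below is a plain-Python illustration of that idea, not the algorithm as implemented in any particular library:

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE sketch: create synthetic minority samples by
    interpolating a minority point toward one of its k nearest
    minority neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                      # interpolation fraction in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original feature region.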

  9. 7 CFR 28.35 - Method of classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER... official cotton standards of the United States in effect at the time of classification. ...

  10. 7 CFR 28.35 - Method of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of classification. 28.35 Section 28.35 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Classification § 28.35 Method of classification. All cotton samples shall be classified on the basis of the...

  11. Rapid assessment of urban wetlands: Do hydrogeomorphic classification and reference criteria work?

    EPA Science Inventory

    The Hydrogeomorphic (HGM) functional assessment method is predicated on the ability of a wetland classification method based on hydrology (HGM classification) and a visual assessment of disturbance and alteration to provide reference standards against which functions in individua...

  12. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed

    Rocha, José C; Passalia, Felipe; Matos, Felipe D; Maserati, Marc P; Alves, Mayra F; Almeida, Tamie G de; Cardoso, Bruna L; Basso, Andrea C; Nogueira, Marcelo F G

    2016-08-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied in assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect either the overall morphological quality of the embryo (in cattle) or the quality of the individual embryonic structures (more relevant in human embryo classification). This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it cannot give reliable and trustworthy results. The latest approaches to improving quality assessment include the use of data from cellular metabolism, new morphological grading systems, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, and ion release by the embryo cells. There is a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great value to embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this direction, one being the use of digital images of the embryo as a basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment.

  13. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed Central

    Rocha, José C.; Passalia, Felipe; Matos, Felipe D.; Maserati Jr, Marc P.; Alves, Mayra F.; de Almeida, Tamie G.; Cardoso, Bruna L.; Basso, Andrea C.; Nogueira, Marcelo F. G.

    2016-01-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied in assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect either the overall morphological quality of the embryo (in cattle) or the quality of the individual embryonic structures (more relevant in human embryo classification). This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it cannot give reliable and trustworthy results. The latest approaches to improving quality assessment include the use of data from cellular metabolism, new morphological grading systems, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, and ion release by the embryo cells. There is a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great value to embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this direction, one being the use of digital images of the embryo as a basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment. PMID:27584609

  14. The joint use of the tangential electric field and surface Laplacian in EEG classification.

    PubMed

    Carvalhaes, C G; de Barros, J Acacio; Perreau-Guimaraes, M; Suppes, P

    2014-01-01

    We investigate the joint use of the tangential electric field (EF) and the surface Laplacian (SL) derivation as a method to improve the classification of EEG signals. We considered five classification tasks to test the validity of such an approach. In all five tasks, the joint use of the components of the EF and the SL outperformed the scalar potential. The smallest effect occurred in the classification of a mental task, wherein the average classification rate was improved by 0.5 standard deviations. The largest effect was obtained in the classification of visual stimuli and corresponded to an improvement of 2.1 standard deviations.
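
In its simplest discrete form, the surface Laplacian mentioned in the abstract above is the Hjorth approximation: an electrode's potential minus the mean of its nearest neighbours. The sketch below shows only this nearest-neighbour version (the study may well use a spline-based SL), with hypothetical electrode names:

```python
def hjorth_laplacian(potentials, neighbors):
    """Nearest-neighbour (Hjorth) surface Laplacian estimate: each
    electrode's potential minus the mean potential of its neighbours.
    Electrodes with no listed neighbours are skipped."""
    return {e: v - sum(potentials[n] for n in neighbors[e]) / len(neighbors[e])
            for e, v in potentials.items() if neighbors.get(e)}
```

This high-pass spatial filter emphasizes local sources, which is why the SL components can carry information the raw scalar potential does not.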

  15. Evaluation of air quality zone classification methods based on ambient air concentration exposure.

    PubMed

    Freeman, Brian; McBean, Ed; Gharabaghi, Bahram; Thé, Jesse

    2017-05-01

    Air quality zones are used by regulatory authorities to implement ambient air standards in order to protect human health. Air quality measurements at discrete air monitoring stations are critical tools to determine whether an air quality zone complies with local air quality standards or is noncompliant. This study presents a novel approach for evaluating air quality zone classification methods by breaking the concentration distribution of a pollutant measured at an air monitoring station into compliance and exceedance probability density functions (PDFs) and then using Monte Carlo analysis with the Central Limit Theorem to estimate long-term exposure. The purpose of this paper is to compare the risk associated with selecting one ambient air classification approach over another by testing the possible exposure an individual living within a zone may face. The chronic daily intake (CDI) is used to compare pollutant exposures over the classification duration of 3 years between two classification methods. Historical data collected from air monitoring stations in Kuwait are used to build representative models of 1-hr NO2 and 8-hr O3 within a zone that meets the compliance requirements of each method. The first method, the "3 Strike" method, is a conservative winner-take-all approach common to most compliance classification methods, while the second, the 99% Rule method, allows for more robust analyses and incorporates long-term trends. A Monte Carlo analysis is used to model the CDI for each pollutant and each method with the zone at a single station and with multiple stations. The model assumes that the zone is already in compliance with air quality standards over the 3 years under the different classification methodologies. The model shows that while the CDI of the two methods differs by 2.7% over the exposure period for the single-station case, the large number of samples taken over the duration impacts the sensitivity of the statistical tests, causing the null hypothesis of equal exposure to be rejected. Local air quality managers can use either methodology to classify the compliance of an air zone, but must accept that the 99% Rule method may permit exposures that are statistically significantly greater than under the 3 Strike method. A novel method using the Central Limit Theorem and Monte Carlo analysis is used to directly compare different air standard compliance classification methods by estimating the chronic daily intake of pollutants. This method allows air quality managers to rapidly see how individual classification methods may impact individual population groups, as well as to evaluate different pollutants based on dosage and exposure when complete health impacts are not known.
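
The chronic daily intake comparison above can be sketched with a small Monte Carlo loop. The standard CDI form is CDI = (C × IR × EF × ED) / (BW × AT); in the sketch EF and ED are folded into a single `exposure_days` term. The lognormal concentration model and every parameter value below are illustrative assumptions, not values from the study:

```python
import random
import statistics

def monte_carlo_cdi(conc_mu, conc_sigma, n=10000, seed=42,
                    intake_rate=20.0,          # IR: m^3 of air per day (assumed)
                    body_weight=70.0,          # BW: kg (assumed)
                    exposure_days=3 * 365,     # EF * ED over the 3-yr period
                    averaging_days=3 * 365):   # AT
    """Monte Carlo estimate of chronic daily intake: draw concentrations C
    from a lognormal fitted to the station record, apply
    CDI = (C * IR * EF * ED) / (BW * AT), and summarize the draws."""
    rng = random.Random(seed)
    cdis = []
    for _ in range(n):
        c = rng.lognormvariate(conc_mu, conc_sigma)   # sampled concentration
        cdis.append((c * intake_rate * exposure_days)
                    / (body_weight * averaging_days))
    return statistics.mean(cdis), statistics.stdev(cdis)
```

Running the same loop with each method's compliant concentration distribution gives the CDI pair the study compares.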

  16. Gold-standard for computer-assisted morphological sperm analysis.

    PubMed

    Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen

    2017-04-01

    Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, so no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is well suited to the problem of sperm head classification. We found that the correct classification rate was highly variable when discriminating among non-normal sperm heads. Using the Fourier descriptor and SVM, we achieved the best mean correct classification rate: only 49%. We conclude that SCIAN-MorphoSpermGS will provide a standard tool for evaluating characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
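
The gold-standard label in the record above is produced by majority voting among experts. A minimal sketch of that step is below; returning None on a tie (so the head can be adjudicated) is this sketch's assumption, since the abstract does not state a tie-breaking rule:

```python
from collections import Counter

def majority_label(expert_labels):
    """Gold-standard label by majority vote among expert annotations.
    Ties return None, signalling that adjudication is needed."""
    counts = Counter(expert_labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]
```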

  17. The influence of different classification standards of age groups on prognosis in high-grade hemispheric glioma patients.

    PubMed

    Chen, Jian-Wu; Zhou, Chang-Fu; Lin, Zhi-Xiong

    2015-09-15

    Although age is thought to correlate with the prognosis of glioma patients, the most appropriate age-group classification standard for evaluating prognosis has not been fully studied. This study aimed to investigate the influence of age-group classification standards on the prognosis of patients with high-grade hemispheric glioma (HGG). This retrospective study of 125 HGG patients used three different classification standards of age-groups (≤50 and >50 years old; ≤60 and >60 years old; ≤45, 45-65, and ≥65 years old) to evaluate the impact of age on prognosis. The primary end-point was overall survival (OS). The Kaplan-Meier method was applied for univariate analysis and the Cox proportional hazards model for multivariate analysis. Univariate analysis showed a significant correlation between OS and all three classification standards of age-groups, as well as between OS and pathological grade, gender, location of glioma, and regular chemotherapy and radiotherapy treatment. Multivariate analysis showed that the only independent predictors of OS were the ≤50 and >50 years old age-group classification, pathological grade, and regular chemotherapy. In summary, the most appropriate classification standard of age-groups as an independent prognostic factor was ≤50 and >50 years old. Pathological grade and chemotherapy were also independent predictors of OS in post-operative HGG patients. Copyright © 2015. Published by Elsevier B.V.
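
The univariate analysis in the record above relies on the Kaplan-Meier estimator. A compact sketch of the estimator is below (times in arbitrary units; event = 1 for death, 0 for censoring); the study itself would of course have used a statistics package:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. Returns [(t, S(t))] at each
    distinct event time; censored times only shrink the risk set."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e)
        at_risk = n - i
        while i < n and data[i][0] == t:      # consume all records at time t
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
    return curve
```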

  18. Annual Book of ASTM Standards, Part 23: Water; Atmospheric Analysis.

    ERIC Educational Resources Information Center

    American Society for Testing and Materials, Philadelphia, PA.

    Standards for water and atmospheric analysis are compiled in this segment, Part 23, of the American Society for Testing and Materials (ASTM) annual book of standards. It contains all current formally approved ASTM standard and tentative test methods, definitions, recommended practices, proposed methods, classifications, and specifications. One…

  19. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    PubMed

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework gives better classification performance than the hand-crafted feature based methods, achieving over 90% classification accuracy, sensitivity, specificity and precision.
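
The four figures reported in the record above (accuracy, sensitivity, specificity, precision) all follow directly from the binary confusion matrix. The sketch below uses illustrative counts, not the study's results:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity and precision
    from binary confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }
```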

  20. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods

    PubMed Central

    Burlina, Philippe; Billings, Seth; Joshi, Neil

    2017-01-01

    Objective To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Methods Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and “engineered” features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. Results The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). Conclusions This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification. PMID:28854220

  1. Quality assurance: The 10-Group Classification System (Robson classification), induction of labor, and cesarean delivery.

    PubMed

    Robson, Michael; Murphy, Martina; Byrne, Fionnuala

    2015-10-01

    Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardization of audit of events and outcomes, any differences in the sizes of groups, events, or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) be used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.
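
The TGCS described above assigns every delivery to exactly one of ten mutually exclusive groups from a handful of obstetric variables. The sketch below follows the usual decision order (multiple pregnancy, then malpresentation, then preterm, then previous cesarean, then parity and onset of labor); the parameter names and the string encoding of `onset` are this sketch's assumptions, and any real implementation should be checked against the WHO manual:

```python
def robson_group(parity, previous_cs, onset, gestation_weeks,
                 presentation, n_fetuses):
    """Hedged sketch of 10-Group (Robson) classification.
    onset: 'spontaneous' | 'induced' | 'prelabour_cs'."""
    if n_fetuses > 1:
        return 8                                   # multiple pregnancy
    if presentation in ('transverse', 'oblique'):
        return 9                                   # abnormal lie
    if presentation == 'breech':
        return 7 if parity > 0 else 6              # breech: multip / nullip
    # single cephalic pregnancies from here on
    if gestation_weeks < 37:
        return 10                                  # preterm cephalic
    if previous_cs:
        return 5                                   # previous cesarean, term
    if parity == 0:
        return 1 if onset == 'spontaneous' else 2  # nulliparous
    return 3 if onset == 'spontaneous' else 4      # multiparous, unscarred
```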

  2. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    PubMed

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  3. Cascaded deep decision networks for classification of endoscopic images

    NASA Astrophysics Data System (ADS)

    Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin

    2017-02-01

    Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic indication of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep neural network based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging: a polyp classification dataset and a tumor classification dataset. On both datasets we show that CDDN can outperform other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
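
The cascade idea in the abstract above (confident samples exit early; hard samples fall through to later expert networks) can be sketched independently of any deep-learning framework. The stage classifiers here are stand-in functions returning a (label, confidence) pair, not the paper's networks:

```python
def cascade_predict(classifiers, x, threshold=0.9):
    """Cascade sketch: each stage returns (label, confidence). A sample is
    emitted as soon as a stage is confident enough; otherwise it is
    deferred to the next, more specialised stage. The last stage always
    decides."""
    for clf in classifiers[:-1]:
        label, conf = clf(x)
        if conf >= threshold:
            return label
    return classifiers[-1](x)[0]
```

Training-time behaviour is analogous: samples the current network classifies with high confidence are discarded, and only the rest are used to train the next stage.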

  4. Workshop on Algorithms for Time-Series Analysis

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2012-04-01

    abstract-type="normal">SummaryThis Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.

  5. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    ...classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
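
Run-length coding, one of the two coders named in the record above, compresses long runs of identical map-pixel classes. A minimal encoder/decoder pair is shown below (Lempel-Ziv is omitted; its dictionary logic is considerably longer):

```python
from itertools import groupby

def rle_encode(pixels):
    """Run-length encode a sequence of class labels
    (e.g. classified map pixels) as (value, run_length) pairs."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

def rle_decode(runs):
    """Inverse of rle_encode: expand (value, run_length) pairs."""
    return [value for value, length in runs for _ in range(length)]
```

The compression ratio improves as classification makes neighbouring pixels more likely to share a class, which is exactly the effect the report measures.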

  6. Human Vision-Motivated Algorithm Allows Consistent Retinal Vessel Classification Based on Local Color Contrast for Advancing General Diagnostic Exams.

    PubMed

    Ivanov, Iliya V; Leitritz, Martin A; Norrenberg, Lars A; Völker, Michael; Dynowski, Marek; Ueffing, Marius; Dietter, Johannes

    2016-02-01

    Abnormalities of blood vessel anatomy, morphology, and ratio can serve as important diagnostic markers for retinal diseases such as AMD or diabetic retinopathy. Large cohort studies demand automated and quantitative image analysis of vascular abnormalities. We therefore developed an analytical software tool to enable automated, standardized classification of blood vessels supporting clinical reading. A dataset of 61 images was collected from a total of 33 women and 8 men with a median age of 38 years. The pupils were not dilated, and images were taken after dark adaptation. In contrast to current methods, in which classification is based on vessel profile intensity averages, and similar to human vision, local color contrast was chosen as a discriminator to allow artery-vein discrimination and arterial-venous ratio (AVR) calculation without vessel tracking. We achieved the best classification on our dataset (83% ± 1, standard error of the mean) using weighted lightness information from a combination of the red, green, and blue channels. Tested on an independent dataset, our method reached 89% correct classification, which, when benchmarked against conventional ophthalmologic classification, shows significantly improved classification scores. Our study demonstrates that vessel classification based on local color contrast can cope with inter- or intraimage lightness variability and allows consistent AVR calculation. We offer an open-source implementation of this method upon request, which can be integrated into existing tool sets and applied to general diagnostic exams.
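
The record above discriminates arteries from veins by the local color contrast of a weighted lightness rather than by absolute intensity. One simple way to express that idea is as Weber contrast of a weighted RGB lightness; the channel weights below are illustrative, not the weights the study fitted:

```python
def local_color_contrast(vessel_rgb, background_rgb, weights=(0.3, 0.6, 0.1)):
    """Weber contrast of weighted RGB lightness between a vessel pixel and
    its local background. Weights are illustrative stand-ins for a fitted
    red/green/blue combination."""
    def lightness(rgb):
        return sum(w * c for w, c in zip(weights, rgb))
    lb = lightness(background_rgb)
    return (lightness(vessel_rgb) - lb) / lb
```

Because the contrast is computed against the local background, a global brightness shift changes both terms together, which is what makes the measure robust to inter- and intraimage lightness variability.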

  7. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    NASA Astrophysics Data System (ADS)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. The specific responses exhibited by arteries and veins, however, may provide more precise diagnostic information; for example, diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. Analyzing vessel-type-specific morphologic modifications requires classifying the vessel network into arteries and veins. We previously described a method for identification and separation of retinal vessel trees, i.e., structural mapping. We therefore propose artery-venous classification based on structural mapping and on identification of color properties prominent to the vessel types. The mean and standard deviation of the green-channel and hue-channel intensities are analyzed in a region of interest around each centerline pixel of a vessel. The vector of color properties extracted from each centerline pixel is assigned to one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of artery or vein. The classification results are compared with manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential in artery-venous classification and the respective morphology analysis.
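
The clustering step can be illustrated with a minimal fuzzy C-means implementation. This is a generic NumPy sketch of the standard algorithm, not the authors' code; the fuzzifier `m` and iteration count are assumed defaults:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means sketch (not the authors' implementation).

    X: (n_samples, n_features) feature vectors, e.g. the per-centerline-pixel
    color statistics (mean/std of green and hue intensities). Returns the
    membership matrix U and the cluster centers.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

A hard artery/vein assignment per centerline pixel is then `U.argmax(axis=1)`, after which the per-vessel majority and the crossing property decide the final label.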

  8. Preliminary report on the International Conference for the Development of Standards for the Treatment of Anorectal Malformations.

    PubMed

    Holschneider, Alexander; Hutson, John; Peña, Albert; Beket, Elhamy; Chatterjee, Subir; Coran, Arnold; Davies, Michael; Georgeson, Keith; Grosfeld, Jay; Gupta, Devendra; Iwai, Naomi; Kluth, Dieter; Martucciello, Giuseppe; Moore, Samuel; Rintala, Risto; Smith, E Durham; Sripathi, D V; Stephens, Douglas; Sen, Sudipta; Ure, Benno; Grasshoff, Sabine; Boemers, Thomas; Murphy, Feilin; Söylet, Yunus; Dübbers, Martin; Kunst, Marc

    2005-10-01

    Anorectal malformations (ARM) are common congenital anomalies seen throughout the world. Comparison of outcome data has been hindered by confusion related to classification and assessment systems. The goals of the Krickenbeck Conference on ARM were to develop standards for an International Classification of ARM, based on a modification of fistula type with the addition of rare and regional variants, and to design a system for comparable follow-up studies. Lesions were classified into major clinical groups based on the fistula location (perineal, recto-urethral, recto-vesical, vestibular), cloacal lesions, those with no fistula, and anal stenosis. Rare and regional variants included pouch colon, rectal atresia or stenosis, rectovaginal fistula, H-fistula, and others. Groups would be analyzed according to the type of procedure performed, stratified for confounding associated conditions such as sacral anomalies and tethered cord. A standard method for postoperative assessment of continence was determined. A new international diagnostic classification system, operative groupings, and a method of postoperative assessment of continence were developed by consensus of a large contingent of participants experienced in the management of patients with ARM. These methods should allow for a common standardization of diagnosis and comparison of postoperative results.

  9. Promoting consistent use of the communication function classification system (CFCS).

    PubMed

    Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley

    2016-01-01

    We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario, Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP. The intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. The intervention effectively increased clinicians' understanding of the methods for using the CFCS, their ability to make correct classifications, and their intention to use the tool in the standardized way in the future.

  10. Building the United States National Vegetation Classification

    USGS Publications Warehouse

    Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.

    2012-01-01

    The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.

  11. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates a novel classification method, the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, improving the classification of data with redundant inputs. The method is examined against two traditional approaches, neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically; these important features can then be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification successfully found the important features in the dataset, and how this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.
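
A minimal sketch of the weighted-distance idea behind the FWSOM, assuming the per-feature weights are supplied externally (the published method derives them automatically from the converged SOM's topology, which is not reproduced here):

```python
import numpy as np

def train_weighted_som(X, grid=(5, 5), feature_w=None, n_iter=500,
                       lr0=0.5, sigma0=2.0, seed=0):
    """Sketch of a SOM whose distance metric applies per-feature weights.

    `feature_w` is simply supplied by the caller, so this illustrates the
    weighted-distance idea only, not the published FWSOM algorithm.
    """
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    w = rng.random((n_nodes, X.shape[1]))          # codebook vectors
    fw = np.ones(X.shape[1]) if feature_w is None else np.asarray(feature_w)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(n_iter):
        lr = lr0 * (1 - t / n_iter)                # decaying learning rate
        sigma = sigma0 * (1 - t / n_iter) + 1e-3   # shrinking neighborhood
        x = X[rng.integers(len(X))]
        # best-matching unit under the feature-weighted distance
        bmu = np.argmin((fw * (w - x) ** 2).sum(axis=1))
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def som_quantization_error(X, w, fw):
    d = ((fw * (X[:, None, :] - w[None]) ** 2).sum(axis=2)) ** 0.5
    return d.min(axis=1).mean()
```

Setting a redundant feature's weight near zero removes its influence on the best-matching-unit search, which is the mechanism by which down-weighting redundant inputs can improve the learned map.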

  12. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  13. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
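
The final stage of the pipeline, stacking the fused spectral and spatial features and training a random forest, might look like the following sketch. The linear fusion weight `alpha` and the feature arrays are illustrative stand-ins for the data-field "radiation" features, which are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_fused(spectral, spatial, labels, alpha=0.5, seed=0):
    """Fuse per-pixel spectral and spatial features linearly, then fit an RF.

    spectral: (n_pixels, n_spectral_features)
    spatial:  (n_pixels, n_spatial_features)
    `alpha` is an assumed fusion weight, not the published model.
    """
    fused = np.hstack([alpha * spectral, (1 - alpha) * spatial])
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(fused, labels)
    return rf
```

The point of the paper is that the fused features carry interaction information that plain stacking lacks; the random forest at the end is the standard part, shown here for completeness.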

  14. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
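
The maximum-likelihood baseline assigns each pixel to the class whose Gaussian density is largest. This is a generic NumPy implementation of that standard technique, not the paper's code:

```python
import numpy as np

def fit_ml_classifier(X, y):
    """Per-class Gaussian parameters for maximum-likelihood classification."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def predict_ml(X, params):
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, icov, logdet = params[c]
        d = X - mu
        # log-likelihood up to a constant: -0.5 * (log|S| + d' S^-1 d)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d)))
    return np.array(classes)[np.argmax(scores, axis=0)]
```

Each class decision boundary is quadratic in the features, which is why the decision-region plots mentioned in the abstract are informative about where this classifier fails relative to the network.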

  15. Wavelet images and Chou's pseudo amino acid composition for protein classification.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2012-08-01

    The last decade has seen an explosion in the collection of protein data. To actualize the potential offered by this wealth of data, it is important to develop machine systems capable of classifying and extracting features from proteins. Reliable machine systems for protein classification offer many benefits, including the promise of finding novel drugs and vaccines. In developing our system, we analyze and compare several feature extraction methods used in protein classification that are based on the calculation of texture descriptors starting from a wavelet representation of the protein. We then feed these texture-based representations of the protein into an AdaBoost ensemble of neural networks or a support vector machine classifier. In addition, we perform experiments that combine our feature extraction methods with a standard method based on Chou's pseudo amino acid composition. Using several datasets, we show that our best approach outperforms standard methods. The Matlab code of the proposed protein descriptors is available at http://bias.csr.unibo.it/nanni/wave.rar.
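
A minimal sketch of type-1 pseudo amino acid composition: the first 20 components are normalized residue frequencies and the last λ components are sequence-order correlation factors. The single physicochemical property scale below is a hypothetical placeholder; Chou's formulation averages several normalized real property scales:

```python
# Sketch of type-1 pseudo amino acid composition (PseAAC).
AMINO = 'ACDEFGHIKLMNPQRSTVWY'
# illustrative property values (hypothetical, already normalized)
PROP = {a: i / 19.0 for i, a in enumerate(AMINO)}

def pse_aac(seq, lam=3, w=0.05):
    """Return a (20 + lam)-dimensional PseAAC-style vector for `seq`."""
    seq = [a for a in seq if a in PROP]          # drop unknown residues
    freqs = [seq.count(a) / len(seq) for a in AMINO]
    # sequence-order correlation factors theta_1 .. theta_lam
    thetas = []
    for k in range(1, lam + 1):
        pairs = [(PROP[seq[i]] - PROP[seq[i + k]]) ** 2
                 for i in range(len(seq) - k)]
        thetas.append(sum(pairs) / len(pairs))
    denom = 1.0 + w * sum(thetas)
    return [f / denom for f in freqs] + [w * t / denom for t in thetas]
```

By construction the components sum to 1, so the vector behaves like a composition while still encoding sequence order through the θ terms.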

  16. Classification of standard-like heterotic-string vacua

    NASA Astrophysics Data System (ADS)

    Faraggi, Alon E.; Rizos, John; Sonmez, Hasan

    2018-02-01

    We extend the free fermionic classification methodology to the class of standard-like heterotic-string vacua, in which the SO(10) GUT symmetry is broken at the string level to SU(3) × SU(2) × U(1)^2. The space of GGSO free phase configurations in this case is vastly enlarged compared to the corresponding SO(6) × SO(4) and SU(5) × U(1) vacua. Extracting substantial numbers of phenomenologically viable models therefore requires a modification of the classification methods. This is achieved by identifying conditions on the GGSO projection coefficients, which are satisfied at the SO(10) level by random phase configurations, and that lead to three-generation models with the SO(10) symmetry broken to the SU(3) × SU(2) × U(1)^2 subgroup. Around each of these fertile SO(10) configurations, we perform a complete classification of standard-like models by adding the SO(10) symmetry-breaking basis vectors and scanning all the associated GGSO phases. Following this methodology we are able to generate some 10^7 three-generation standard-like models. We present the results of the classification and one exemplary model with distinct phenomenological properties compared to previous SLM constructions.

  17. Evidence for the Existing American Nurses Association-Recognized Standardized Nursing Terminologies: A Systematic Review

    PubMed Central

    Tastan, Sevinc; Linch, Graciele C. F.; Keenan, Gail M.; Stifter, Janet; McKinney, Dawn; Fahey, Linda; Dunn Lopez, Karen; Yao, Yingwei; Wilkie, Diana J.

    2014-01-01

    Objective To determine the state of the science for the five standardized nursing terminology sets in terms of level of evidence and study focus. Design Systematic review. Data sources A keyword search of the PubMed, CINAHL, and EMBASE databases from the 1960s to March 19, 2012 revealed 1,257 publications. Review Methods From abstract review we removed duplicate articles, those not in English or with no identifiable standardized nursing terminology, and those with a low level of evidence. From full-text review of the remaining 312 articles, eight trained raters used a coding system to record standardized nursing terminology names, publication year, country, and study focus. Inter-rater reliability confirmed the level of evidence. We analyzed the coded results. Results On average there were 4 studies per year between 1985 and 1995. The yearly number increased to 14 for the decade from 1996 to 2005, 21 between 2006 and 2010, and 25 in 2011. Investigators conducted the research in 27 countries. By evidence level, 72.4% of the 312 studies were descriptive, 18.9% were observational, and 8.7% were intervention studies. Of the 312 reports, 72.1% focused on North American Nursing Diagnosis-International, Nursing Interventions Classification, Nursing Outcome Classification, or some combination of those three standardized nursing terminologies; 9.6% on the Omaha System; 7.1% on the International Classification for Nursing Practice; 1.6% on the Clinical Care Classification/Home Health Care Classification; 1.6% on the Perioperative Nursing Data Set; and 8.0% on two or more standardized nursing terminology sets.
There were studies in all 10 foci categories including those focused on concept analysis/classification infrastructure (n = 43), the identification of the standardized nursing terminology concepts applicable to a health setting from registered nurses’ documentation (n = 54), mapping one terminology to another (n = 58), implementation of standardized nursing terminologies into electronic health records (n = 12), and secondary use of electronic health record data (n = 19). Conclusions Findings reveal that the number of standardized nursing terminology publications increased primarily since 2000 with most focusing on North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification. The majority of the studies were descriptive, qualitative, or correlational designs that provide a strong base for understanding the validity and reliability of the concepts underlying the standardized nursing terminologies. There is evidence supporting the successful integration and use in electronic health records for two standardized nursing terminology sets: (1) the North American Nursing Diagnosis-International, Nursing Interventions Classification, and Nursing Outcome Classification set; and (2) the Omaha System set. Researchers, however, should continue to strengthen standardized nursing terminology study designs to promote continuous improvement of the standardized nursing terminologies and use in clinical practice. PMID:24412062

  18. Identification of an Efficient Gene Expression Panel for Glioblastoma Classification

    PubMed Central

    Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni

    2016-01-01

    We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
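
The GARF idea, evolving binary gene masks with random-forest accuracy as the fitness function, can be sketched as below. Population size, mutation rate, generation count, and the cross-validated fitness measure are illustrative choices, not the published settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_select(X, y, pop=12, gens=6, mut=0.1, seed=0):
    """Evolve a boolean feature (gene) mask maximizing RF accuracy."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5           # random initial panels

    def fitness(mask):
        if not mask.any():
            return 0.0
        rf = RandomForestClassifier(n_estimators=30, random_state=0)
        return cross_val_score(rf, X[:, mask], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        children = [masks[scores.argmax()].copy()]        # elitism
        while len(children) < pop:
            i, j = rng.integers(pop, size=2)              # tournament picks
            a = masks[i] if scores[i] >= scores[j] else masks[j]
            k, l = rng.integers(pop, size=2)
            b = masks[k] if scores[k] >= scores[l] else masks[l]
            cross = rng.random(n) < 0.5                   # uniform crossover
            child = np.where(cross, a, b)
            child ^= rng.random(n) < mut                  # bit-flip mutation
            children.append(child)
        masks = np.array(children)
    scores = np.array([fitness(m) for m in masks])
    return masks[scores.argmax()]
```

On a real expression matrix one would start from the full signature (e.g. the 840 genes) and let the mask shrink toward a small panel whose fitness matches the full set.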

  19. Classification of Magnetic Nanoparticle Systems—Synthesis, Standardization and Analysis Methods in the NanoMag Project

    PubMed Central

    Bogren, Sara; Fornara, Andrea; Ludwig, Frank; del Puerto Morales, Maria; Steinhoff, Uwe; Fougt Hansen, Mikkel; Kazakova, Olga; Johansson, Christer

    2015-01-01

    This study presents classification of different magnetic single- and multi-core particle systems using their measured dynamic magnetic properties together with their nanocrystal and particle sizes. The dynamic magnetic properties are measured with AC (dynamical) susceptometry and magnetorelaxometry and the size parameters are determined from electron microscopy and dynamic light scattering. Using these methods, we also show that the nanocrystal size and particle morphology determines the dynamic magnetic properties for both single- and multi-core particles. The presented results are obtained from the four year EU NMP FP7 project, NanoMag, which is focused on standardization of analysis methods for magnetic nanoparticles. PMID:26343639

  20. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing radiographic testing systems. This paper proposes a novel weld defect classification method based on an information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined from the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  1. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing radiographic testing systems. This paper proposes a novel weld defect classification method based on an information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined from the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
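
The fusion step relies on Dempster's rule of combination. Below is a generic sketch of that rule; the paper's actual mass functions are built from the weld-defect features and are omitted here:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses (e.g. sets of candidate
    defect classes) to belief mass. Masses assigned to disjoint hypotheses
    are treated as conflict and normalized away.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}
```

Evidence from each feature-derived mass function is folded in pairwise with this rule; the class with the largest combined mass is the fused decision.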

  2. SU-E-I-59: Investigation of the Usefulness of a Standard Deviation and Mammary Gland Density as Indexes for Mammogram Classification.

    PubMed

    Takarabe, S; Yabuuchi, H; Morishita, J

    2012-06-01

    To investigate the usefulness of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region relative to the whole mammary gland region as features for classifying mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four categories of breast composition by an experienced breast radiologist, and the results of this classification were regarded as a gold standard. First, the whole mammary region in a breast was divided into two regions, a high-density mammary gland region and a low/iso-density mammary gland region, using a threshold value obtained from the pixel values corresponding to the pectoral muscle region. Then the percentage of the high-density mammary gland region relative to the whole mammary gland region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary gland region was calculated as an index of the intermingling of mammary glands and fat. Finally, all mammograms were classified using the combination of the percentage of the high-density mammary gland region and the standard deviation of each image. The agreement rate between our proposed method and the gold standard was 86% (31/36). This result signifies that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region relative to the whole mammary gland region was useful as features for classifying mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.
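
The two proposed features are straightforward to compute. In this sketch the pectoral-muscle-derived threshold is simplified to a supplied scalar, and the gland-region segmentation is assumed to have been done already:

```python
import numpy as np

def density_features(gland_pixels, threshold):
    """Return (standard deviation, percent high-density) for a gland region.

    gland_pixels: pixel values of the (already segmented) whole mammary
    gland region; threshold: cutoff separating high-density glands,
    simplified here to a caller-supplied scalar.
    """
    gland = np.asarray(gland_pixels, dtype=float)
    high = gland > threshold                       # high-density glands
    pct_high = 100.0 * high.sum() / gland.size
    return gland.std(), pct_high
```

The standard deviation captures how intermingled glands and fat are, while the percentage captures how much of the region is dense; the abstract's classifier combines the two.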

  3. A label distance maximum-based classifier for multi-label learning.

    PubMed

    Liu, Xiaoli; Bao, Hang; Zhao, Dazhe; Cao, Peng

    2015-01-01

    Multi-label classification is useful in many bioinformatics tasks such as gene function prediction and protein site localization. This paper presents an improved neural network algorithm, the Max Label Distance Back Propagation algorithm, for multi-label classification. The method modifies the total error function of standard BP by adding a penalty term, realized by maximizing the distance between the positive and negative labels. Extensive experiments were conducted to compare this method against state-of-the-art multi-label methods on three popular bioinformatics benchmark datasets. The results illustrate that the proposed method is more effective for bioinformatics multi-label classification than commonly used techniques.
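
The modified objective can be sketched as the standard squared error minus a term rewarding the gap between positive- and negative-label outputs. The exact penalty form and the trade-off weight `lam` are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def multilabel_error(outputs, targets, lam=0.1):
    """Standard BP squared error plus an assumed label-distance penalty.

    outputs: network outputs per label; targets: 1 for positive labels,
    0 for negative. Subtracting lam * margin rewards a large gap between
    the lowest positive-label output and the highest negative-label output.
    """
    outputs, targets = np.asarray(outputs), np.asarray(targets)
    sq_err = 0.5 * np.sum((outputs - targets) ** 2)
    pos, neg = outputs[targets == 1], outputs[targets == 0]
    if len(pos) and len(neg):
        margin = pos.min() - neg.max()   # label distance to maximize
        sq_err -= lam * margin
    return sq_err
```

Gradients of this objective with respect to the outputs plug into ordinary backpropagation, which is what makes it a drop-in modification of standard BP.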

  4. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    PubMed

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

    The classification of cancer subtypes is of great importance to cancer diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations by using cascaded ensembles of decision trees. It has been shown that the deep forest model has competitive or even better performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble diversity challenges when dealing with small-sample-size, high-dimensional biological data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biological datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions. First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage ensemble diversity; meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in the cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods in cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model working on small-scale data. Our model provides a useful approach to the classification of cancer subtypes by using deep learning on high-dimensional, small-scale biological data.

  5. A new hierarchical method for inter-patient heartbeat classification using random projections and RR intervals

    PubMed Central

    2014-01-01

    Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in clinical practice. PMID:24981916
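
The VEB-detection stage (random projections feeding an SVM ensemble) can be sketched as follows. Ensemble size, projection dimension, and majority voting are illustrative choices, not the published configuration:

```python
import numpy as np
from sklearn.svm import SVC

def train_rp_svm_ensemble(X, y, n_models=5, dim=8, seed=0):
    """Train SVMs, each on a different random projection of the features.

    X: (n_beats, n_features) heartbeat feature vectors; y: 0/1 labels
    (e.g. VEB vs. non-VEB). Returns (projection, classifier) pairs.
    """
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        # Gaussian random projection to `dim` dimensions
        P = rng.normal(size=(X.shape[1], dim)) / np.sqrt(dim)
        clf = SVC(kernel='rbf').fit(X @ P, y)
        models.append((P, clf))
    return models

def predict_ensemble(models, X):
    votes = np.array([clf.predict(X @ P) for P, clf in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote
```

The SVEB stage in the paper is simpler still: compare an RR-interval ratio to a fixed threshold, so only the VEB stage needs a learned model.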

  6. On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.

    PubMed

    Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing

    2018-03-19

    To address the problem of on-line measurement and classification of algae, a method of algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, discrete three-dimensional standard spectra of the five categories were built, and recognition, classification and concentration prediction of the algae categories were realized by coupling the discrete three-dimensional fluorescence spectra with non-negative weighted least-squares linear regression analysis. The results show that similarities between the discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate in pure samples by discrete three-dimensional fluorescence spectra is improved by 1.38%, and the recovery rate and classification accuracy in pure diatom samples by 34.1% and 46.8%, respectively; the recognition accuracy rate of mixed samples by discrete three-dimensional fluorescence spectra is enhanced by 26.1%, the recovery rate of mixed samples with Chlorophyta by 37.8%, and the classification accuracy of mixed samples with diatoms by 54.6%.
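
    The unmixing step, pairing per-category standard spectra with non-negative least-squares regression, can be sketched as below. The spectra are random stand-ins, and plain (unweighted) NNLS is used here, whereas the paper uses a weighted variant.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical "standard spectra": columns are the discrete 3-D fluorescence
# signatures of 3 algae categories, flattened to 20-point vectors.
rng = np.random.default_rng(1)
standards = np.abs(rng.normal(size=(20, 3)))  # one column per category

# A mixed sample: 0.7 of category 0 plus 0.3 of category 2, noise-free.
true_conc = np.array([0.7, 0.0, 0.3])
sample = standards @ true_conc

# Non-negative least squares recovers the per-category contributions;
# the largest coefficient identifies the dominant (recognized) category.
conc, residual = nnls(standards, sample)
dominant = int(np.argmax(conc))
```

    With real, noisy spectra the recovered concentrations are only approximate, which is exactly what the recovery-rate figures above quantify.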

  7. Strength Analysis on Ship Ladder Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Budianto; Wahyudi, M. T.; Dinata, U.; Ruddianto; Eko P., M. M.

    2018-01-01

    A ship's structure must be designed in accordance with the rules of the applicable classification standards. In this case, the design of a ladder (staircase) on a ferry was reviewed against the loads experienced during ship operations, both while sailing and in port. The classification rules for ship design prescribe the calculation of structural components, and these calculations can be performed using the Finite Element Method. The classification regulations used in the design of this ferry are those of BKI (Bureau of Classification Indonesia), so the material composition and mechanical properties of the materials must also follow the classification of the vessel. The structural analysis used a program package based on the Finite Element Method. The analysis and simulation showed that the ladder can withstand a load of 140 kg under static, dynamic, and impact conditions. The resulting safety factors indicate that the structure is safe without being excessively strong.

  8. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    PubMed Central

    Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-01-01

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992
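
    Of the four detectors compared, local maxima search is the simplest to sketch. Below is a minimal, generic version on a toy matrix, not the implementation evaluated in the paper; the neighbourhood size and intensity floor are illustrative parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima_peaks(ims_map, size=3, min_intensity=1.0):
    """Return (row, col) indices of points that equal the maximum of their
    size x size neighbourhood and exceed an intensity floor."""
    neighbourhood_max = maximum_filter(ims_map, size=size, mode="constant")
    mask = (ims_map == neighbourhood_max) & (ims_map >= min_intensity)
    return list(zip(*np.nonzero(mask)))

# Toy IMS-like matrix (retention time x drift time) with two obvious peaks.
m = np.zeros((10, 10))
m[2, 3] = 5.0
m[7, 6] = 4.0
peaks = local_maxima_peaks(m)
```

    Real MCC/IMS maps are noisy, which is why the paper compares this baseline against model-based detectors such as PME.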

  9. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    PubMed

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  10. Development of a database of health insurance claims: standardization of disease classifications and anonymous record linkage.

    PubMed

    Kimura, Shinya; Sato, Toshihiko; Ikeda, Shunya; Noda, Mitsuhiko; Nakayama, Takeo

    2010-01-01

    Health insurance claims (ie, receipts) record patient health care treatments and expenses and, although created for the health care payment system, are potentially useful for research. Combining different types of receipts generated for the same patient would dramatically increase the utility of these receipts. However, technical problems, including standardization of disease names and classifications, and anonymous linkage of individual receipts, must be addressed. In collaboration with health insurance societies, all information from receipts (inpatient, outpatient, and pharmacy) was collected. To standardize disease names and classifications, we developed a computer-aided post-entry standardization method using a disease name dictionary based on International Classification of Diseases (ICD)-10 classifications. We also developed an anonymous linkage system by using an encryption code generated from a combination of hash values and stream ciphers. Using different sets of the original data (data set 1: insurance certificate number, name, and sex; data set 2: insurance certificate number, date of birth, and relationship status), we compared the percentage of successful record matches obtained by using data set 1 to generate key codes with the percentage obtained when both data sets were used. The dictionary's automatic conversion of disease names successfully standardized 98.1% of approximately 2 million new receipts entered into the database. The percentage of anonymous matches was higher for the combined data sets (98.0%) than for data set 1 (88.5%). The use of standardized disease classifications and anonymous record linkage substantially contributed to the construction of a large, chronologically organized database of receipts. This database is expected to aid in epidemiologic and health services research using receipt information.
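
    The anonymous linkage idea, deriving one shared key code from identifying fields so that receipts for the same person match without exposing identity, can be sketched with a keyed hash. The field names, salt, and use of SHA-256 here are illustrative stand-ins for the paper's hash-plus-stream-cipher scheme.

```python
import hashlib

def linkage_key(insurance_no, name, sex, salt="site-secret"):
    """One-way anonymous linkage key from identifying fields (data set 1).
    A salted hash stands in for the paper's hash + stream-cipher encryption."""
    record = "|".join([insurance_no, name, sex, salt])
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# The same person produces the same key across receipt types ...
k1 = linkage_key("12345", "Yamada Taro", "M")
k2 = linkage_key("12345", "Yamada Taro", "M")
# ... while any field change yields an unlinkable key.
k3 = linkage_key("12345", "Yamada Taro", "F")
```

    Because the key is one-way, records can be joined chronologically per patient while the database itself stays anonymous.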

  11. Territories typification technique with use of statistical models

    NASA Astrophysics Data System (ADS)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Typification of territories is required for the solution of many problems, and the results of geological zoning obtained by different methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standardized classification indicator for geological zoning. A probabilistic approach was used in the course of the research. In order to increase the reliability of the classification of geological information, the authors suggest using the complex multidimensional probabilistic indicator P_K as one classification criterion; the second criterion chosen is the multidimensional standardized classification indicator Z. Both can serve as characteristics for classification in geological-engineering zoning. The indicators P_K and Z are in good correlation: the correlation coefficient for the entire territory, regardless of structural solidity, is r = 0.95, so either indicator can be used in geological-engineering zoning. The suggested method has been tested and a schematic zoning map has been drawn.
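
    As an illustration only (the abstract gives no formulas for P_K or Z), a multidimensional standardized indicator and its correlation with a second indicator might be computed as below; the data and the stand-in for P_K are entirely synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical zoning data: 50 sites x 4 geological attributes.
data = rng.normal(loc=10.0, scale=2.0, size=(50, 4))

# Multidimensional standardized indicator Z: average the per-attribute
# z-scores so every attribute contributes on a common scale.
zscores = (data - data.mean(axis=0)) / data.std(axis=0)
Z = zscores.mean(axis=1)

# Agreement between Z and a probabilistic indicator P_K would be checked
# with a correlation coefficient, as in the paper (P_K faked here).
P_K = Z + rng.normal(scale=0.1, size=Z.shape)
r = np.corrcoef(Z, P_K)[0, 1]
```
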

  12. Semi-supervised SVM for individual tree crown species classification

    NASA Astrophysics Data System (ADS)

    Dalponte, Michele; Ene, Liviu Theodor; Marconcini, Mattia; Gobakken, Terje; Næsset, Erik

    2015-12-01

    In this paper a novel semi-supervised SVM classifier is presented, specifically developed for tree species classification at the individual tree crown (ITC) level. In ITC tree species classification, all the pixels belonging to an ITC should have the same label. This assumption is used in the learning of the proposed semi-supervised SVM classifier (ITC-S3VM). The method exploits the information contained in the unlabeled ITC samples in order to improve the classification accuracy of a standard SVM, and it can be easily implemented using freely available software libraries. The datasets used in this study include hyperspectral imagery and laser scanning data acquired over two boreal forest areas characterized by the presence of three information classes (Pine, Spruce, and Broadleaves). The experimental results quantify the effectiveness of the proposed approach, which provides classification accuracies significantly higher (from 2% to above 27%) than those obtained by the standard supervised SVM and by a state-of-the-art semi-supervised SVM (S3VM). In particular, even when the number of training samples is reduced (from 100% to 25% and from 100% to 5% for the two datasets, respectively), the proposed method still exhibits results comparable to those of a supervised SVM trained with the full available training set. This property makes the method particularly suitable for practical forest inventory applications, in which the collection of in situ information can be very expensive in terms of both cost and time.

  13. Development of the Gross Motor Function Classification System (1997)

    ERIC Educational Resources Information Center

    Morris, Christopher

    2008-01-01

    To address the need for a standardized system to classify the gross motor function of children with cerebral palsy, the authors developed a five-level classification system analogous to the staging and grading systems used in medicine. Nominal group process and Delphi survey consensus methods were used to examine content validity and revise the…

  14. Multiple Signal Classification for Determining Direction of Arrival of Frequency Hopping Spread Spectrum Signals

    DTIC Science & Technology

    2014-03-27

    (Abstract not recoverable: only front-matter fragments of this report were captured, comprising table-of-contents and list-of-figures entries and acronym definitions, including MTM, multiple taper method; MUSIC, multiple signal classification; MVDR, minimum variance distortionless response; PSK, phase shift keying; and QAM, quadrature amplitude modulation.)

  15. [A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].

    PubMed

    Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng

    2015-12-01

    A distance metric defines how the distance between two different spectra is calculated, and it is an important issue in spectroscopic survey data processing: classification, clustering, parameter measurement and outlier mining of spectral data are all built on it, so the choice of distance measure affects the performance of all of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. Motivated by this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measurement method for stellar spectra, named the Residual Distribution Distance, is proposed. Different from traditional distance metrics for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The method can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subcategory classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra in classification more effectively than other methods, and that it can be well applied in other related applications. 
This paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method. The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance, while for SNR above 10 the signal-to-noise ratio has little effect on classification performance.
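
    The metric as described reduces to a few lines. The unit-maximum scaling below is one plausible reading of "normalize to the same scale", assumed for illustration; the spectra are synthetic.

```python
import numpy as np

def residual_distribution_distance(spec_a, spec_b):
    """Scale both spectra to a common scale (unit maximum here), then use
    the standard deviation of their residual as the distance."""
    a = spec_a / np.max(spec_a)
    b = spec_b / np.max(spec_b)
    return np.std(a - b)

wave = np.linspace(0, 1, 500)
star1 = 1.0 + 0.3 * np.sin(8 * np.pi * wave)
star2 = 2.0 * (1.0 + 0.3 * np.sin(8 * np.pi * wave))  # same shape, brighter
star3 = 1.0 + 0.3 * np.sin(10 * np.pi * wave)          # different features

d_same = residual_distribution_distance(star1, star2)  # ~0: same "type"
d_diff = residual_distribution_distance(star1, star3)  # clearly larger
```

    Because the scaling removes overall flux differences, the distance responds only to differences in spectral shape, which is what subclass classification needs.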

  16. 7 CFR 27.36 - Classification and Micronaire determinations based on official standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification and Micronaire determinations based on... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.36 Classification and Micronaire...

  17. 7 CFR 27.36 - Classification and Micronaire determinations based on official standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification and Micronaire determinations based on... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.36 Classification and Micronaire...

  18. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    NASA Astrophysics Data System (ADS)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters with chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods affect exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000-14000 cm⁻¹. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the size of each class kept equal to 2/3 of the smallest class. A t-statistic was employed as the variable selection method in order to select the variables significant for the classification models. The data pre-processing was evaluated using the modified silhouette width (mSW), PCA and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance; the effects of row scaling, column standardisation and single scaling with Standard Normal Variate are indicated by mSW and %CC. With a two-PC model, all five classifiers except Quadratic Distance Analysis gave high %CC.
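
    The SNV transform named above is simple to state: each spectrum is centred on its own mean and scaled by its own standard deviation, removing additive offsets and multiplicative scatter effects. A minimal sketch on toy data:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: centre each spectrum (row) on its own mean
    and scale by its own standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two copies of the same spectrum with different offset and gain
# collapse onto one curve after SNV.
base = np.array([0.2, 0.5, 1.0, 0.7, 0.3])
shifted = 2.5 * base + 0.4
out = snv(np.vstack([base, shifted]))
```
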

  19. An Expert System for Classifying Stars on the MK Spectral Classification System

    NASA Astrophysics Data System (ADS)

    Corbally, Christopher J.; Gray, R. O.

    2013-01-01

    We will describe an expert computer system designed to classify stellar spectra on the MK Spectral Classification system employing methods similar to those of humans who make direct comparison with the MK classification standards. Like an expert human classifier, MKCLASS first comes up with a rough spectral type, and then refines that type by direct comparison with MK standards drawn from a standards library using spectral criteria appropriate to the spectral class. Certain common spectral-type peculiarities can also be detected by the program. The program is also capable of identifying WD spectra and carbon stars and giving appropriate (but currently approximate) spectral types on the relevant systems. We will show comparisons between spectral types (including luminosity types) performed by MKCLASS and humans. The program currently is capable of competent classifications in the violet-green region, but plans are underway to extend the spectral criteria into the red and near-infrared regions. Two standard libraries with resolutions of 1.8 and 3.6Å are now available, but a higher-resolution standard library, using the new spectrograph on the Vatican Advanced Technology Telescope, is currently under preparation. Once that library is available, MKCLASS and the spectral libraries will be made available to the astronomical community.

  20. Application of FT-IR Classification Method in Silica-Plant Extracts Composites Quality Testing

    NASA Astrophysics Data System (ADS)

    Bicu, A.; Drumea, V.; Mihaiescu, D. E.; Purcareanu, B.; Florea, M. A.; Trică, B.; Vasilievici, G.; Draga, S.; Buse, E.; Olariu, L.

    2018-06-01

    Our present work concerns the validation and quality testing of mesoporous silica - plant extract composites, in order to support the standardization process of plant-based pharmaceutical products. The synthesis of the silica support was performed using a TEOS-based synthetic route with CTAB as a template, at room temperature and normal pressure. The silica support was analyzed by advanced characterization methods (SEM, TEM, BET, DLS and FT-IR), and loaded with standardized Calendula officinalis and Salvia officinalis extracts. Further desorption studies were performed in order to demonstrate the sustained-release properties of the final materials. Intermediate and final product identification was performed by an FT-IR classification method, using the MID range of the IR spectra and statistically representative samples from repeated synthetic stages. The obtained results recommend this analytical method as a fast and cost-effective alternative to classic identification methods.

  1. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification decisions...

  2. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification decisions...

  3. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification decisions...

  4. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification decisions...

  5. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification decisions...

  6. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery. Caribbean Journal of Science. 44(2):175-198.

    Treesearch

    E.H. Helmer; T.A. Kennaway; D.H. Pedreros; M.L. Clark; H. Marcano-Vega; L.L. Tieszen; S.R. Schill; C.M.S. Carrington

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius...

  7. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows one both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10⁹ points.
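
    One common way to realize such multi-scale per-point neighborhoods, sketched here under the assumption of k-nearest-neighbor neighborhoods and covariance-eigenvalue features (the paper's exact feature set may differ), is:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.uniform(size=(500, 3))     # toy unstructured point cloud
tree = cKDTree(points)

def multiscale_features(idx, ks=(8, 16, 32)):
    """Per-point covariance eigenvalues at several neighbourhood sizes:
    a standard building block for multi-scale point-cloud features."""
    feats = []
    for k in ks:
        _, nn = tree.query(points[idx], k=k)   # k nearest neighbours
        nbrs = points[nn]
        cov = np.cov(nbrs.T)                   # 3x3 local covariance
        eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
        feats.extend(eig / eig.sum())          # normalised eigenvalues
    return np.array(feats)

f = multiscale_features(0)
```

    The normalised eigenvalues encode whether the local neighbourhood is linear, planar, or volumetric at each scale, which is informative for both classification and contour candidates.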

  8. Feasibility of Active Machine Learning for Multiclass Compound Classification.

    PubMed

    Lang, Tobias; Flachsenberg, Florian; von Luxburg, Ulrike; Rarey, Matthias

    2016-01-25

    A common task in the hit-to-lead process is classifying sets of compounds into multiple, usually structural classes, which build the groundwork for subsequent SAR studies. Machine learning techniques can be used to automate this process by learning classification models from training compounds of each class. Gathering class information for compounds can be cost-intensive as the required data needs to be provided by human experts or experiments. This paper studies whether active machine learning can be used to reduce the required number of training compounds. Active learning is a machine learning method which processes class label data in an iterative fashion. It has gained much attention in a broad range of application areas. In this paper, an active learning method for multiclass compound classification is proposed. This method selects informative training compounds so as to optimally support the learning progress. The combination with human feedback leads to a semiautomated interactive multiclass classification procedure. This method was investigated empirically on 15 compound classification tasks containing 86-2870 compounds in 3-38 classes. The empirical results show that active learning can solve these classification tasks using 10-80% of the data which would be necessary for standard learning techniques.
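
    A minimal sketch of the core active-learning loop follows, using uncertainty sampling with a logistic-regression learner on synthetic "compound" data; the paper's own selection strategy and models may differ, and everything below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Toy multiclass data: 300 compounds, 10 descriptors, 3 structural classes.
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)
X[np.arange(300), y] += 3.0            # make the classes separable

# Small seed set with three examples per class; the rest form the pool.
labeled = [int(i) for c in range(3) for i in np.flatnonzero(y == c)[:3]]
pool = [i for i in range(300) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                    # 20 active-learning queries
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Uncertainty sampling: query the compound whose top-class
    # probability is lowest, i.e. the most informative one.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)              # "human expert" supplies y[query]
    pool.remove(query)

acc = clf.score(X, y)
```

    The point of the loop is that after a few dozen queries the model classifies the whole set well, mirroring the paper's finding that 10-80% of the labels suffice.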

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The first part covers standards for gaseous fuels. The second part covers standards on coal and coke, including the classification of coals, determination of major elements in coal ash and trace elements in coal, metallurgical properties of coal and coke, methods of analysis of coal and coke, petrographic analysis of coal and coke, physical characteristics of coal, and quality assurance and sampling.

  10. Using Different Standardized Methods for Species Identification: A Case Study Using Beaks from Three Ommastrephid Species

    NASA Astrophysics Data System (ADS)

    Hu, Guanyu; Fang, Zhou; Liu, Bilin; Chen, Xinjun; Staples, Kevin; Chen, Yong

    2018-04-01

    The cephalopod beak is a vital hard structure with a stable configuration and has been widely used for the identification of cephalopod species. This study was conducted to determine the best standardization method for identifying different species by measuring 12 morphological variables of the beaks of Illex argentinus, Ommastrephes bartramii, and Dosidicus gigas collected by Chinese jigging vessels. To remove the effects of size, these morphometric variables were standardized using three methods. The average ratios of the upper beak morphological variables to upper crest length of O. bartramii and D. gigas were found to be greater than those of I. argentinus. However, for lower beaks, only the averages of LRL (lower rostrum length)/LCL (lower crest length), LRW (lower rostrum width)/LCL, and LLWL (lower lateral wall length)/LCL of O. bartramii and D. gigas were greater than those of I. argentinus. The ratios of beak morphological variables to crest length were all significantly different among the three species (P < 0.001). Among the three standardization methods, the correct classification rate of stepwise discriminant analysis (SDA) was highest using the ratios of beak morphological variables to crest length. Compared with hood length, the correct classification rate was slightly higher when beak variables were standardized by crest length using an allometric model. The correct classification rate of the lower beak was also found to be greater than that of the upper beak. This study indicates that the ratios of beak morphological variables to crest length can be used for interspecies and intraspecies identification. Meanwhile, the lower beak variables were found to be more effective than upper beak variables in classifying beaks found in the stomachs of predators.
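
    The ratio standardization and discriminant step can be sketched as follows. The measurements are synthetic, and ordinary linear discriminant analysis stands in for the paper's stepwise discriminant analysis (SDA), which performs variable selection in addition to classification.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)

# Toy beak data for 3 "species": 180 beaks, 4 raw measurements each,
# plus a crest length used as the size variable.
n = 180
species = np.repeat([0, 1, 2], n // 3)
crest = rng.uniform(5.0, 15.0, size=n)            # animal size varies
shape = np.array([[1.0, 0.6, 0.4, 0.9],
                  [1.2, 0.5, 0.5, 0.8],
                  [0.9, 0.7, 0.3, 1.1]])          # species shape ratios
measurements = (crest[:, None] * shape[species]
                + rng.normal(scale=0.05, size=(n, 4)))

# Standardize by crest length: ratios remove the size effect ...
ratios = measurements / crest[:, None]

# ... and discriminant analysis on the ratios separates the species.
lda = LinearDiscriminantAnalysis().fit(ratios, species)
acc = lda.score(ratios, species)
```

    Dividing by crest length leaves species-specific shape ratios, which is why this standardization yields the highest classification rates in the study.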

  11. [Relationship between crown form of upper central incisors and papilla filling in Chinese Han-nationality youth].

    PubMed

    Yang, X; Le, D; Zhang, Y L; Liang, L Z; Yang, G; Hu, W J

    2016-10-18

    To explore a crown form classification method for the upper central incisor, based on a standardized photography technique, that is more objective and scientific than the traditional classification method, and to analyze the relationship between the crown form of upper central incisors and papilla filling in periodontally healthy Chinese Han-nationality youth. In the study, 180 periodontally healthy Chinese youth (75 males and 105 females) aged 20-30 (24.3±4.5) years were included. With the standardized upper central incisor photography technique, pictures of 360 upper central incisors were obtained. Each tooth was independently classified as triangular, ovoid, or square by 13 experienced specialists in prosthodontics, and the final classification was decided by majority vote to ensure objectivity. The standardized digital photos were also used to evaluate gingival papilla filling, recorded as present or absent by naked-eye observation, and the papilla filling rates of the different crown forms were compared. Statistical analyses were performed with SPSS 19.0. The proportions of triangular, ovoid, and square forms of the upper central incisor in Chinese Han-nationality youth were 31.4% (113/360), 37.2% (134/360), and 31.4% (113/360), respectively, with no statistical difference between males and females. The average κ value between pairs of evaluators was 0.381, rising to 0.563 when each evaluator was compared with the final classification. After excluding 24 upper central incisors without interproximal contact, the papilla filling rates of the triangular, ovoid, and square crowns were 56.4% (62/110), 69.6% (87/125), and 76.2% (77/101), respectively, with the square form showing the highest rate (P=0.007). The proportions of clinical crown forms of the upper central incisor in Chinese Han-nationality youth are thus obtained.
Compared with the triangular form, the square form favors a gingival papilla that fills the interproximal embrasure space. The consistency of the present classification method for the upper central incisor is unsatisfactory, indicating that a more scientific and objective classification method remains to be developed.
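    The inter-evaluator agreement figures above (κ = 0.381 between evaluator pairs, 0.563 against the consensus) are Cohen's κ values, i.e. chance-corrected agreement. A minimal sketch of the computation, on invented toy labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' label lists."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats)     # chance agreement
    return (po - pe) / (1 - pe)

# toy crown-form ratings from two hypothetical evaluators
a = ["tri", "ovoid", "square", "tri", "ovoid", "square", "tri", "ovoid"]
b = ["tri", "ovoid", "square", "ovoid", "ovoid", "tri", "tri", "square"]
print(round(cohens_kappa(a, b), 3))  # → 0.429
```

With these toy ratings the observed agreement is 5/8; κ reaches 1 only when two raters agree everywhere.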

  12. Mutual information criterion for feature selection with application to classification of breast microcalcifications

    NASA Astrophysics Data System (ADS)

    Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit

    2016-03-01

    Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection, based on a mutual information (MI) criterion, for automatic classification of microcalcifications. We explored MI-based feature selection for various texture features. The proposed method was evaluated on the standardized Digital Database for Screening Mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using MI-based feature selection to obtain the features most relevant to the task, and thus to provide improved performance compared to using all features.
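    The abstract does not specify the MI estimator used; as a sketch of the general idea, the MI between each (discretized) texture feature and the class label can be estimated from counts, and features ranked by it. Feature names and values below are invented for illustration:

```python
from collections import Counter
from math import log2

def mutual_information(feature, labels):
    """MI (in bits) between a discretized feature column and class labels."""
    n = len(labels)
    pf, pl = Counter(feature), Counter(labels)
    pj = Counter(zip(feature, labels))
    return sum((c / n) * log2((c / n) / ((pf[f] / n) * (pl[l] / n)))
               for (f, l), c in pj.items())

def select_top_k(features, labels, k):
    """Rank named feature columns by MI with the class; keep the top k."""
    scores = {name: mutual_information(col, labels)
              for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

labels = ["benign", "malig", "benign", "malig", "benign", "malig"]
features = {
    "contrast": [0, 1, 0, 1, 0, 1],   # perfectly informative toy feature
    "noise":    [1, 1, 0, 0, 1, 0],   # weakly informative toy feature
}
print(select_top_k(features, labels, 1))  # → ['contrast']
```

A practical pipeline would discretize continuous texture features first and then feed the selected subset to the classifier.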

  13. Methodological Issues in Predicting Pediatric Epilepsy Surgery Candidates Through Natural Language Processing and Machine Learning

    PubMed Central

    Cohen, Kevin Bretonnel; Glass, Benjamin; Greiner, Hansel M.; Holland-Bouley, Katherine; Standridge, Shannon; Arya, Ravindra; Faist, Robert; Morita, Diego; Mangano, Francesco; Connolly, Brian; Glauser, Tracy; Pestian, John

    2016-01-01

    Objective: We describe the development and evaluation of a system that uses machine learning and natural language processing techniques to identify potential candidates for surgical intervention for drug-resistant pediatric epilepsy. The data consist of free-text clinical notes extracted from the electronic health record (EHR). Both known clinical outcomes from the EHR and manual chart annotations provide gold standards for the patient's status. The following hypotheses are then tested: 1) machine learning methods can identify epilepsy surgery candidates as well as physicians do, and 2) machine learning methods can identify candidates earlier than physicians do. These hypotheses are tested by systematically evaluating the effects of the data source, amount of training data, class balance, classification algorithm, and feature set on classifier performance. The results support both hypotheses, with F-measures ranging from 0.71 to 0.82. The feature set, classification algorithm, amount of training data, class balance, and gold standard all significantly affected classification performance. Classification performance was also better than the highest agreement between two annotators, even at one year before documented surgery referral. The results demonstrate that such machine learning methods can contribute to predicting pediatric epilepsy surgery candidates and to reducing the lag time to surgery referral. PMID:27257386

  14. Optimizing taxonomic classification of marker-gene amplicon sequences with QIIME 2's q2-feature-classifier plugin.

    PubMed

    Bokulich, Nicholas A; Kaehler, Benjamin D; Rideout, Jai Ram; Dillon, Matthew; Bolyen, Evan; Knight, Rob; Huttley, Gavin A; Gregory Caporaso, J

    2018-05-17

    Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier ( https://github.com/qiime2/q2-feature-classifier ), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of the other commonly used methods for classification of marker-gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit ( https://github.com/caporaso-lab/tax-credit-data ). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
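    A toy illustration of the k-mer naive Bayes idea (this is not the q2-feature-classifier implementation, which wraps scikit-learn; the sequences and taxon names below are invented):

```python
from collections import Counter, defaultdict
from math import log

def kmers(seq, k=4):
    """All overlapping substrings of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

class NaiveBayesTaxonomy:
    """Toy multinomial naive Bayes over k-mer counts."""
    def fit(self, seqs, taxa):
        self.counts = defaultdict(Counter)
        for s, t in zip(seqs, taxa):
            self.counts[t].update(kmers(s))
        self.vocab = {km for c in self.counts.values() for km in c}
        return self

    def classify(self, seq):
        best, best_lp = None, float("-inf")
        for taxon, c in self.counts.items():
            total = sum(c.values())
            # log-likelihood with add-one (Laplace) smoothing
            lp = sum(log((c[km] + 1) / (total + len(self.vocab)))
                     for km in kmers(seq))
            if lp > best_lp:
                best, best_lp = taxon, lp
        return best

clf = NaiveBayesTaxonomy().fit(
    ["ACGTACGTAA", "ACGTACGTTT", "GGCCGGCCAA", "GGCCGGCCTT"],
    ["Bacillus", "Bacillus", "Escherichia", "Escherichia"])
print(clf.classify("ACGTACGT"))  # → Bacillus
```

A real classifier trains on full reference databases (e.g. Greengenes for 16S, UNITE for ITS) with larger k and calibrated confidence thresholds, which is where the parameter tuning discussed in the abstract matters.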

  15. An ordinal classification approach for CTG categorization.

    PubMed

    Georgoulas, George; Karvelis, Petros; Gavrilis, Dimitris; Stylios, Chrysostomos D; Nikolakopoulos, George

    2017-07-01

    Evaluation of the cardiotocogram (CTG) is a standard approach employed during pregnancy and delivery. However, its interpretation requires high-level expertise to decide whether a recording is Normal, Suspicious, or Pathological, and a number of attempts have therefore been made over the past three decades to develop automated, sophisticated systems. These systems are usually (multiclass) classification systems that assign a category to the respective CTG, but most of them do not take into consideration the natural ordering of the categories associated with CTG recordings. In this work, an algorithm that explicitly takes into consideration the ordering of CTG categories, based on a binary decomposition method, is investigated. Results obtained using the C4.5 decision tree as the base classifier show that the ordinal classification approach is marginally better than the traditional multiclass classification approach, which utilizes the standard C4.5 algorithm, on several performance criteria.
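    The abstract names a binary decomposition method without detailing it; a common choice for ordered classes is the Frank-and-Hall-style decomposition into "is the class greater than category i?" subproblems. A sketch with a trivial stub model standing in for the C4.5 base classifier (data invented):

```python
ORDER = ["Normal", "Suspicious", "Pathological"]

class ThresholdModel:
    """Stub binary model on a 1-D feature: cut halfway between class means."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y] or [0.0]
        neg = [x for x, y in zip(xs, ys) if not y] or [0.0]
        self.cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self
    def prob(self, x):
        return 1.0 if x > self.cut else 0.0

def fit_ordinal(xs, labels, order=ORDER):
    """One binary model per threshold: 'is the true class above order[i]?'"""
    ranks = [order.index(l) for l in labels]
    return [ThresholdModel().fit(xs, [r > i for r in ranks])
            for i in range(len(order) - 1)]

def predict_ordinal(models, x, order=ORDER):
    # convert the chain of P(class > i) estimates into per-class scores
    gt = [m.prob(x) for m in models]
    probs = ([1 - gt[0]]
             + [gt[i - 1] - gt[i] for i in range(1, len(gt))]
             + [gt[-1]])
    return order[max(range(len(probs)), key=probs.__getitem__)]

xs = [0.1, 0.2, 0.3, 0.5, 0.6, 0.9, 1.0]
ys = ["Normal", "Normal", "Normal", "Suspicious", "Suspicious",
      "Pathological", "Pathological"]
models = fit_ordinal(xs, ys)
print([predict_ordinal(models, x) for x in (0.15, 0.55, 0.95)])
```

Swapping the stub for a real decision-tree learner yields the kind of ordinal pipeline the abstract evaluates.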

  16. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance imaging, ultrasound, and clinical evaluation. Maximum accuracy (AUC=0.88) was reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
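    The evaluation metrics mentioned combine as J = sensitivity + specificity - 1 (the Youden index). A minimal sketch on invented predictions and ground truth:

```python
def confusion(preds, truth, positive="RA"):
    """Counts of true/false positives/negatives for one positive class."""
    tp = sum(p == positive and t == positive for p, t in zip(preds, truth))
    tn = sum(p != positive and t != positive for p, t in zip(preds, truth))
    fp = sum(p == positive and t != positive for p, t in zip(preds, truth))
    fn = sum(p != positive and t == positive for p, t in zip(preds, truth))
    return tp, tn, fp, fn

def youden(preds, truth, positive="RA"):
    tp, tn, fp, fn = confusion(preds, truth, positive)
    sens = tp / (tp + fn)   # sensitivity (true-positive rate)
    spec = tn / (tn + fp)   # specificity (true-negative rate)
    return sens + spec - 1  # Youden index J, in [-1, 1]

truth = ["RA", "RA", "RA", "RA", "ok", "ok", "ok", "ok"]
preds = ["RA", "RA", "RA", "ok", "ok", "ok", "ok", "RA"]
print(round(youden(preds, truth), 2))  # → 0.5
```

Sweeping a decision threshold and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is the AUC the abstract reports.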

  17. Summary of tracking and identification methods

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Yang, Chun; Kadar, Ivan

    2014-06-01

    Over the last two decades, many solutions have arisen to combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches, which this paper seeks to organize. We discuss the terminology so as to have a common framework for various standards such as the NATO STANAG 4162 - Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information, combined with additional mission objectives from geographical, human, social, cultural, and behavioral modeling, is needed to determine identification, since classification alone does not establish identification or intent.

  18. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated, non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  19. A Biochar Classification System and Associated Test Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps-Arbestain, Marta; Amonette, James E.; Singh, Balwant

    2015-02-18

    In this chapter, a biochar classification system related to its use as a soil amendment is proposed. This document builds upon previous work and constrains its scope to materials with properties that satisfy the criteria for biochar as defined by either the International Biochar Initiative (IBI) Biochar Standards or the European Biochar Community (EBC) Standards, and it is intended to minimise the need for testing beyond that required by the above-mentioned standards. The classification system is intended to enable stakeholders and commercial entities to (i) identify the most suitable biochar to fulfil the requirements for a particular soil and/or land use, and (ii) distinguish the application of biochar for specific niches (e.g., soil-less agriculture). It is based on the best current knowledge, and the intention is to periodically review and update the document as new data and knowledge become available in the scientific literature. The main thrust of this classification system is the direct or indirect beneficial effects that biochar provides when applied to soil. We have classified the potential beneficial effects of biochar application to soils into five categories, with their corresponding classes where applicable: (i) carbon (C) storage value, (ii) fertiliser value, (iii) liming value, (iv) particle size, and (v) use in soil-less agriculture. A summary of recommended test methods is provided at the end of the chapter.

  20. Applications of remote sensing, volume 3

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Of the four change detection techniques (post-classification comparison, delta data, spectral/temporal, and layered spectral/temporal), post-classification comparison was selected for further development. This choice was based upon the test performances of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques, allowing meaningful comparisons and evaluations of the classifications.
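    Post-classification comparison, the technique selected above, classifies each date independently and then cross-tabulates per-pixel class pairs into a from-to change matrix. A toy sketch (class maps invented):

```python
from collections import Counter

def change_matrix(map_t1, map_t2):
    """Cross-tabulate per-pixel classes from two independently
    classified dates into a from-to change matrix."""
    return Counter(zip(map_t1, map_t2))

# toy flattened class maps for the same pixels at two dates
t1 = ["water", "marsh", "marsh", "urban", "water", "marsh"]
t2 = ["water", "urban", "marsh", "urban", "marsh", "urban"]
for (a, b), n in sorted(change_matrix(t1, t2).items()):
    if a != b:                       # report only changed pixels
        print(f"{a} -> {b}: {n}")
```

The diagonal of the matrix holds unchanged pixels; the off-diagonal cells summarize where and how the land cover changed between dates.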

  1. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  2. Three methods for integration of environmental risk into the benefit-risk assessment of veterinary medicinal products.

    PubMed

    Chapman, Jennifer L; Porsch, Lucas; Vidaurre, Rodrigo; Backhaus, Thomas; Sinclair, Chris; Jones, Glyn; Boxall, Alistair B A

    2017-12-15

    Veterinary medicinal products (VMPs) require, as part of the European Union (EU) authorization process, consideration of both risks and benefits. Uses of VMPs have multiple risks (e.g., risks to the animal being treated, to the person administering the VMP) including risks to the environment. Environmental risks are not directly comparable to therapeutic benefits; there is no standardized approach to compare both environmental risks and therapeutic benefits. We have developed three methods for communicating and comparing therapeutic benefits and environmental risks for the benefit-risk assessment that supports the EU authorization process. Two of these methods support independent product evaluation (i.e., a summative classification and a visual scoring matrix classification); the other supports a comparative evaluation between alternative products (i.e., a comparative classification). The methods and the challenges to implementing a benefit-risk assessment including environmental risk are presented herein; how these concepts would work in current policy is discussed. Adaptability to scientific and policy development is considered. This work is an initial step in the development of a standardized methodology for integrated decision-making for VMPs. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Interoperability of Medication Classification Systems: Lessons Learned Mapping Established Pharmacologic Classes (EPCs) to SNOMED CT

    PubMed Central

    Nelson, Scott D; Parker, Jaqui; Lario, Robert; Winnenburg, Rainer; Erlbaum, Mark S.; Lincoln, Michael J.; Bodenreider, Olivier

    2018-01-01

    Interoperability among medication classification systems is known to be limited. We investigated the mapping of the Established Pharmacologic Classes (EPCs) to SNOMED CT. We compared lexical and instance-based methods to an expert-reviewed reference standard to evaluate contributions of these methods. Of the 543 EPCs, 284 had an equivalent SNOMED CT class, 205 were more specific, and 54 could not be mapped. Precision, recall, and F1 score were 0.416, 0.620, and 0.498 for lexical mapping and 0.616, 0.504, and 0.554 for instance-based mapping. Each automatic method has strengths, weaknesses, and unique contributions in mapping between medication classification systems. In our experience, it was beneficial to consider the mapping provided by both automated methods for identifying potential matches, gaps, inconsistencies, and opportunities for quality improvement between classifications. However, manual review by subject matter experts is still needed to select the most relevant mappings. PMID:29295234
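    Once mappings are represented as sets of (source, target) pairs, precision, recall, and F1 against an expert reference standard follow mechanically. A sketch on invented EPC/SNOMED CT identifiers (the codes below are illustrative, not real mappings):

```python
def prf1(predicted, reference):
    """Precision, recall, and F1 of a predicted mapping (a set of
    (source, target) pairs) against a reference standard."""
    tp = len(predicted & reference)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(reference) if reference else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# hypothetical class-to-concept pairs, for illustration only
reference = {("EPC:anticoagulant", "SCT:0001"),
             ("EPC:beta-blocker", "SCT:0002"),
             ("EPC:statin", "SCT:0003")}
predicted = {("EPC:anticoagulant", "SCT:0001"),
             ("EPC:beta-blocker", "SCT:9999")}
p, r, f1 = prf1(predicted, reference)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.5 0.33 0.4
```

Comparing the lexical and instance-based predictions through the same function is what yields the two score triples quoted in the abstract.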

  4. Nonlinear features for classification and pose estimation of machined parts from single views

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-10-01

    A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.

  5. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
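    A stripped-down sketch of the KNN appearance-model step (without the automatic feature selection), on invented two-feature voxels; a real implementation would normalize feature scales before computing distances:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Plain k-nearest-neighbour vote over labeled feature vectors."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda fx: dist(fx[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# toy voxels: (CT intensity in HU, tube-likeness score), label
train = [((-950, 0.9), "airway"), ((-930, 0.8), "airway"),
         ((-910, 0.7), "airway"), ((-940, 0.1), "emphysema"),
         ((-920, 0.2), "emphysema"), ((-900, 0.1), "lung")]
print(knn_classify(train, (-925, 0.75)))  # → airway
```

The point of the multi-feature appearance model is visible even in this toy: intensity alone cannot separate the airway voxel from the emphysema voxels nearby, but the second feature can.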

  6. General methodology for simultaneous representation and discrimination of multiple object classes

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.

  7. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting of the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data-entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches are explored: the relative value unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach to costing nursing services.
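    A sketch of the simple cost-to-time idea: multiply the time attributed to each documented intervention by the nursing hourly rate. The CCC-style codes, per-intervention minutes, and rate below are all invented for illustration:

```python
# hypothetical intervention codes with average minutes per performance
MINUTES = {"A01.1 assessment": 15,
           "G52.2 med administration": 8,
           "O40.0 wound care": 25}

def cost_to_time(interventions, hourly_rate):
    """Cost-to-time method: nursing cost = documented intervention
    time (in hours) x the nursing hourly rate."""
    total_min = sum(MINUTES[i] for i in interventions)
    return total_min / 60 * hourly_rate

shift = ["A01.1 assessment", "G52.2 med administration",
         "G52.2 med administration", "O40.0 wound care"]
print(round(cost_to_time(shift, hourly_rate=48.0), 2))  # → 44.8
```

An RVU approach would instead weight each intervention by a relative value and convert through a monetary factor; the abstract's finding is that the direct time-based calculation above was the more transparent of the two.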

  8. Standardization of uveitis nomenclature for reporting clinical data. Results of the First International Workshop.

    PubMed

    Jabs, Douglas A; Nussenblatt, Robert B; Rosenbaum, James T

    2005-09-01

    To begin a process of standardizing the methods for reporting clinical data in the field of uveitis. Consensus workshop. Members of an international working group were surveyed about diagnostic terminology, inflammation grading schema, and outcome measures, and the results were used to develop a series of proposals to better standardize the use of these entities. Small groups employed nominal group techniques to achieve consensus on several of these issues. The group affirmed that an anatomic classification of uveitis should be used as a framework for subsequent work on diagnostic criteria for specific uveitic syndromes, and that the classification of uveitis entities should be on the basis of the location of the inflammation and not on the presence of structural complications. Issues regarding the use of the terms "intermediate uveitis," "pars planitis," "panuveitis," and descriptors of the onset and course of the uveitis were addressed. The following were adopted: standardized grading schema for anterior chamber cells, anterior chamber flare, and vitreous haze; standardized methods of recording structural complications of uveitis; standardized definitions of outcomes, including "inactive" inflammation, "improvement" and "worsening" of the inflammation, and "corticosteroid sparing"; and standardized guidelines for reporting visual acuity outcomes. A process of standardizing the approach to reporting clinical data in uveitis research has begun, and several terms have been standardized.

  9. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  10. Validation of the international labour office digitized standard images for recognition and classification of radiographs of pneumoconiosis.

    PubMed

    Halldin, Cara N; Petsonk, Edward L; Laney, A Scott

    2014-03-01

    Chest radiographs are recommended for prevention and detection of pneumoconiosis. In 2011, the International Labour Office (ILO) released a revision of the International Classification of Radiographs of Pneumoconioses that included a digitized standard images set. The present study compared results of classifications of digital chest images performed using the new ILO 2011 digitized standard images to classification approaches used in the past. Underground coal miners (N = 172) were examined using both digital and film-screen radiography (FSR) on the same day. Seven National Institute for Occupational Safety and Health-certified B Readers independently classified all 172 digital radiographs, once using the ILO 2011 digitized standard images (DRILO2011-D) and once using digitized standard images used in the previous research (DRRES). The same seven B Readers classified all the miners' chest films using the ILO film-based standards. Agreement between classifications of FSR and digital radiography was identical, using a standard image set (either DRILO2011-D or DRRES). The overall weighted κ value was 0.58. Some specific differences in the results were seen and noted. However, intrareader variability in this study was similar to the published values and did not appear to be affected by the use of the new ILO 2011 digitized standard images. These findings validate the use of the ILO digitized standard images for classification of small pneumoconiotic opacities. When digital chest radiographs are obtained and displayed appropriately, results of pneumoconiosis classifications using the 2011 ILO digitized standards are comparable to film-based ILO classifications and to classifications using earlier research standards. Published by Elsevier Inc.

  11. Analysis of steranes and triterpanes in geolipid extracts by automatic classification of mass spectra

    NASA Technical Reports Server (NTRS)

    Wardroper, A. M. K.; Brooks, P. W.; Humberston, M. J.; Maxwell, J. R.

    1977-01-01

    A computer method is described for the automatic classification of triterpanes and steranes into gross structural type from their mass spectral characteristics. The method has been applied to the spectra obtained by gas-chromatographic/mass-spectroscopic analysis of two mixtures of standards and of hydrocarbon fractions isolated from Green River and Messel oil shales. Almost all of the steranes and triterpanes identified previously in both shales were classified, in addition to a number of new components. The results indicate that classification of such alkanes is possible with a laboratory computer system. The method has application to diagenesis and maturation studies as well as to oil/oil and oil/source rock correlations in which rapid screening of large numbers of samples is required.

  12. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  13. Photometric brown-dwarf classification. I. A method to identify and accurately classify large samples of brown dwarfs without spectroscopy

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.

    2015-02-01

    Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg2, by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0 0.8, were then classified by comparison against template colours of quasars, stars, and brown dwarfs. The L and T templates, spectral types L0 to T8, were created by identifying previously known sources with spectroscopic classifications, and fitting polynomial relations between colour and spectral type. Results: Of the 192 known L and T dwarfs with reliable photometry in the surveyed area and magnitude range, 189 are recovered by our selection and classification method. We have quantified the accuracy of the classification method both externally, with spectroscopy, and internally, by creating synthetic catalogues and accounting for the uncertainties. We find that, brighter than J = 17.5, photo-type classifications are accurate to one spectral sub-type, and are therefore competitive with spectroscopic classifications. The resultant catalogue of 1157 L and T dwarfs will be presented in a companion paper.
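    The photo-type scheme classifies by comparing observed colours against per-spectral-type template colours, e.g. by minimum chi-square given photometric errors. A toy sketch (the template colours and uncertainty below are invented, not the paper's fitted polynomial relations):

```python
# hypothetical template colours (two colour indices) per spectral type
TEMPLATES = {"L0": (0.8, 0.5), "L5": (1.0, 0.7),
             "T0": (0.9, 0.3), "T5": (0.4, 0.0)}

def classify_photo_type(colours, sigma=0.05):
    """Pick the template whose colours minimize chi-square,
    assuming a common photometric error sigma on each colour."""
    def chi2(tmpl):
        return sum(((c - t) / sigma) ** 2 for c, t in zip(colours, tmpl))
    return min(TEMPLATES, key=lambda sp: chi2(TEMPLATES[sp]))

print(classify_photo_type((0.95, 0.65)))  # → L5
```

The real method fits templates for quasars and stars as well as L0-T8 dwarfs across 8 bands, which is how contaminants are rejected in the same step that assigns the spectral type.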

  14. Brain-computer interfacing under distraction: an evaluation study

    NASA Astrophysics Data System (ADS)

    Brandl, Stephanie; Frølich, Laura; Höhne, Johannes; Müller, Klaus-Robert; Samek, Wojciech

    2016-10-01

    Objective. While motor-imagery based brain-computer interfaces (BCIs) have been studied over many years by now, most of these studies have taken place in controlled lab settings. Bringing BCI technology into everyday life is still one of the main challenges in this field of research. Approach. This paper systematically investigates BCI performance under 6 types of distractions that mimic out-of-lab environments. Main results. We report results of 16 participants and show that the performance of the standard common spatial patterns (CSP) + regularized linear discriminant analysis classification pipeline drops significantly in this ‘simulated’ out-of-lab setting. We then investigate three methods for improving the performance: (1) artifact removal, (2) ensemble classification, and (3) a 2-step classification approach. While artifact removal does not enhance the BCI performance significantly, both ensemble classification and the 2-step classification combined with CSP significantly improve the performance compared to the standard procedure. Significance. Systematically analyzing out-of-lab scenarios is crucial when bringing BCI into everyday life. Algorithms must be adapted to overcome nonstationary environments in order to tackle real-world challenges.

  15. Inter-Labeler and Intra-Labeler Variability of Condition Severity Classification Models Using Active and Passive Learning Methods

    PubMed Central

    Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2018-01-01

    Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center.
We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p = 0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The inter-labeler AUC standard deviation using the passive learning method (0.039) was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group’s individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
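
    The SVM-Margin idea this record compares against, querying the unlabelled instance closest to the SVM hyperplane, can be sketched on toy data; the blobs and query budget are invented, and this is the generic uncertainty-sampling strategy, not the CAESAR-ALE framework itself.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Two Gaussian blobs standing in for "severe" / "non-severe" conditions.
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(2.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Seed with five labelled examples per class; the rest form the unlabelled pool.
labelled = [0, 1, 2, 3, 4, 100, 101, 102, 103, 104]
pool = [i for i in range(200) if i not in labelled]

clf = SVC(kernel="linear")
for _ in range(20):                                    # 20 SVM-Margin queries
    clf.fit(X[labelled], y[labelled])
    margins = np.abs(clf.decision_function(X[pool]))   # distance to hyperplane
    labelled.append(pool.pop(int(np.argmin(margins)))) # query most uncertain

acc = clf.score(X, y)
```

Passive learning would replace the `argmin(margins)` query with a random draw from the pool.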

  16. Peculiarities of use of ECOC and AdaBoost based classifiers for thematic processing of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.

    2017-10-01

    Hyperspectral imaging is a promising, up-to-date technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised base classification algorithms of various complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of the error correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm. However, the necessity of boosting ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with an accuracy high enough to be compared with ground-based forest inventory data.
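
    The two ensemble strategies compared in the record can be sketched with scikit-learn stand-ins on synthetic multi-class data (not hyperspectral imagery): boosting over weak shallow trees versus ECOC with an RBF-kernel SVM base classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic 4-class problem standing in for forest-species classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Boosting with a weak base learner (shallow trees).
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                         n_estimators=100, random_state=0).fit(Xtr, ytr)

# Error-correcting output codes with a Gaussian (RBF) kernel SVM base classifier.
ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                            code_size=2, random_state=0).fit(Xtr, ytr)

ada_acc, ecoc_acc = ada.score(Xte, yte), ecoc.score(Xte, yte)
```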

  17. Inter-labeler and intra-labeler variability of condition severity classification models using active and passive learning methods.

    PubMed

    Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2017-09-01

    Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center.
We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p=0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275-0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation using the passive learning method (0.039) was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p=0.29), as was the difference between the Combination_XA and Exploitation methods (p=0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. 7 CFR 27.14 - Filing of classification and Micronaire determination requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Filing of classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.14 Filing of classification and Micronaire determination requests...

  19. 7 CFR 27.14 - Filing of classification and Micronaire determination requests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Filing of classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.14 Filing of classification and Micronaire determination requests...

  20. 7 CFR 27.87 - Fees; classification and Micronaire determination information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Fees; classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and Micronaire § 27.87 Fees; classification and Micronaire determination...

  1. 7 CFR 27.87 - Fees; classification and Micronaire determination information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Fees; classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and Micronaire § 27.87 Fees; classification and Micronaire determination...

  2. Accounting for both local aquatic community composition and bioavailability in setting site-specific quality standards for zinc.

    PubMed

    Peters, Adam; Simpson, Peter; Moccia, Alessandra

    2014-01-01

    Recent years have seen considerable improvement in water quality standards (QS) for metals by taking account of the effect of local water chemistry conditions on their bioavailability. We describe preliminary efforts to further refine water quality standards, by taking account of the composition of the local ecological community (the ultimate protection objective) in addition to bioavailability. Relevance of QS to the local ecological community is critical as it is important to minimise instances where quality classification using QS does not reconcile with a quality classification based on an assessment of the composition of the local ecology (e.g. using benthic macroinvertebrate quality assessment metrics such as River InVertebrate Prediction and Classification System (RIVPACS)), particularly where ecology is assessed to be at good or better status, whilst chemical quality is determined to be failing relevant standards. The alternative approach outlined here describes a method to derive a site-specific species sensitivity distribution (SSD) based on the ecological community which is expected to be present at the site in the absence of anthropogenic pressures (reference conditions). The method combines a conventional laboratory ecotoxicity dataset normalised for bioavailability with field measurements of the response of benthic macroinvertebrate abundance to chemical exposure. Site-specific QSref are then derived from the 5%ile of this SSD. Using this method, site QSref have been derived for zinc in an area impacted by historic mining activities. Application of QSref can result in greater agreement between chemical and ecological metrics of environmental quality compared with the use of either conventional (QScon) or bioavailability-based QS (QSbio). In addition to zinc, the approach is likely to be applicable to other metals and possibly other types of chemical stressors (e.g. pesticides). 
However, the methodology for deriving site-specific targets requires additional development and validation before it can be robustly applied during surface water classification.
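
    The 5th-percentile SSD step mentioned above can be illustrated as follows; the toxicity endpoints are invented placeholders, not the study's zinc data, and a simple log-normal fit stands in for the full site-specific SSD construction.

```python
import numpy as np
from scipy import stats

# Hypothetical bioavailability-normalised toxicity endpoints (ug/L) for taxa
# expected at the site under reference conditions; values are illustrative only.
ec50 = np.array([32., 45., 60., 88., 120., 150., 210., 300., 410., 560.])

# Fit a log-normal species sensitivity distribution and take its 5th
# percentile (the HC5) as the basis of the site-specific QSref.
mu = np.mean(np.log10(ec50))
sigma = np.std(np.log10(ec50), ddof=1)
hc5 = 10 ** (mu + sigma * stats.norm.ppf(0.05))
```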

  3. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    NASA Astrophysics Data System (ADS)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and even remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigations on some aspects, such as dimension reduction, data mining, and the rational use of spatial information, should be developed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance in spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement classification instead of support vector machines (SVM), for simplicity, generalization and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, taking into account the spatial information of the hyperspectral image, we employed a spatial vector formed by the ratios of different classes around each pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images, compared with some other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
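
    A sketch of the ISOMAP-with-spectral-angle idea on synthetic "spectra": the spectral angle between pixel vectors is precomputed as the distance matrix for the neighbourhood graph. scikit-learn has no RVM implementation, so logistic regression stands in as the probabilistic classifier on the embedded features; the data and all parameters are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)

# Synthetic spectra: a one-parameter family blending an upward-sloping and a
# downward-sloping shape; the class is whichever shape dominates.
t = rng.uniform(0.0, 1.0, 120)
up, down = np.linspace(0.2, 1.0, 30), np.linspace(1.0, 0.2, 30)
X = np.outer(t, up) + np.outer(1 - t, down) + rng.normal(0, 0.02, (120, 30))
y = (t > 0.5).astype(int)

# Spectral angle distance matrix used in place of Euclidean distance.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
D = np.arccos(np.clip(Xn @ Xn.T, -1.0, 1.0))

emb = Isomap(n_neighbors=8, n_components=2, metric="precomputed").fit_transform(D)

# Probabilistic classifier on the low-dimensional embedding.
clf = LogisticRegression().fit(emb, y)
acc = clf.score(emb, y)
```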

  4. A new blood vessel extraction technique using edge enhancement and object classification.

    PubMed

    Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin

    2013-12-01

    Diabetic retinopathy (DR) is increasing progressively, pushing up the demand for automatic extraction and classification of the severity of diseases. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operation, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and we have employed robust performance analysis to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97 %, sensitivity of 99 %, specificity of 86 %, and predictive value of 98 %, which is superior to various well-known techniques.
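
    The pipeline stages named in the abstract (noise removal, thresholding, morphology, object classification) can be sketched on a toy image; the thresholds, structuring element, and size criterion are arbitrary illustrations, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Toy "fundus" image: a bright horizontal vessel on a noisy background.
img = rng.normal(0.2, 0.05, size=(64, 64))
img[30:33, 5:60] += 0.6                                   # synthetic vessel

smoothed = ndimage.gaussian_filter(img, 1.0)              # noise removal
binary = smoothed > 0.5                                   # global threshold
opened = ndimage.binary_opening(binary, np.ones((2, 2)))  # morphology
labels, n = ndimage.label(opened)

# "Object classification": keep only large connected components as vessels,
# discarding small noise blobs.
sizes = ndimage.sum(opened, labels, range(1, n + 1))
vessel = np.isin(labels, 1 + np.flatnonzero(sizes > 50))
```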

  5. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    PubMed

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic algorithm for classifying the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria proposed by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved accuracy over 90% in prediction for two different standardized versions of the severity criteria, proposed in 2007 and 2011 respectively. Moreover, we also obtained the contribution ranking of the input features by analyzing the model coefficient matrix, and confirmed a certain degree of agreement between the most contributive input features and clinical diagnostic knowledge. This result demonstrates the validity of the deep belief network model. This study provides an effective solution for applying deep learning methods to automatic diagnostic decision making.

  6. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    PubMed

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for differential classification of tissue thin sections, identifying tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC-Stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity of this label-free method as compared to histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high-quality diagnostic images, which is in the same time range as frozen sections by pathologists.

  7. Evaluating Chemical Persistence in a Multimedia Environment: A CART Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    1999-02-01

    For the thousands of chemicals continuously released into the environment, it is desirable to make prospective assessments of those likely to be persistent. Persistent chemicals are difficult to remove if adverse health or ecological effects are later discovered. A tiered approach using a classification scheme and a multimedia model for determining persistence is presented. Using specific criteria for persistence, a classification tree is developed to classify a chemical as "persistent" or "non-persistent" based on the chemical properties. In this approach, the classification is derived from the results of a standardized unit world multimedia model. Thus, the classifications are more robust for multimedia pollutants than classifications using a single medium half-life. The method can be readily implemented and provides insight without requiring extensive and often unavailable data. This method can be used to classify chemicals when only a few properties are known and be used to direct further data collection. Case studies are presented to demonstrate the advantages of the approach.
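
    A classification tree of the kind described can be sketched with scikit-learn; the chemical properties and the persistence criterion below are synthetic surrogates for the unit-world model output, purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

# Hypothetical training set: chemical properties with a "persistent" label
# derived from a unit-world-style criterion; all values are synthetic.
n = 400
props = np.column_stack([
    rng.uniform(0, 8, n),     # log Kow
    rng.uniform(0, 4, n),     # log10 half-life in air (h)
    rng.uniform(0, 4, n),     # log10 half-life in water (h)
    rng.uniform(0, 5, n),     # log10 half-life in soil (h)
])
# Surrogate criterion: persistent if any medium half-life is long.
persistent = (props[:, 1:].max(axis=1) > 3.0).astype(int)

# A shallow CART tree recovers the rule from the properties alone.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(props, persistent)
acc = tree.score(props, persistent)
```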

  8. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment

    PubMed Central

    Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan

    2015-01-01

    Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection learning from the multiple modalities is treated as a set of different learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
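
    The group-sparsity ingredient of the multi-task feature selection step can be illustrated by the proximal operator of the L2,1 norm, which shrinks whole feature rows (a feature across all modality tasks) toward zero, so a feature is kept or discarded jointly for all modalities. This is a generic sketch of that one ingredient, not the paper's full label-aligned objective.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}.

    W: (features, tasks) weight matrix; each row's L2 norm is shrunk by lam,
    and rows with norm below lam are zeroed out entirely (group selection).
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[3.0, 4.0],    # strong feature: row norm 5 -> kept, shrunk
              [0.3, 0.4]])   # weak feature: row norm 0.5 -> zeroed
W_new = prox_l21(W, lam=1.0)
```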

  9. Vehicle Classification Using the Discrete Fourier Transform with Traffic Inductive Sensors.

    PubMed

    Lamas-Seco, José J; Castro, Paula M; Dapena, Adriana; Vazquez-Araujo, Francisco J

    2015-10-26

    Inductive Loop Detectors (ILDs) are the most commonly used sensors in traffic management systems. This paper shows that some spectral features extracted from the Fourier Transform (FT) of inductive signatures do not depend on the vehicle speed. Such a property is used to propose a novel method for vehicle classification based on only one signature acquired from a single-loop sensor, in contrast to standard methods using two sensor loops. Our proposal is evaluated by means of real inductive signatures captured with our hardware prototype.
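
    The speed-invariance claim can be illustrated numerically: resampling the same signature shape over different durations (i.e. different vehicle speeds over the loop) leaves the normalised DFT magnitude ratios nearly unchanged. The two-lobe signature shape is invented for illustration.

```python
import numpy as np

def signature(n):
    """The same continuous two-lobe signature shape, sampled at n points."""
    t = np.linspace(0.0, 1.0, n)
    return np.exp(-((t - 0.35) / 0.08) ** 2) + 0.6 * np.exp(-((t - 0.65) / 0.1) ** 2)

slow = signature(800)   # vehicle passing slowly -> long signature
fast = signature(200)   # same vehicle at 4x speed -> short signature

def spectral_feature(sig, k=5):
    # Magnitudes of the first k nonzero DFT coefficients, normalised by the
    # first: with the signature rescaled to its own duration, these shape
    # ratios are approximately independent of vehicle speed.
    mag = np.abs(np.fft.rfft(sig))[1:k + 1]
    return mag / mag[0]

err = np.max(np.abs(spectral_feature(slow) - spectral_feature(fast)))
```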

  10. 7 CFR 51.1860 - Color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Color classification. 51.1860 Section 51.1860... STANDARDS) United States Standards for Fresh Tomatoes 1 Color Classification § 51.1860 Color classification... illustrating the color classification requirements, as set forth in this section. This visual aid may be...

  11. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  12. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  13. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  14. Evaluating multimedia chemical persistence: Classification and regression tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    2000-04-01

    For the thousands of chemicals continuously released into the environment, it is desirable to make prospective assessments of those likely to be persistent. Widely distributed persistent chemicals are impossible to remove from the environment and remediation by natural processes may take decades, which is problematic if adverse health or ecological effects are discovered after prolonged release into the environment. A tiered approach using a classification scheme and a multimedia model for determining persistence is presented. Using specific criteria for persistence, a classification tree is developed to classify a chemical as persistent or nonpersistent based on the chemical properties. In this approach, the classification is derived from the results of a standardized unit world multimedia model. Thus, the classifications are more robust for multimedia pollutants than classifications using a single medium half-life. The method can be readily implemented and provides insight without requiring extensive and often unavailable data. This method can be used to classify chemicals when only a few properties are known and can be used to direct further data collection. Case studies are presented to demonstrate the advantages of the approach.

  15. Application of Convolutional Neural Network in Classification of High Resolution Agricultural Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.

    2017-09-01

    With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images has become significant for agricultural management and estimation. Due to the complexity and fragmentation of the features and surroundings at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. Therefore, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN with the MATLAB deep learning toolbox, the crop classification finally reached an accuracy of 99.66 % after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the application of CNN provides a reference for the field of remote sensing in PA.
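
    The CNN building block this record relies on (convolution, ReLU activation, max-pooling) can be sketched in NumPy; the tiny image and the Sobel kernel are toy stand-ins for GF-1 imagery and learned filters.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool(x, s=2):
    """Non-overlapping s-by-s max-pooling."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

img = np.zeros((8, 8))
img[:, 4] = 1.0                                      # a vertical "field boundary"
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)

act = np.maximum(conv2d(img, sobel_x), 0)            # convolution + ReLU
feat = maxpool(act)                                  # pooled feature map
```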

  16. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension

    PubMed Central

    Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.

    2009-01-01

    Objective The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one method as compared with the other. Conclusion Depending on which version of the German Working Group’s reference standards is used for interpretation of ABPM data, the classification of an individual as having hypertension or normal blood pressure may vary. PMID:19433980
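
    The standardisation being compared can be sketched as converting a reading into a standard deviation score (SDS) against reference values; the reference mean, SD, and cut-off below are invented placeholders, not the German Working Group tables.

```python
def sds(observed, ref_mean, ref_sd):
    """Standard deviation score: how many reference SDs above the reference mean."""
    return (observed - ref_mean) / ref_sd

# A child whose mean daytime systolic BP is 128 mmHg, against a hypothetical
# reference of 110 +/- 8 mmHg, is 2.25 SD above the mean and thus above a
# 95th-percentile cut-off (SDS ~ 1.645), classifying as hypertensive.
z = sds(128, 110, 8)
hypertensive = z > 1.645
```

Because different reference tables imply different `ref_mean`/`ref_sd`, the same reading can cross the cut-off under one standard but not another, which is exactly the ~5% reclassification effect the study reports.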

  17. Application of different classification methods for litho-fluid facies prediction: a case study from the offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia; Ciabarri, Fabio

    2017-10-01

    In this work we test four classification methods for litho-fluid facies identification in a clastic reservoir located in the offshore Nile Delta. The ultimate goal of this study is to find an optimal classification method for the area under examination. The geologic context of the investigated area allows us to consider three different facies in the classification: shales, brine sands and gas sands. The depth at which the reservoir zone is located (2300-2700 m) produces a significant overlap of the P- and S-wave impedances of brine sands and gas sands that makes discrimination between these two litho-fluid classes particularly problematic. The classification is performed on the feature space defined by the elastic properties that are derived from recorded reflection seismic data by means of amplitude versus angle Bayesian inversion. As classification methods we test both deterministic and probabilistic approaches: the quadratic discriminant analysis and the neural network methods belong to the first group, whereas the standard Bayesian approach and the Bayesian approach that includes a 1D Markov chain a priori model to constrain the vertical continuity of litho-fluid facies belong to the second group. The ability of each method to discriminate the different facies is evaluated both on synthetic seismic data (computed on the basis of available borehole information) and on field seismic data. The outcomes of each classification method are compared with the known facies profile derived from well log data and the goodness of the results is quantitatively evaluated using the so-called confusion matrix. The results show that all methods return vertical facies profiles in which the main reservoir zone is correctly identified. However, the consideration of as much prior information as possible in the classification process is the winning choice for deriving a reliable and physically plausible predicted facies profile.
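
    The confusion-matrix evaluation used above to score each classifier against the well-log facies profile can be sketched in a few lines; the three facies labels come from the study, while the sample sequences are fabricated:

```python
def confusion_matrix(true_labels, pred_labels, classes):
    """Rows = true class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, pred_labels):
        m[idx[t]][idx[p]] += 1
    return m

classes = ["shale", "brine sand", "gas sand"]
true_facies = ["shale", "shale", "brine sand", "gas sand", "gas sand"]
predicted = ["shale", "shale", "gas sand", "gas sand", "brine sand"]
cm = confusion_matrix(true_facies, predicted, classes)
accuracy = sum(cm[i][i] for i in range(len(classes))) / len(true_facies)
```

    Off-diagonal entries in the brine-sand and gas-sand rows expose exactly the impedance-overlap confusion the abstract describes.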

  18. 7 CFR 28.40 - Terms defined; cotton classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Terms defined; cotton classification. 28.40 Section 28... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Classification § 28.40 Terms defined; cotton classification. For the purposes of classification of any cotton or...

  19. 7 CFR 28.40 - Terms defined; cotton classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Terms defined; cotton classification. 28.40 Section 28... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Classification § 28.40 Terms defined; cotton classification. For the purposes of classification of any cotton or...

  20. 7 CFR 28.40 - Terms defined; cotton classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Terms defined; cotton classification. 28.40 Section 28... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Classification § 28.40 Terms defined; cotton classification. For the purposes of classification of any cotton or...

  1. 7 CFR 28.40 - Terms defined; cotton classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Terms defined; cotton classification. 28.40 Section 28... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Classification § 28.40 Terms defined; cotton classification. For the purposes of classification of any cotton or...

  2. 7 CFR 28.40 - Terms defined; cotton classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Terms defined; cotton classification. 28.40 Section 28... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Classification § 28.40 Terms defined; cotton classification. For the purposes of classification of any cotton or...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The first part covers standards for gaseous fuels. The second part covers standards on coal and coke including the classification of coals, determination of major elements in coal ash and trace elements in coal, metallurgical properties of coal and coke, methods of analysis of coal and coke, petrographic analysis of coal and coke, physical characteristics of coal, quality assurance and sampling.

  4. A thyroid nodule classification method based on TI-RADS

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yang, Yang; Peng, Bo; Chen, Qin

    2017-07-01

    The Thyroid Imaging Reporting and Data System (TI-RADS) is a valuable tool for differentiating benign from malignant thyroid nodules. In the clinic, doctors can use TI-RADS to determine the extent to which a nodule is benign or malignant in terms of different classes; the classification represents the degree of malignancy of the nodule. As a classification standard, TI-RADS can guide the ultrasound doctor to examine thyroid nodules more accurately and reliably. In this paper, we aim to classify thyroid nodules with the help of TI-RADS. To this end, four ultrasound signs, i.e., cystic versus solid composition, echo pattern, boundary feature and calcification of thyroid nodules, are extracted and converted into feature vectors. Then a semi-supervised fuzzy C-means ensemble (SS-FCME) model is applied to obtain the classification results. The experimental results demonstrate that the proposed method can help doctors diagnose thyroid nodules effectively.
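
    The first step, converting the four categorical ultrasound signs into a numeric feature vector, might look like the sketch below. The category orderings are hypothetical placeholders, not the paper's actual coding:

```python
# Hypothetical ordinal codings for each ultrasound sign (assumed for illustration).
SIGNS = {
    "composition": ["cystic", "mixed", "solid"],
    "echo": ["anechoic", "hyperechoic", "hypoechoic"],
    "boundary": ["well-defined", "ill-defined"],
    "calcification": ["none", "macro", "micro"],
}

def encode(nodule):
    """Map a nodule's four categorical signs to a numeric feature vector."""
    order = ("composition", "echo", "boundary", "calcification")
    return [SIGNS[k].index(nodule[k]) for k in order]

vec = encode({"composition": "solid", "echo": "hypoechoic",
              "boundary": "ill-defined", "calcification": "micro"})
```

    Vectors of this form are what a fuzzy C-means ensemble would then cluster into malignancy classes.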

  5. Machine learning in soil classification.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    In a number of engineering problems, e.g. in geotechnics and petroleum engineering, intervals of measured series data (signals) must be attributed a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case requires an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Classifiers employing Decision Trees, ANNs and Support Vector Machines are then built on the measured data and extracted features to assign classes to the segments. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained.
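
    The final step of such a pipeline, assigning a class to a segment from its feature vector, can be illustrated with a minimal nearest-neighbour classifier. The feature values and soil classes are invented placeholders, not the paper's CPT data:

```python
import math

def nearest_neighbour(train, query):
    """train: list of (feature_vector, label); return the label of the closest vector."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# Hypothetical (tip-resistance, friction-ratio) features per labelled segment.
segments = [((0.2, 1.1), "clay"), ((0.9, 0.3), "sand"), ((0.5, 0.8), "silt")]
label = nearest_neighbour(segments, (0.85, 0.25))
```

    The paper's Decision Tree, ANN and SVM classifiers replace this distance rule with learned decision boundaries, but consume the same segment-feature inputs.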

  6. Master standard data quantity food production code. Macro elements for synthesizing production labor time.

    PubMed

    Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C

    1978-06-01

    Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
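
    The predetermined-time idea reduces to summing Time Measurement Units over the coded elements of a procedure (1 TMU = 0.00001 hour = 0.036 s in MTM-based systems). The element codes and TMU values below are hypothetical, not entries from the MSD Quantity Food Production Code:

```python
# Hypothetical macro-element table: code -> TMU value (1 TMU = 0.00001 h).
MACRO_ELEMENTS = {"WASH": 1200, "CHOP": 3500, "MIX": 2800, "PAN": 900}

def production_time_hours(element_codes):
    """Predetermined production time for a coded preparation procedure."""
    return sum(MACRO_ELEMENTS[c] for c in element_codes) * 0.00001

hours = production_time_hours(["WASH", "CHOP", "MIX", "PAN"])
```

    Summations of this kind, per entree formula and averaged per classification, are what would let a foodservice manager compare menu mixes and schedule labor.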

  7. Unsupervised Wishart Classification of Wetlands in Newfoundland, Canada Using PolSAR Data Based on Fisher Linear Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Homayouni, S.

    2016-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a complex multi-dimensional dataset and an important source of information for various natural resource and environmental classification and monitoring applications. PolSAR imagery produces valuable information by observing scattering mechanisms from different natural and man-made objects. Land cover mapping using PolSAR data classification is one of the most important applications of SAR remote sensing earth observation, and it has gained increasing attention in recent years. However, one of the most challenging aspects of classification is selecting features with maximum discrimination capability. To address this challenge, a statistical approach based on Fisher Linear Discriminant Analysis (FLDA) and the incorporation of the physical interpretation of PolSAR data into classification is proposed in this paper. After pre-processing of the PolSAR data, including speckle reduction, the H/α classification is used to classify the basic scattering mechanisms. Then a new method for feature weighting, based on the fusion of FLDA and physical interpretation, is implemented. This method proves to increase the classification accuracy as well as the between-class discrimination in the final Wishart classification. The proposed method was applied to a full-polarimetric C-band RADARSAT-2 dataset from the Avalon area, Newfoundland and Labrador, Canada. This imagery was acquired in June 2015 and covers various types of wetlands, including bogs, fens, marshes and shallow water. The results were compared with the standard Wishart classification, and an improvement of about 20% in overall accuracy was achieved. This method provides an opportunity for operational wetland classification in northern latitudes with high accuracy using only polarimetric SAR data.
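
    FLDA-based feature weighting rests on the two-class Fisher criterion: between-class separation over within-class scatter. A single-feature sketch (the wetland classes are from the study; the sample values are fabricated):

```python
def fisher_ratio(class_a, class_b):
    """Fisher discriminant ratio (mean_a - mean_b)^2 / (var_a + var_b) for one feature."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# Hypothetical values of one polarimetric feature for samples of two classes.
bog = [0.1, 0.2, 0.3]
marsh = [0.8, 0.9, 1.0]
weight = fisher_ratio(bog, marsh)  # large ratio -> feature discriminates well
```

    Features with large ratios get large weights, which is how the method increases between-class discrimination before the final Wishart classification.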

  8. Automated source classification of new transient sources

    NASA Astrophysics Data System (ADS)

    Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.

    2017-10-01

    The EXTraS project harvests the hitherto unexplored temporal domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients, missed by standard image analysis, and a search and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode) which are then tested on a sample of known source classes and used for classification.

  9. Development and initial validation of the Classification of Early-Onset Scoliosis (C-EOS).

    PubMed

    Williams, Brendan A; Matsumoto, Hiroko; McCalla, Daren J; Akbarnia, Behrooz A; Blakemore, Laurel C; Betz, Randal R; Flynn, John M; Johnston, Charles E; McCarthy, Richard E; Roye, David P; Skaggs, David L; Smith, John T; Snyder, Brian D; Sponseller, Paul D; Sturm, Peter F; Thompson, George H; Yazici, Muharrem; Vitale, Michael G

    2014-08-20

    Early-onset scoliosis is a heterogeneous condition, with highly variable manifestations and natural history. No standardized classification system exists to describe and group patients, to guide optimal care, or to prognosticate outcomes within this population. A classification system for early-onset scoliosis is thus a necessary prerequisite to the timely evolution of care of these patients. Fifteen experienced surgeons participated in a nominal group technique designed to achieve a consensus-based classification system for early-onset scoliosis. A comprehensive list of factors important in managing early-onset scoliosis was generated using a standardized literature review, semi-structured interviews, and open forum discussion. Three group meetings and two rounds of surveying guided the selection of classification components, subgroupings, and cut-points. Initial validation of the system was conducted using an interobserver reliability assessment based on the classification of a series of thirty cases. Nominal group technique was used to identify three core variables (major curve angle, etiology, and kyphosis) with high group content validity scores. Age and curve progression ranked slightly lower. Participants evaluated the cases of thirty patients with early-onset scoliosis for reliability testing. The mean kappa value for etiology (0.64) was substantial, while the mean kappa values for major curve angle (0.95) and kyphosis (0.93) indicated almost perfect agreement. The final classification consisted of a continuous age prefix, etiology (congenital or structural, neuromuscular, syndromic, and idiopathic), major curve angle (1, 2, 3, or 4), and kyphosis (-, N, or +) variables, and an optional progression modifier (P0, P1, or P2). 
Utilizing formal consensus-building methods in a large group of surgeons experienced in treating early-onset scoliosis, a novel classification system for early-onset scoliosis was developed with all core components demonstrating substantial to excellent interobserver reliability. This classification system will serve as a foundation to guide ongoing research efforts and standardize communication in the clinical setting. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
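
    The final classification is compositional, so a patient's code can be assembled mechanically from its components. The exact prefix order and separators below are an assumption for illustration, since the abstract does not specify the notation:

```python
def c_eos_code(age_years, etiology, curve_group, kyphosis, progression=None):
    """Compose an early-onset scoliosis classification from its components.

    Etiology letters are assumed abbreviations: C congenital/structural,
    M neuromuscular, S syndromic, I idiopathic.
    """
    assert etiology in {"C", "M", "S", "I"}
    assert curve_group in {1, 2, 3, 4}
    assert kyphosis in {"-", "N", "+"}
    code = f"{age_years}{etiology}{curve_group}{kyphosis}"
    if progression is not None:
        assert progression in {"P0", "P1", "P2"}  # optional progression modifier
        code += progression
    return code

example = c_eos_code(3, "I", 2, "N", "P1")
```

    Encoding the components this way also makes the interobserver-reliability comparison straightforward: two raters agree on a component exactly when the corresponding piece of the code matches.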

  10. 77 FR 53224 - Coastal and Marine Ecological Classification Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... DEPARTMENT OF THE INTERIOR Geological Survey [USGS-GX12EE000101000] Coastal and Marine Ecological... of coastal and marine ecological classification standard. SUMMARY: The Federal Geographic Data Committee (FGDC) has endorsed the Coastal and Marine Ecological Classification Standard (CMECS) as the first...

  11. Comparison of several chemometric methods of libraries and classifiers for the analysis of expired drugs based on Raman spectra.

    PubMed

    Gao, Qun; Liu, Yan; Li, Hao; Chen, Hui; Chai, Yifeng; Lu, Feng

    2014-06-01

    Some expired drugs are difficult to detect by conventional means. If they are repackaged and sold back into the market, they constitute a new public health challenge. For the detection of repackaged expired drugs within specification, a paracetamol tablet from one manufacturer was used as a model drug in this study to compare Raman spectra-based library verification and classification methods. Raman spectra of different batches of paracetamol tablets were collected, and a library of standard spectra of unexpired batches was established. The Raman spectrum of each sample was matched against the standard spectrum by cosine and correlation measures, and the average hit quality index (HQI) between the suspicious samples and the standard spectrum was calculated. The optimum threshold values were 0.997 and 0.998, respectively, as determined by ROC analysis and four evaluations, for which the accuracy was up to 97%. Three supervised classifiers, PLS-DA, SVM and k-NN, were then chosen to establish and compare two-class classification models separating expired batches from an unexpired batch and to predict the suspect samples; their average accuracies were 90.12%, 96.80% and 89.37%, respectively. Among the pre-processing techniques tried, the first derivative was optimal for the library methods and max-min normalization was optimal for the classifiers. The results indicated that both library and classifier methods can detect expired drugs effectively, and that they should be used complementarily in fast screening. Copyright © 2014 Elsevier B.V. All rights reserved.
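
    A cosine-based hit quality index between a sample spectrum and a library standard can be sketched as below (toy intensity vectors, not Raman data; the 0.997 threshold is the library-method value reported above):

```python
import math

def cosine_hqi(spectrum, reference):
    """Cosine similarity between two spectra; 1.0 means identical spectral shape."""
    dot = sum(a * b for a, b in zip(spectrum, reference))
    norm = (math.sqrt(sum(a * a for a in spectrum))
            * math.sqrt(sum(b * b for b in reference)))
    return dot / norm

reference = [1.0, 2.0, 3.0]
sample = [2.0, 4.0, 6.0]        # same spectral shape, double intensity
hqi = cosine_hqi(sample, reference)
is_match = hqi >= 0.997         # threshold from the study
```

    Because cosine similarity is scale-invariant, an intensity change alone does not lower the HQI; only a change in spectral shape, such as degradation products in an expired tablet, pushes it below the threshold.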

  12. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source, multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we propose a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we apply data dictionary technology to achieve structured, standard data storage. The overall purpose of this work is to set up a common data platform that provides an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints.
Three levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and a comprehensive data dictionary manages system operation and security. (3) An extension of the system data management functions based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent term use across fields. The model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
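
    The constraint-input role of a term and code dictionary amounts to validating free-text field values against a controlled vocabulary. A minimal sketch (the terms and codes are invented, not GB 9649-88 entries):

```python
# Hypothetical term-and-code dictionary entries (invented for illustration).
TERM_CODES = {"silty clay": "SC01", "fine sand": "FS02", "bedrock": "BR03"}

def validate_lithology(term):
    """Return the standard code for a term, or raise if the term is nonstandard."""
    try:
        return TERM_CODES[term]
    except KeyError:
        raise ValueError(f"nonstandard term: {term!r}")

code = validate_lithology("fine sand")
```

    Rejecting nonstandard terms at input time is what prevents the duplicate and ambiguous definitions that the harmonization effort targets.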

  13. Automated analysis of food-borne pathogens using a novel microbial cell culture, sensing and classification system.

    PubMed

    Xiang, Kun; Li, Yinglei; Ford, William; Land, Walker; Schaffer, J David; Congdon, Robert; Zhang, Jing; Sadik, Omowunmi

    2016-02-21

    We hereby report the design and implementation of an Autonomous Microbial Cell Culture and Classification (AMC(3)) system for rapid detection of food pathogens. Traditional food testing methods require multistep procedures and long incubation periods, and are thus prone to human error. AMC(3) introduces a "one-click" approach to the detection and classification of pathogenic bacteria: once the cultured materials are prepared, all operations are automatic. AMC(3) is an integrated sensor array platform in a microbial fuel cell system composed of a multi-potentiostat, an automated data collection system (a Python program with a Yocto Maxi-coupler electromechanical relay module) and a powerful classification program. The classification scheme consists of a Probabilistic Neural Network (PNN), Support Vector Machines (SVM) and a General Regression Neural Network (GRNN) oracle-based system. Differential Pulse Voltammetry (DPV) is performed on standard or unknown samples; then, using preset feature extraction and quality control, the accepted data are analyzed by the intelligent classification system. In a typical use, thirty-two extracted features were analyzed to correctly classify the following pathogens: Escherichia coli ATCC#25922, Escherichia coli ATCC#11775, and Staphylococcus epidermidis ATCC#12228. An accuracy of 85.4% was recorded for unknown samples, within a shorter time period than the industry standard of 24 hours.

  14. 77 FR 60475 - Draft of SWGDOC Standard Classification of Typewritten Text

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-03

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1607] Draft of SWGDOC Standard Classification of Typewritten Text AGENCY: National Institute of Justice, DOJ. ACTION: Notice and..., ``SWGDOC Standard Classification of Typewritten Text''. The opportunity to provide comments on this...

  15. Measurement properties of gingival biotype evaluation methods.

    PubMed

    Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga

    2018-06-01

    There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method over another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating gingival thickness were used: transparency of the periodontal probe, the transgingival method, photography, and a new CTS method. Intrarater and interrater reliability for the categorical classification of gingival biotype were estimated with Cohen's kappa coefficient, the intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients underwent assessment of gingival thickness. The highest agreement was found between the transgingival method and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for the definition of a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful for diagnosing a thick biotype. © 2018 Wiley Periodicals, Inc.
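
    Sensitivity and specificity against a reference standard reduce to counts from a 2x2 table. A minimal sketch (the paired outcomes below are illustrative, not the study's twelve patients):

```python
def sens_spec(results):
    """results: list of (test_positive, reference_positive) boolean pairs."""
    tp = sum(t and r for t, r in results)
    fn = sum((not t) and r for t, r in results)
    tn = sum((not t) and (not r) for t, r in results)
    fp = sum(t and (not r) for t, r in results)
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives detected
    return sensitivity, specificity

# (CTS calls the biotype thin?, transgingival reference calls it thin?)
pairs = [(True, True), (False, True), (True, True), (False, False),
         (False, False), (False, False), (True, False), (False, True)]
sensitivity, specificity = sens_spec(pairs)
```

    The study's pattern of high specificity with low sensitivity means CTS rarely calls a thick biotype thin, but misses many genuinely thin ones.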

  16. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    NASA Astrophysics Data System (ADS)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

    Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias and human error, and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. 
The automated approach presented here for the classification of ARD potential offers rapid, repeatable and accurate outcomes comparable to manually derived classifications. Methods for automated ARD classifications from digital drill core data represent a step-change for geoenvironmental management practices in the mining industry.
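
    Structurally, the ARD Index combines the five indicator scores (A-E) into a single potential class. The scoring scale, weighting and band thresholds below are invented placeholders (the abstract does not reproduce Parbhakar-Fox et al.'s actual scheme), shown only to illustrate the rule-based combination:

```python
def ard_potential(indicators):
    """indicators: dict mapping 'A'-'E' to a 0-10 score; band the sum into a class.

    Equal weighting and the 20/35 cut-points are assumptions for illustration.
    """
    total = sum(indicators[k] for k in "ABCDE")
    if total >= 35:
        return "high"
    if total >= 20:
        return "moderate"
    return "low"

sample = {"A": 8, "B": 6, "C": 7, "D": 2, "E": 5}
potential = ard_potential(sample)
```

    In the automated pipeline, each indicator score would itself be derived from the Corescan image products (sulphide abundance, texture indices, associated minerals) rather than entered by hand.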

  17. Classification of Brazilian and foreign gasolines adulterated with alcohol using infrared spectroscopy.

    PubMed

    da Silva, Neirivaldo C; Pimentel, Maria Fernanda; Honorato, Ricardo S; Talhavini, Marcio; Maldaner, Adriano O; Honorato, Fernanda A

    2015-08-01

    The smuggling of products across the border regions of many countries is a practice to be fought. Brazilian authorities are increasingly worried about the illicit trade of fuels along the frontiers of the country. In order to confirm this as a crime, the Federal Police must have a means of identifying the origin of the fuel. This work describes the development of a rapid and nondestructive methodology to classify gasoline as to its origin (Brazil, Venezuela and Peru), using infrared spectroscopy and multivariate classification. Partial Least Squares Discriminant Analysis (PLS-DA) and Soft Independent Modeling of Class Analogy (SIMCA) models were built. Direct standardization (DS) was employed to standardize the spectra obtained in different laboratories of the border units of the Federal Police. Two approaches were considered in this work: (1) local and (2) global classification models. When using Approach 1, the PLS-DA achieved 100% correct classification, and the deviation of the predicted values for the secondary instrument considerably decreased after performing DS. In this case, SIMCA models were not efficient in the classification, even after standardization. Using a global model (Approach 2), both PLS-DA and SIMCA techniques were effective after performing DS. Considering that real situations may involve questioned samples from other nations (such as Peru), the SIMCA method developed according to Approach 2 is more adequate, since the sample will be classified as neither Brazilian nor Venezuelan. This methodology could be applied to other forensic problems involving the chemical classification of a product, provided that specific modeling is performed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Estimating local scaling properties for the classification of interstitial lung disease patterns

    NASA Astrophysics Data System (ADS)

    Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing', which are considered indicative of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung-kernel-reconstructed images was acquired from HRCT chest exams. 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with the Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test, with the Bonferroni correction, was used to compare pairs of accuracy distributions. The best classification results were obtained with the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers, with the highest accuracy (94.1% and 93.7% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases compared to standard texture analysis methods.
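
    The GLCM 'homogeneity' feature, the best of the standard descriptors above, is computed from a normalized co-occurrence matrix as the sum of p(i,j) / (1 + |i - j|). A sketch over a tiny toy matrix (not an HRCT-derived GLCM):

```python
def glcm_homogeneity(glcm):
    """Homogeneity of a gray-level co-occurrence matrix (normalized internally)."""
    total = sum(sum(row) for row in glcm)
    h = 0.0
    for i, row in enumerate(glcm):
        for j, count in enumerate(row):
            # Near-diagonal pairs (similar gray levels) contribute most.
            h += (count / total) / (1 + abs(i - j))
    return h

glcm = [[4, 2],
        [2, 4]]          # co-occurrence counts for a 2-gray-level toy image
homogeneity = glcm_homogeneity(glcm)
```

    Homogeneity approaches 1 for smooth tissue (mass concentrated on the diagonal) and drops for the coarse, cystic texture of honeycombing.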

  19. 46 CFR 108.109 - Classification society standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Classification society standards. 108.109 Section 108.109 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS DESIGN AND EQUIPMENT General § 108.109 Classification society standards. (a) Any person who desires to...

  20. 46 CFR 108.109 - Classification society standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Classification society standards. 108.109 Section 108.109 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS DESIGN AND EQUIPMENT General § 108.109 Classification society standards. (a) Any person who desires to...

  1. 46 CFR 108.109 - Classification society standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Classification society standards. 108.109 Section 108.109 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS DESIGN AND EQUIPMENT General § 108.109 Classification society standards. (a) Any person who desires to...

  2. 46 CFR 108.109 - Classification society standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Classification society standards. 108.109 Section 108.109 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS DESIGN AND EQUIPMENT General § 108.109 Classification society standards. (a) Any person who desires to...

  3. 46 CFR 108.109 - Classification society standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Classification society standards. 108.109 Section 108.109 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS DESIGN AND EQUIPMENT General § 108.109 Classification society standards. (a) Any person who desires to...

  4. 78 FR 9055 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the..., Medical Systems Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo...

  5. 75 FR 68608 - Information Collection; Request for Authorization of Additional Classification and Rate, Standard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Authorization of Additional Classification and Rate, Standard Form 1444 AGENCY: Department of Defense (DOD... of Additional Classification and Rate, Standard Form 1444. DATES: Comments may be submitted on or.../or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Mr. Ernest Woodson...

  6. A new IRT-based standard setting method: application to eCat-listening.

    PubMed

    García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David

    2013-01-01

    Criterion-referenced interpretation of tests is highly necessary but usually involves the difficult task of establishing cut scores. In contrast to other Item Response Theory (IRT)-based standard setting methods, this study proposes a non-judgmental approach in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of English language proficiency, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretation.
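
    The core ICC machinery behind such IRT-based standard setting can be illustrated with a two-parameter logistic (2PL) curve: choosing a mastery probability and inverting the ICC yields an ability cut score on the theta scale. This is a hedged sketch, not the paper's method; the item parameters and the 0.70 mastery probability are invented.

```python
# Hedged illustration: inverting a two-parameter-logistic (2PL) item
# characteristic curve to obtain a theta cut score.
# Discrimination a, difficulty b, and mastery level are invented values.
import math

def icc(theta, a, b):
    """2PL ICC: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def theta_cut(p, a, b):
    """Ability at which the ICC reaches probability p (inverse of icc)."""
    return b + math.log(p / (1.0 - p)) / a

a, b = 1.2, 0.5              # assumed item parameters
cut = theta_cut(0.7, a, b)   # ability where P(correct) = 0.70
print(round(cut, 3), round(icc(cut, a, b), 3))
```

    Examinees with estimated theta above `cut` would be placed in the higher performance standard; repeating this per CEFR boundary produces the full set of cut scores.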

  7. The Value of Ensari’s Proposal in Evaluating the Mucosal Pathology of Childhood Celiac Disease: Old Classification versus New Version

    PubMed Central

    Güreşci, Servet; Hızlı, Şamil; Şimşek, Gülçin Güler

    2012-01-01

    Objective: Small intestinal biopsy remains the gold standard in diagnosing celiac disease (CD); however, the wide spectrum of histopathological states and differential diagnosis of CD is still a diagnostic problem for pathologists. Recently, Ensari reviewed the literature and proposed an update of the histopathological diagnosis and classification for CD. Materials and Methods: In this study, the histopathological materials of 54 children in whom CD was diagnosed at our hospital were reviewed to compare the previous Marsh and Modified Marsh-Oberhuber classifications with this new proposal. Results: In this study, we show that the Ensari classification is as accurate as the Marsh and Modified Marsh classifications in describing the consecutive states of mucosal damage seen in CD. Conclusions: Ensari’s classification is simple, practical and facilitative in diagnosing and subtyping of mucosal pathology of CD. PMID:25207015

  8. 7 CFR 51.1904 - Maturity classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Maturity classification. 51.1904 Section 51.1904... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1904 Maturity classification. Tomatoes which are characteristically red when ripe, but are not overripe or soft...

  9. 7 CFR 51.1860 - Color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Color classification. 51.1860 Section 51.1860... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Fresh Tomatoes 1 Color Classification § 51.1860 Color classification. (a) The following terms may be used, when specified in connection with...

  10. 7 CFR 51.1860 - Color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Color classification. 51.1860 Section 51.1860... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Fresh Tomatoes 1 Color Classification § 51.1860 Color classification. (a) The following terms may be used, when specified in connection with...

  11. 7 CFR 51.1904 - Maturity classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Maturity classification. 51.1904 Section 51.1904... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1904 Maturity classification. Tomatoes which are characteristically red when ripe, but are not overripe or soft...

  12. 7 CFR 51.1904 - Maturity classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Maturity classification. 51.1904 Section 51.1904... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1904 Maturity classification. Tomatoes which are characteristically red when ripe, but are not overripe or soft...

  13. 7 CFR 51.2281 - Color classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Color classifications. 51.2281 Section 51.2281... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Color Requirements § 51.2281 Color classifications. The following classifications are provided to describe the color of any lot...

  14. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects it onto an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2,000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
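
    The linear per-pixel classification plus block-level defect percentage described above can be sketched as follows. This is a hedged toy, not the patented near-sensor method: the three-band class weight vectors and pixel spectra are invented, and a real system would evaluate the dot products in analogue hardware over many more spectral bands.

```python
# Hedged sketch of per-pixel linear spectral classification: each pixel's
# spectrum is scored against per-class weight vectors (all values invented)
# and the block-level pixel defect percentage is reported.

CLASSES = {                      # assumed 3-band weight vectors per class
    "sound":      [0.2, 0.5, 0.3],
    "blue_stain": [0.7, 0.1, 0.2],
    "red_decay":  [0.1, 0.2, 0.7],
}

def classify_pixel(spectrum):
    """Linear model: argmax over class-weight . spectrum dot products."""
    scores = {c: sum(w * s for w, s in zip(ws, spectrum))
              for c, ws in CLASSES.items()}
    return max(scores, key=scores.get)

def defect_percentage(line_of_pixels):
    """Share of pixels not classified as correct surface colour."""
    labels = [classify_pixel(p) for p in line_of_pixels]
    return 100.0 * sum(lab != "sound" for lab in labels) / len(labels)

line = [[0.1, 0.9, 0.2], [0.9, 0.1, 0.1], [0.2, 0.8, 0.3], [0.1, 0.1, 0.9]]
print(defect_percentage(line))  # two of four toy pixels are defects: 50.0
```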

  15. 75 FR 39265 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Prevention, Classifications and Public Health Data Standards, 3311 Toledo Road, Room 2337, Hyattsville, MD...

  16. 78 FR 53148 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337...

  17. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Programs 219.303 Determining North American Industry Classification System (NAICS) codes and size standards...

  18. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Programs 219.303 Determining North American Industry Classification System (NAICS) codes and size standards...

  19. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Determining North American Industry Classification System (NAICS) codes and size standards. Contracting...

  20. 48 CFR 219.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 219.303 Section 219.303 Federal... Determining North American Industry Classification System (NAICS) codes and size standards. Contracting...

  1. 48 CFR 19.303 - Determining North American Industry Classification System codes and size standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Industry Classification System codes and size standards. 19.303 Section 19.303 Federal Acquisition... of Small Business Status for Small Business Programs 19.303 Determining North American Industry... North American Industry Classification System (NAICS) code and related small business size standard and...

  2. Meta-learning framework applied in bioinformatics inference system design.

    PubMed

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow in which the user provides feedback on final classification decisions, which are stored together with the analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameter settings. The resulting inference systems were also compared with other standard classification methods and showed accurate prediction capabilities.

  3. Agricultural Land Cover from Multitemporal C-Band SAR Data

    NASA Astrophysics Data System (ADS)

    Skriver, H.

    2013-12-01

    Henning Skriver, DTU Space, Technical University of Denmark, Ørsteds Plads, Building 348, DK-2800 Lyngby, e-mail: hs@space.dtu.dk

    Problem description: This paper focuses on land cover classification from SAR data using high-revisit acquisitions, including single-polarisation, dual-polarisation, and fully polarimetric data at C-band. The data set was acquired during an ESA-supported campaign, AgriSAR09, with the Radarsat-2 system. Ground surveys to obtain detailed land cover maps were performed during the campaign. Classification methods using single- and dual-polarisation data, and fully polarimetric data, are used with multitemporal data with short revisit time. Results for airborne campaigns have previously been reported in Skriver et al. (2011) and Skriver (2012). In this paper, the short-revisit satellite SAR data are used to assess the trade-off between fully polarimetric SAR data and single- or dual-polarisation SAR data. This is particularly important in relation to the future GMES Sentinel-1 SAR satellites, where two satellites with a relatively wide swath will ensure a short revisit time globally. The questions addressed are: what accuracy can we expect from a mission like Sentinel-1, what is the improvement of using polarimetric SAR compared to single- or dual-polarisation SAR, and what is the optimum number of acquisitions needed?

    Methodology: The data have a sufficient number of looks for the Gaussian assumption to be valid for the backscatter coefficients of the individual polarizations. The classification method used for these data is therefore the standard Bayesian classification method for multivariate Gaussian statistics. For the fully polarimetric cases, two classification methods have been applied: the standard ML Wishart classifier, and a method based on a reversible transform of the covariance matrix into backscatter intensities. The following pre-processing steps were performed on both data sets: the scattering matrix data, in the form of SLC products, were coregistered, converted to covariance matrix format, and multilooked to a specific equivalent number of looks.

    Results: Multitemporal data significantly improve the classification results; single-acquisition data cannot provide the necessary classification performance. Multitemporal data are especially important for the single- and dual-polarisation data, but less important for the fully polarimetric data. The satellite data set produces realistic classification results based on about 2,000 fields. The best classification results for the single-polarised mode give classification errors in the mid-twenties. Using the dual-polarised mode reduces the classification error by about 5 percentage points, whereas the polarimetric mode reduces it by about 10 percentage points. These results show that it will be possible to obtain reasonable results with relatively simple systems with short revisit times, and that systems like the Sentinel-1 mission will be able to produce fairly good results for global land cover classification.

    References: Skriver, H. et al., 2011, 'Crop Classification using Short-Revisit Multitemporal SAR Data', IEEE J. Sel. Topics in Appl. Earth Obs. Rem. Sens., vol. 4, pp. 423-431. Skriver, H., 2012, 'Crop classification by multitemporal C- and L-band single- and dual-polarization and fully polarimetric SAR', IEEE Trans. Geosc. Rem. Sens., vol. 50, pp. 2138-2149.
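
    The standard maximum-likelihood classifier for Gaussian-distributed backscatter mentioned in this record can be sketched in miniature. This is a hedged toy under simplifying assumptions: a diagonal covariance (independent channels) rather than the full multivariate Gaussian or Wishart case, and invented backscatter values for two crop classes over two acquisition dates.

```python
# Hedged sketch of ML classification for Gaussian-distributed backscatter:
# per class, fit per-channel mean/variance (channels = acquisition dates),
# then assign each field to the class with the highest log-likelihood.
# A diagonal covariance is assumed; all training values are invented.
import math

def fit(samples):
    """Per-channel mean and variance for one class."""
    n, dims = len(samples), len(samples[0])
    mu = [sum(s[d] for s in samples) / n for d in range(dims)]
    var = [sum((s[d] - mu[d]) ** 2 for s in samples) / n for d in range(dims)]
    return mu, var

def log_likelihood(x, mu, var):
    """Log-likelihood of x under an independent-channel Gaussian model."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mu, var))

def classify(x, models):
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

# Two crops, two acquisition dates (backscatter in dB, invented):
models = {
    "wheat": fit([[-12.0, -8.0], [-11.5, -8.5], [-12.5, -7.5]]),
    "beet":  fit([[-7.0, -13.0], [-6.5, -12.5], [-7.5, -13.5]]),
}
print(classify([-12.2, -8.1], models))  # nearest to the wheat model
```

    Adding more acquisition dates simply extends the feature vector, which is one way to see why the multitemporal stack improves class separation.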

  4. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    PubMed Central

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
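
    The GATS idea of monitoring convergence and reacting with stronger mutation can be sketched with a plain genetic algorithm. This is a hedged toy, not the authors' algorithm: the bit-mask fitness function, prematurity index, threshold, and mutation rates are all invented, and the tabu-search move on high-fitness individuals is omitted for brevity.

```python
# Hedged toy of feature selection by a genetic algorithm with a
# prematurity index: when the population mean fitness approaches the best
# fitness (convergence), the mutation rate is boosted to restore diversity.
# Fitness, thresholds, and rates are invented for illustration.
import random

random.seed(0)
N_FEATURES = 8
USEFUL = {0, 2, 5}            # features the toy fitness rewards

def fitness(mask):
    """Reward selecting useful features, penalise subset size."""
    return sum(mask[i] for i in USEFUL) - 0.1 * sum(mask)

def mutate(mask, rate):
    return [1 - b if random.random() < rate else b for b in mask]

def ga(generations=40, pop_size=20):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        best = fitness(scored[0])
        mean = sum(map(fitness, pop)) / pop_size
        premature = best > 0 and mean / best > 0.95   # prematurity index
        rate = 0.30 if premature else 0.05            # boost if premature
        elite = scored[: pop_size // 2]               # keep best half
        pop = elite + [mutate(random.choice(elite), rate)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = ga()
print([i for i, b in enumerate(best) if b])  # selected feature indices
```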

  5. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension.

    PubMed

    Jones, Deborah P; Richey, Phyllis A; Alpert, Bruce S

    2009-06-01

    The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one method than by the other. Depending on which version of the German Working Group's reference standards is used for the interpretation of ABPM data, the classification of an individual as having hypertension or normal blood pressure may vary.
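
    The kappa statistic used to quantify agreement between the two classification methods can be computed as follows. This is a hedged illustration with invented counts, not the study's data: 20 hypothetical children, one of whom the two reference standards classify differently.

```python
# Hedged example: Cohen's kappa for agreement between two classification
# methods. kappa = (p_obs - p_exp) / (1 - p_exp), where p_exp is the
# agreement expected by chance from each method's marginal frequencies.
# The labels below are invented, not the study's data.
def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# 20 children classified by two reference standards; one disagreement.
method_1 = ["HTN"] * 5 + ["normal"] * 15
method_2 = ["HTN"] * 4 + ["normal"] * 16
print(round(cohens_kappa(method_1, method_2), 3))  # 0.857
```

    Here 19/20 observed agreements against 0.65 chance agreement give kappa = 0.30/0.35 = 0.857, conventionally read as excellent agreement.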

  6. 7 CFR 51.2559 - Size classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...

  7. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  8. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  9. 7 CFR 51.2559 - Size classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...

  10. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  11. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to that of manually drawn maps. Statistical evaluation indicates that most classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060

  12. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to that of manually drawn maps. Statistical evaluation indicates that most classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download.
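
    The two-parameter partition rule described in this record — thresholds on mean elevation and on the standard deviation of elevation — can be sketched for a single segmentation object. This is a hedged toy: the threshold values, sub-domain names, and object statistics are invented, not the paper's.

```python
# Hedged sketch of the paper's two-parameter rule: partition segmentation
# objects into sub-domains by thresholding mean elevation and the standard
# deviation of elevation. Thresholds, labels, and data are invented.
import statistics

def classify_object(elevations, mean_thresh=1000.0, sd_thresh=50.0):
    """Four sub-domains from two thresholds on mean and std of elevation."""
    mu = statistics.mean(elevations)
    sd = statistics.pstdev(elevations)     # population std of the object
    relief = "high-relief" if sd > sd_thresh else "low-relief"
    height = "highlands" if mu > mean_thresh else "lowlands"
    return f"{relief} {height}"

plain    = [120, 125, 118, 122]        # flat, low terrain (metres)
mountain = [1800, 1950, 1700, 2100]    # rugged, high terrain (metres)
print(classify_object(plain), "|", classify_object(mountain))
```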

  13. Classification of Ancient Mammal Individuals Using Dental Pulp MALDI-TOF MS Peptide Profiling

    PubMed Central

    Tran, Thi-Nguyen-Ny; Aboudharam, Gérard; Gardeisen, Armelle; Davoust, Bernard; Bocquet-Appel, Jean-Pierre; Flaudrops, Christophe; Belghazi, Maya; Raoult, Didier; Drancourt, Michel

    2011-01-01

    Background The classification of ancient animal corpses at the species level remains a challenging task for forensic scientists and anthropologists. Severe damage and mixed, tiny pieces originating from several skeletons may render morphological classification virtually impossible. Standard approaches are based on sequencing mitochondrial and nuclear targets. Methodology/Principal Findings We present a method that can accurately classify mammalian species using dental pulp and mass spectrometry peptide profiling. Our work was organized into three successive steps. First, after extracting proteins from the dental pulp collected from 37 modern individuals representing 13 mammalian species, trypsin-digested peptides were used for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry analysis. The resulting peptide profiles accurately classified every individual at the species level, in agreement with the parallel cytochrome b gene sequencing gold standard. Second, using a database of 279 modern spectra, we blindly classified 33 of 37 teeth collected from 37 modern individuals (89.1%). Third, we classified 10 of 18 teeth (56%) collected from 15 ancient individuals representing five mammal species, including human, from five burial sites dating back 8,500 years. Further comparison with an upgraded database comprising ancient specimen profiles yielded 100% classification in ancient teeth. Peptide sequencing yielded 4 and 16 different non-keratin proteins, including collagen (alpha-1 type I and alpha-2 type I), in ancient and modern human dental pulp, respectively. Conclusions/Significance Mass spectrometry peptide profiling of the dental pulp is a new approach that can be added to the arsenal of species classification tools for forensics and anthropology as a complementary method to DNA sequencing. The dental pulp is a new source for collagen and other proteins for the species classification of modern and ancient mammal individuals. PMID:21364886
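
    Classification by peptide profile matching, as in the record above, amounts to comparing a query spectrum against reference profiles and taking the most similar one. This is a hedged toy: the binned peak intensities and the cosine-similarity measure are invented stand-ins, and real MALDI-TOF profiles have thousands of m/z bins.

```python
# Hedged toy of classification by mass-spectrum profile matching:
# spectra are binned peak-intensity vectors and a query is assigned the
# species of its most similar reference (cosine similarity).
# All peak values are invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def classify_spectrum(query, references):
    return max(references, key=lambda sp: cosine(query, references[sp]))

references = {                      # species -> binned peak intensities
    "Homo sapiens": [5, 0, 9, 1, 0, 4],
    "Bos taurus":   [0, 7, 1, 0, 8, 2],
}
print(classify_spectrum([4, 1, 8, 0, 1, 3], references))  # Homo sapiens
```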

  14. Comprehension and reproducibility of the Judet and Letournel classification

    PubMed Central

    Polesello, Giancarlo Cavalli; Nunes, Marcus Aurelius Araujo; Azuaga, Thiago Leonardi; de Queiroz, Marcelo Cavalheiro; Honda, Emerson Kyoshi; Ono, Nelson Keiske

    2012-01-01

    Objective To evaluate the effectiveness of the method of radiographic interpretation of acetabular fractures, according to the classification of Judet and Letournel, used by a group of residents of Orthopedics at a university hospital. Methods We selected ten orthopedic residents, who were divided into two groups; one group received training in a methodology for the classification of acetabular fractures, which involves transposing the radiographic images to a graphic two-dimensional representation. We classified fifty cases of acetabular fracture on two separate occasions, and determined the intraobserver and interobserver agreement. Result The success rate was 16.2% (10-26%) for the trained group and 22.8% (10-36%) for the untrained group. The mean kappa coefficients for interobserver and intraobserver agreement in the trained group were 0.08 and 0.12, respectively, and for the untrained group, 0.14 and 0.29. Conclusion Training in the method of radiographic interpretation of acetabular fractures was not effective for assisting in the classification of acetabular fractures. Level of evidence I, Testing of previously developed diagnostic criteria on consecutive patients (with universally applied reference "gold" standard). PMID:24453583

  15. Compensation for Asbestos-Related Diseases in Japan: Utilization of Standard Classifications of Industry and Occupations

    PubMed

    Sawanyawisuth, Kittisak; Furuya, Sugio; Park, Eun-Kee; Myong, Jun-Pyo; Ramos-Bonilla, Juan Pablo; Chimed Ochir, Odgerel; Takahashi, Ken

    2017-07-27

    Background: Asbestos-related diseases (ARD) are occupational hazards with high mortality rates. To identify asbestos exposure by previous occupation is the main issue for ARD compensation for workers. This study aimed to identify risk groups by applying standard classifications of industries and occupations to a national database of compensated ARD victims in Japan. Methods: We identified occupations that carry a risk of asbestos exposure according to the International Standard Industrial Classification of All Economic Activities (ISIC). ARD compensation data from Japan between 2006 and 2013 were retrieved. Each compensated worker was classified by job section and group according to the ISIC code. Risk ratios for compensation were calculated according to the percentage of workers compensated because of ARD in each ISIC category. Results: In total, there were 6,916 workers with ARD who received compensation in Japan between 2008 and 2013. ISIC classification section F (construction) had the highest compensated risk ratio of 6.3. Section C (manufacturing) and section F (construction) had the largest number of compensated workers (2,868 and 3,463, respectively). In the manufacturing section C, 9 out of 13 divisions had a risk ratio of more than 1. For ISIC divisions in the construction section, construction of buildings (division 41) had the highest number of workers registering claims (2,504). Conclusion: ISIC classification of occupations that are at risk of developing ARD can be used to identify the actual risk of workers’ compensation at the national level. Creative Commons Attribution License
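
    The compensated risk ratio described in this record — the share of compensated workers in an ISIC section relative to that section's share of the workforce — can be sketched as follows. The compensated counts reuse the record's published section totals, but the workforce denominators are invented, so the resulting ratios are illustrative only.

```python
# Hedged sketch of a risk-ratio calculation by ISIC section: the share of
# compensated ARD workers in a section divided by that section's share of
# the workforce. Workforce counts below are invented denominators.
def risk_ratios(compensated, workforce):
    total_c = sum(compensated.values())
    total_w = sum(workforce.values())
    return {s: (compensated[s] / total_c) / (workforce[s] / total_w)
            for s in compensated}

compensated = {"F construction": 3463, "C manufacturing": 2868,
               "other": 585}                      # from the record
workforce   = {"F construction": 5_000_000, "C manufacturing": 10_000_000,
               "other": 45_000_000}               # invented denominators
ratios = risk_ratios(compensated, workforce)
print({s: round(r, 2) for s, r in ratios.items()})
```

    A ratio above 1 marks a section over-represented among compensated workers; with these toy denominators, construction comes out around 6, in the same range as the record's reported 6.3.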

  16. 7 CFR 51.1903 - Size classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classification. 51.1903 Section 51.1903... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1903 Size classification. The following terms may be used for describing the size of the tomatoes in any lot...

  17. 7 CFR 51.1402 - Size classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classification. 51.1402 Section 51.1402... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Size Classification § 51.1402 Size classification. Size of pecans may be specified in connection with the grade in accordance with one of the...

  18. 7 CFR 27.45 - No storage of cotton for classification at disapproved place.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false No storage of cotton for classification at disapproved... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Cotton Class Certificates § 27.45 No storage of cotton for classification at disapproved place. No...

  19. 7 CFR 27.45 - No storage of cotton for classification at disapproved place.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false No storage of cotton for classification at disapproved... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Cotton Class Certificates § 27.45 No storage of cotton for classification at disapproved place. No...

  20. 7 CFR 28.177 - Request for classification and comparison of cotton.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Request for classification and comparison of cotton... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.177 Request for classification and comparison of cotton. The applicant shall make a separate...

  1. 7 CFR 28.177 - Request for classification and comparison of cotton.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Request for classification and comparison of cotton... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.177 Request for classification and comparison of cotton. The applicant shall make a separate...

  2. 7 CFR 27.45 - No storage of cotton for classification at disapproved place.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false No storage of cotton for classification at disapproved... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Cotton Class Certificates § 27.45 No storage of cotton for classification at disapproved place. No...

  3. 7 CFR 27.45 - No storage of cotton for classification at disapproved place.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false No storage of cotton for classification at disapproved... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Cotton Class Certificates § 27.45 No storage of cotton for classification at disapproved place. No...

  4. 7 CFR 27.45 - No storage of cotton for classification at disapproved place.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false No storage of cotton for classification at disapproved... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Cotton Class Certificates § 27.45 No storage of cotton for classification at disapproved place. No...

  5. 7 CFR 28.177 - Request for classification and comparison of cotton.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Request for classification and comparison of cotton... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.177 Request for classification and comparison of cotton. The applicant shall make a separate...

  6. 7 CFR 28.177 - Request for classification and comparison of cotton.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Request for classification and comparison of cotton... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.177 Request for classification and comparison of cotton. The applicant shall make a separate...

  7. 7 CFR 28.177 - Request for classification and comparison of cotton.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Request for classification and comparison of cotton... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.177 Request for classification and comparison of cotton. The applicant shall make a separate...

  8. 7 CFR 51.1402 - Size classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classification. 51.1402 Section 51.1402... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Size Classification § 51.1402 Size classification. Size of pecans may be specified in connection with the grade in accordance with one of the...

  9. 7 CFR 51.1903 - Size classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classification. 51.1903 Section 51.1903... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1903 Size classification. The following terms may be used for describing the size of the tomatoes in any lot...

  10. 7 CFR 51.1402 - Size classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classification. 51.1402 Section 51.1402... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Size Classification § 51.1402 Size classification. Size of pecans may be specified in connection with the grade in accordance with one of the...

  11. 7 CFR 51.1903 - Size classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classification. 51.1903 Section 51.1903... STANDARDS) United States Consumer Standards for Fresh Tomatoes Size and Maturity Classification § 51.1903 Size classification. The following terms may be used for describing the size of the tomatoes in any lot...

  12. 7 CFR 27.64 - Application for review of classification and for Micronaire determination; filing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Application for review of classification and for... AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Reviews and Micronaire Determinations § 27.64 Application for review of...

  13. 7 CFR 27.64 - Application for review of classification and for Micronaire determination; filing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Application for review of classification and for... AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Reviews and Micronaire Determinations § 27.64 Application for review of...

  14. Speech Enhancement based on the Dominant Classification Between Speech and Noise Using Feature Data in Spectrogram of Observation Signal

    NASA Astrophysics Data System (ADS)

    Nomura, Yukihiro; Lu, Jianming; Sekiya, Hiroo; Yahagi, Takashi

    This paper presents a speech enhancement method based on classifying each frequency band as speech-dominant or noise-dominant. A new classification scheme between the dominants of speech and noise is proposed, which uses the standard deviation of the observed signal's spectrum in each band. We introduce two oversubtraction factors, one for speech-dominant and one for noise-dominant bands, and carry out spectral subtraction after the classification. The proposed method is tested on several noise types from the Noisex-92 database. Evaluations of segmental SNR, the Itakura-Saito distance measure, inspection of spectrograms, and listening tests show that the proposed system is effective in reducing background noise. Moreover, the speech enhanced by our system contains less musical noise and distortion than that of conventional systems.
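
    The band-wise dominance decision and subtraction described above can be sketched as follows; the threshold, oversubtraction factors, and spectral floor are illustrative assumptions, not the authors' settings:

```python
import statistics

def enhance_band(obs_mag, noise_mag, std_threshold=1.0,
                 alpha_speech=1.5, alpha_noise=3.0, floor=0.01):
    """Band-wise spectral subtraction with dominance-dependent
    oversubtraction. A band whose observed magnitude spectrum has a
    standard deviation above std_threshold is treated as speech-dominant
    and gets the milder factor; otherwise the stronger noise-dominant
    factor is applied. All parameter names and values are illustrative."""
    speech_dominant = statistics.pstdev(obs_mag) > std_threshold
    alpha = alpha_speech if speech_dominant else alpha_noise
    # Subtract the scaled noise estimate, clamped to a spectral floor
    # to limit musical noise.
    enhanced = [max(o - alpha * n, floor * o) for o, n in zip(obs_mag, noise_mag)]
    return speech_dominant, enhanced
```

    A spectrally peaky band (large spread) is treated as speech and subtracted gently; a flat band is treated as noise and subtracted aggressively.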

  15. The future of transposable element annotation and their classification in the light of functional genomics - what we can learn from the fables of Jean de la Fontaine?

    PubMed

    Arensburger, Peter; Piégu, Benoît; Bigot, Yves

    2016-01-01

    Transposable element (TE) science has been significantly influenced by the pioneering ideas of David Finnegan near the end of the last century, as well as by the classification systems that were subsequently developed. Today, whole genome TE annotation is mostly done using tools that were developed to aid gene annotation rather than to specifically study TEs. We argue that further progress in the TE field is impeded both by current TE classification schemes and by a failure to recognize that TE biology is fundamentally different from that of multicellular organisms. Novel genome wide TE annotation methods are helping to redefine our understanding of TE sequence origins and evolution. We briefly discuss some of these new methods as well as ideas for possible alternative classification schemes. Our hope is to encourage the formation of a society to organize a larger debate on these questions and to promote the adoption of standards for annotation and an improved TE classification.

  16. A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-05-18

    Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which assigns a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrate that our method significantly outperforms four baseline systems, achieving a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the results show that using handcrafted lexico-syntactic patterns as the feature provider for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary.
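
    A minimal sketch of the lexico-syntactic-pattern idea, with hypothetical patterns standing in for the paper's handcrafted ones and a simple first-match rule standing in for the trained SVM:

```python
import re

# Hand-written lexico-syntactic patterns in the spirit of the paper's
# approach. These three patterns and the fall-through to "summary" are
# assumptions for illustration, not the authors' actual pattern set.
PATTERNS = [
    (r"^(is|are|does|do|can|could|has|have|will)\b", "yes/no"),
    (r"^(list|which|what)\b.*\b(genes|drugs|diseases|symptoms)\b", "list"),
    (r"^(what|which|who|where|when|how (many|much))\b", "factoid"),
]

def question_type(question):
    """Assign one of the four BioASQ categories to a question;
    anything unmatched falls through to 'summary'."""
    q = question.lower().strip()
    for pattern, label in PATTERNS:
        if re.search(pattern, q):
            return label
    return "summary"
```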

  17. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method for grain detection and classification is presented. As input it uses a single digital image of copper ore from the milling process, obtained with a high-quality digital camera. Grinding is an extremely energy- and cost-intensive process, so granularity evaluation should be both accurate and fast. The proposed method is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with an adaptive threshold based on the Relative Standard Deviation (RSD). Next, the detection results are refined using shape information about the detected grains derived from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method was evaluated on samples of nominal granularity and compared with other methods.
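
    A toy sketch of RSD-based adaptive thresholding inside seeded region growing; the 10% RSD limit and 4-connectivity are assumptions for illustration:

```python
import statistics

def relative_std_dev(values):
    """RSD (coefficient of variation, in %) of a set of intensities."""
    return 100.0 * statistics.pstdev(values) / statistics.fmean(values)

def grow_region(image, seed, max_rsd=10.0):
    """Toy seeded region growing on a 2-D intensity grid: a 4-connected
    neighbour joins the region while the region's RSD stays below
    max_rsd, so the threshold adapts to the region's own statistics."""
    h, w = len(image), len(image[0])
    region, frontier = {seed}, [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                candidate = [image[y][x] for y, x in region] + [image[nr][nc]]
                if relative_std_dev(candidate) < max_rsd:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
    return region
```

    On a bright grain against a dark background, growth stops at the grain boundary because adding a background pixel inflates the region's RSD past the limit.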

  18. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... Classification System (NAICS) codes and size standards. (a) The contracting officer shall determine the...

  19. An Evaluation of Feature Learning Methods for High Resolution Image Classification

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Montoya, J.; Schindler, K.

    2012-07-01

    Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
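
    One of the ad-hoc features mentioned above, the NDVI, has a simple closed form; the zero-denominator guard is an implementation choice:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for a single pixel, one of
    the ad-hoc features the study compares learned features against.
    Inputs are reflectances in the near-infrared and red bands."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0
```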

  20. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.

    PubMed

    Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar

    2017-01-01

    DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been applied to the DNA-binding protein prediction problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder, which uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features to the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets, and we show experimentally that it outperforms the state-of-the-art methods found in the literature.
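
    A sketch of monogram and bigram feature extraction from an HMM profile matrix (rows = sequence positions, columns = states); the averaging normalisation is an assumption, and the SVM stage is omitted:

```python
def hmm_features(profile):
    """Monogram and bigram features from an HMM profile matrix.

    monogram: per-column average over positions         -> K features
    bigram:   averaged products of consecutive rows     -> K*K features
    """
    L, K = len(profile), len(profile[0])
    monogram = [sum(row[j] for row in profile) / L for j in range(K)]
    bigram = [sum(profile[i][a] * profile[i + 1][b] for i in range(L - 1)) / (L - 1)
              for a in range(K) for b in range(K)]
    return monogram, bigram
```

    This collapses a variable-length profile into a fixed-length vector, which is what a kernel classifier such as an SVM requires.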

  1. Systematic Model-in-the-Loop Test of Embedded Control Systems

    NASA Astrophysics Data System (ADS)

    Krupp, Alexander; Müller, Wolfgang

    Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The purpose of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural-language formalization, there is no standardized and accepted means of formal property definition to serve as a target for verification planning. This article addresses several of these shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed, based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  2. Sentiment analysis of feature ranking methods for classification accuracy

    NASA Astrophysics Data System (ADS)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Pre-processing large volumes of text is a difficult task, as unstructured raw data must be converted into a structured format. Traditional methods of processing and weighting take much time and are less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection, which helps improve text classification accuracy. Of the three categories of feature selection methods available, the focus here is on the filter category. Five feature ranking methods are analyzed, namely: document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio.
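
    Of the ranking methods listed, chi-square has a compact closed form over a term/class contingency table; this sketch uses the standard statistic:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square score of a term for a class from a 2x2 contingency
    table: n11 = in-class docs containing the term, n10 = out-of-class
    docs containing it, n01 = in-class docs without it, n00 = neither.
    Terms are then ranked by descending score (filter-style selection)."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0
```

    A perfectly class-discriminating term scores the maximum (the corpus size); a term independent of the class scores zero.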

  3. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of `near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.

  4. Strength in Numbers: Using Big Data to Simplify Sentiment Classification.

    PubMed

    Filippas, Apostolos; Lappas, Theodoros

    2017-09-01

    Sentiment classification, the task of assigning a positive or negative label to a text segment, is a key component of mainstream applications such as reputation monitoring, sentiment summarization, and item recommendation. Even though the performance of sentiment classification methods has steadily improved over time, their ever-increasing complexity renders them comprehensible by only a shrinking minority of expert practitioners. For all others, such highly complex methods are black-box predictors that are hard to tune and even harder to justify to decision makers. Motivated by these shortcomings, we introduce BigCounter: a new algorithm for sentiment classification that substitutes algorithmic complexity with Big Data. Our algorithm combines standard data structures with statistical testing to deliver accurate and interpretable predictions. It is also parameter free and suitable for use virtually "out of the box," which makes it appealing for organizations wanting to leverage their troves of unstructured data without incurring the significant expense of creating in-house teams of data scientists. Finally, BigCounter's efficient and parallelizable design makes it applicable to very large data sets. We apply our method on such data sets toward a study on the limits of Big Data for sentiment classification. Our study finds that, after a certain point, predictive performance tends to converge and additional data have little benefit. Our algorithmic design and findings provide the foundations for future research on the data-over-computation paradigm for classification problems.

  5. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  6. 7 CFR 30.1 - Definitions of terms used in classification of leaf tobacco.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Definitions of terms used in classification of leaf... STANDARD CONTAINER REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.1 Definitions of terms used in classification of leaf tobacco. For the...

  7. 7 CFR 30.1 - Definitions of terms used in classification of leaf tobacco.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Definitions of terms used in classification of leaf... STANDARD CONTAINER REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.1 Definitions of terms used in classification of leaf tobacco. For the...

  8. 7 CFR 30.1 - Definitions of terms used in classification of leaf tobacco.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Definitions of terms used in classification of leaf... STANDARD CONTAINER REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.1 Definitions of terms used in classification of leaf tobacco. For the...

  9. 7 CFR 30.1 - Definitions of terms used in classification of leaf tobacco.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Definitions of terms used in classification of leaf... STANDARD CONTAINER REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.1 Definitions of terms used in classification of leaf tobacco. For the...

  10. 7 CFR 30.1 - Definitions of terms used in classification of leaf tobacco.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Definitions of terms used in classification of leaf... STANDARD CONTAINER REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.1 Definitions of terms used in classification of leaf tobacco. For the...

  11. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    PubMed

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
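
    The subsample comparison can be sketched with `itertools.combinations`; the simple mean-coverage pass/fail rule below is an illustrative stand-in for the GPEI decision-value rule, and the 90% threshold is an assumption:

```python
from itertools import combinations

def classify(coverages, threshold=0.90):
    """Illustrative binary lot classification: pass when the average
    cluster coverage meets the threshold."""
    return sum(coverages) / len(coverages) >= threshold

def subsample_agreement(coverages, k=6, threshold=0.90):
    """Fraction of all k-cluster subsamples whose classification
    matches the classification of the full sample, mirroring the
    paper's comparison of 6-cluster subsamples against the observed
    16-cluster classification."""
    full = classify(coverages, threshold)
    subs = list(combinations(coverages, k))
    agree = sum(classify(s, threshold) == full for s in subs)
    return agree / len(subs)
```

    With homogeneous clusters every subsample agrees with the full sample; heterogeneity between clusters is what drives the agreement below 100%.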

  12. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimulus presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little if any attention has been paid to the useful information about the spatial location of target symbols contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: lower-row, upper-row, right-column and left-column classifiers. This new feature extraction procedure and classification method were evaluated on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. Including the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single-trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which can provide additional information about the spatial location of intended symbols. This work encourages the search for information in responses to peripheral stimulation to improve the performance of emerging visual ERP-based spellers.

  13. Development of the Final Version of the Classification and Assessment of Occupational Dysfunction Scale

    PubMed Central

    Teraoka, Mutsumi; Kyougoku, Makoto

    2015-01-01

    Occupational therapy is involved in disability prevention and health enhancement through the prevention of occupational dysfunction. Although many occupational dysfunction scales exist, no standard method is available for the assessment and classification of occupational dysfunction, which may include occupational imbalance, occupational deprivation, occupational alienation, and occupational marginalization. The purpose of this study was to develop the final version of Classification and Assessment of Occupational Dysfunction (CAOD). Our study demonstrated the validity and reliability of CAOD in a group of undergraduate students. The CAOD scale includes 16 items and addresses the following 4 domains: occupational imbalance, occupational deprivation, occupational alienation, and occupational marginalization. PMID:26263375

  14. Standardizing Foot-Type Classification Using Arch Index Values

    PubMed Central

    Weil, Rich; de Boer, Emily

    2012-01-01

    ABSTRACT Purpose: The lack of a reliable classification standard for foot type makes it difficult to draw conclusions from existing research and to make clinical decisions, since different foot types may move and respond to treatment differently. The purpose of this study was to determine interrater agreement for foot-type classification based on photo-box-derived arch index values. Method: For this correlational study with two raters, a sample of 11 healthy volunteers with normal to obese body mass indices was recruited from a community weight-loss programme and a physical therapy programme. The arch index was calculated with AutoCAD software from footprint photographs obtained via a mirrored photo-box. Classification as high-arched, normal, or low-arched foot type was based on arch index values. Reliability of the arch index was determined with intra-class correlations; agreement on foot-type classification was determined using quadratic weighted kappa (κw). Results: The average arch index was 0.215 for one tester and 0.219 for the other, with an overall range of 0.017 to 0.370. Both testers classified 6 feet as low-arched, 9 as normal, and 7 as high-arched. Interrater reliability for the arch index was ICC=0.90; interrater agreement for foot-type classification was κw=0.923. Conclusions: Classification of foot type based on arch index values derived from plantar footprint photographs obtained via a mirrored photo-box showed excellent reliability in people with varying BMI. Foot-type classification may help clinicians and researchers subdivide sample populations to better differentiate mobility, gait, or treatment effects among foot types. PMID:23729964
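
    A sketch of arch-index computation and threshold-based foot-type classification; the abstract does not state the study's cutoffs, so the widely used Cavanagh-Rodgers boundaries (0.21 and 0.26) are assumed here:

```python
def arch_index(areas):
    """Arch index: middle-third contact area of the toe-free footprint
    divided by total contact area. `areas` holds the hindfoot, midfoot,
    and forefoot contact areas."""
    hind, mid, fore = areas
    return mid / (hind + mid + fore)

def foot_type(ai, low_cut=0.21, high_cut=0.26):
    """Classify foot type from an arch index value. The default cutoffs
    are the commonly cited Cavanagh-Rodgers boundaries, assumed here,
    not taken from the study."""
    if ai <= low_cut:
        return "high-arched"
    if ai <= high_cut:
        return "normal"
    return "low-arched"
```

    A flat foot has relatively more midfoot contact, hence a larger arch index; note the inverse naming (high index = low arch).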

  15. The Future of Classification in Wheelchair Sports; Can Data Science and Technological Advancement Offer an Alternative Point of View?

    PubMed

    van der Slikke, Rienk M A; Bregman, Daan J J; Berger, Monique A M; de Witte, Annemarie M H; Veeger, Dirk-Jan H E J

    2017-11-01

    Classification is a defining factor for competition in wheelchair sports, but it is a delicate and time-consuming process with often questionable validity. New inertial-sensor-based measurement methods, applied in match play and field tests, allow more precise and objective estimates of the effect of impairment on wheelchair mobility performance. We evaluated whether these measures could offer an alternative point of view for classification. Six standard wheelchair mobility performance outcomes of different classification groups were measured in match play (n=29), as well as best possible performance in a field test (n=47). Match results show a clear relationship between classification and performance level, with increased performance outcomes in each adjacent higher classification group. Three outcomes differed significantly between the low- and mid-class groups, and one between the mid- and high-class groups. In best performance (field test), a split appears between the low- and mid-class groups (5 of 6 outcomes differed significantly), but there is hardly any difference between the mid- and high-class groups. This observed split was confirmed by cluster analysis, which revealed the existence of only two performance-based clusters. The use of inertial sensor technology to obtain objective measures of wheelchair mobility performance, combined with a standardized field test, brought alternative views to evidence-based classification. The results of this approach provide arguments for a reduced number of classes in wheelchair basketball. Future use of inertial sensors in match play and field testing could enhance the evaluation of classification guidelines as well as individual athlete performance.

  16. 7 CFR 51.1438 - Size classifications for pieces.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS) United States Standards for Grades of Shelled Pecans Size Classifications § 51.1438 Size classifications for pieces. The size of pecan pieces in a lot may be specified in accordance with one of the size...

  17. Hierarchical Gene Selection and Genetic Fuzzy System for Cancer Microarray Data Classification

    PubMed Central

    Nguyen, Thanh; Khosravi, Abbas; Creighton, Douglas; Nahavandi, Saeid

    2015-01-01

    This paper introduces a novel approach to gene selection based on a substantial modification of the analytic hierarchy process (AHP). The modified AHP systematically integrates the outcomes of individual filter methods to select the most informative genes for microarray classification. Five individual ranking methods, including t-test, entropy, receiver operating characteristic (ROC) curve, Wilcoxon and signal-to-noise ratio, are employed to rank genes. These ranked genes are then used as inputs for the modified AHP. Additionally, a method that uses a fuzzy standard additive model (FSAM) for cancer classification based on the genes selected by AHP is also proposed in this paper. Traditional FSAM learning is a hybrid process comprising unsupervised structure learning and supervised parameter tuning. A genetic algorithm (GA) is incorporated between unsupervised and supervised training to optimize the number of fuzzy rules. The integration of GA enables FSAM to deal with the high-dimensional, low-sample nature of microarray data and thus enhances the efficiency of the classification. Experiments are carried out on numerous microarray datasets. Results demonstrate the performance dominance of the AHP-based gene selection over the single ranking methods. Furthermore, the combination AHP-FSAM shows great accuracy in microarray data classification compared to various competing classifiers. The proposed approach is therefore useful for medical practitioners and clinicians as a decision support system that can be implemented in real medical practice. PMID:25823003
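
    The AHP machinery the approach builds on can be illustrated with the geometric-mean approximation to the priority vector of a pairwise-comparison matrix; how the authors derive these matrices from the five rank lists is not reproduced here, and the example matrix is hypothetical:

```python
import math

def ahp_priorities(matrix):
    """Priority (weight) vector of an AHP pairwise-comparison matrix
    via the geometric-mean method, a standard approximation to the
    principal eigenvector. Entry [i][j] states how much alternative i
    is preferred over alternative j."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]
```

    For a consistent 2x2 matrix saying "A is preferred 3:1 over B", the weights come out 3/4 and 1/4.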

  18. Hierarchical gene selection and genetic fuzzy system for cancer microarray data classification.

    PubMed

    Nguyen, Thanh; Khosravi, Abbas; Creighton, Douglas; Nahavandi, Saeid

    2015-01-01

    This paper introduces a novel approach to gene selection based on a substantial modification of the analytic hierarchy process (AHP). The modified AHP systematically integrates outcomes of individual filter methods to select the most informative genes for microarray classification. Five individual ranking methods including t-test, entropy, receiver operating characteristic (ROC) curve, Wilcoxon and signal-to-noise ratio are employed to rank genes. These ranked genes are then considered as inputs for the modified AHP. Additionally, a method that uses the fuzzy standard additive model (FSAM) for cancer classification based on genes selected by AHP is also proposed in this paper. Traditional FSAM learning is a hybrid process comprising unsupervised structure learning and supervised parameter tuning. A genetic algorithm (GA) is incorporated between unsupervised and supervised training to optimize the number of fuzzy rules. The integration of the GA enables FSAM to deal with the high-dimensional, low-sample nature of microarray data and thus enhances the efficiency of the classification. Experiments are carried out on numerous microarray datasets. Results demonstrate the performance dominance of the AHP-based gene selection over the single ranking methods. Furthermore, the combination of AHP and FSAM shows great accuracy in microarray data classification compared to various competing classifiers. The proposed approach is therefore useful for medical practitioners and clinicians as a decision support system that can be implemented in real medical practice.
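The fusion idea in the two records above can be illustrated with a much simpler stand-in for the modified AHP: a plain Borda-count fusion of several filter rankings. The gene names and rankings below are invented for illustration, and Borda count is a deliberately simple substitute, not the paper's AHP procedure.

```python
def borda_fuse(rankings):
    """Fuse several filter-method gene rankings into one consensus
    order by Borda count: each ranking awards (n - position) points
    to a gene; genes are then sorted by total points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, gene in enumerate(ranking):
            scores[gene] = scores.get(gene, 0) + (n - pos)
    # Ties broken alphabetically for a deterministic output
    return sorted(scores, key=lambda g: (-scores[g], g))

# Toy rankings from three hypothetical filters (t-test, entropy, ROC)
r_ttest   = ["g1", "g2", "g3", "g4"]
r_entropy = ["g2", "g1", "g4", "g3"]
r_roc     = ["g1", "g3", "g2", "g4"]
print(borda_fuse([r_ttest, r_entropy, r_roc]))  # → ['g1', 'g2', 'g3', 'g4']
```

A rank-fusion step like this is what the modified AHP replaces with pairwise-comparison weighting, which lets the filters contribute unequally rather than with uniform Borda weights.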

  19. Automated EEG sleep staging in the term-age baby using a generative modelling approach.

    PubMed

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling used to correct for some of the inter-recording variability, by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) were compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.

  20. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    NASA Astrophysics Data System (ADS)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling used to correct for some of the inter-recording variability, by standardizing each recording’s feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) were compared, and Cohen’s kappa agreement calculated between the estimates and clinicians’ visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. 
The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
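The "personalized feature scaling" step in the two records above (standardizing each recording's feature data using that recording's own mean and standard deviation) can be sketched as follows. The data layout is an assumption for illustration, not the authors' code.

```python
from statistics import mean, pstdev

def personalized_scale(recording):
    """Z-score one recording's feature matrix (a list of epochs, each
    a list of feature values) using that recording's own per-feature
    mean and standard deviation, so features from different recordings
    become comparable before HMM/GMM training."""
    n_feat = len(recording[0])
    mus = [mean(ep[j] for ep in recording) for j in range(n_feat)]
    sds = [pstdev([ep[j] for ep in recording]) or 1.0  # guard: constant feature
           for j in range(n_feat)]
    return [[(ep[j] - mus[j]) / sds[j] for j in range(n_feat)]
            for ep in recording]

# Three 2-feature epochs; after scaling, each feature has mean 0
rec = [[5.0, 1.0], [7.0, 3.0], [9.0, 5.0]]
scaled = personalized_scale(rec)
print([round(sum(col) / 3, 6) for col in zip(*scaled)])  # → [0.0, 0.0]
```

Because each recording is scaled by its own statistics, systematic offsets between babies (electrode impedance, amplifier gain) are removed before the classifiers ever see the features.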

  1. Aircraft accidents : method of analysis

    NASA Technical Reports Server (NTRS)

    1929-01-01

    This report on a method of analysis of aircraft accidents has been prepared by a special committee on the nomenclature, subdivision, and classification of aircraft accidents organized by the National Advisory Committee for Aeronautics in response to a request dated February 18, 1928, from the Air Coordination Committee consisting of the Assistant Secretaries for Aeronautics in the Departments of War, Navy, and Commerce. The work was undertaken in recognition of the difficulty of drawing correct conclusions from efforts to analyze and compare reports of aircraft accidents prepared by different organizations using different classifications and definitions. The Air Coordination Committee's request was made "in order that practices used may henceforth conform to a standard and be universally comparable." The purpose of the special committee therefore was to prepare a basis for the classification and comparison of aircraft accidents, both civil and military. (author)

  2. [The establishment, development and application of classification approach of freshwater phytoplankton based on the functional group: a review].

    PubMed

    Yang, Wen; Zhu, Jin-Yong; Lu, Kai-Hong; Wan, Li; Mao, Xiao-Hua

    2014-06-01

    Appropriate schemes for classification of freshwater phytoplankton are prerequisites and important tools for revealing phytoplanktonic succession and studying freshwater ecosystems. An alternative approach, the functional group of freshwater phytoplankton, has been proposed and developed due to the deficiencies of Linnaean and molecular identification in ecological applications. The functional group of phytoplankton is a classification scheme based on autecology. In this study, the theoretical basis and classification criteria of the functional group (FG), morpho-functional group (MFG) and morphology-based functional group (MBFG) approaches were summarized, as well as their merits and demerits. FG was considered the optimal classification approach for aquatic ecology research and aquatic environment evaluation. The application status of FG was introduced, and the evaluation standards and problems of two FG-based water quality assessment approaches, the Q and QR index methods, were briefly discussed.

  3. International classification of reliability for implanted cochlear implant receiver stimulators.

    PubMed

    Battmer, Rolf-Dieter; Backous, Douglas D; Balkany, Thomas J; Briggs, Robert J S; Gantz, Bruce J; van Hasselt, Andrew; Kim, Chong Sun; Kubo, Takeshi; Lenarz, Thomas; Pillsbury, Harold C; O'Donoghue, Gerard M

    2010-10-01

    To design an international standard to be used when reporting reliability of the implanted components of cochlear implant systems to appropriate governmental authorities, cochlear implant (CI) centers, and for journal editors in evaluating manuscripts involving cochlear implant reliability. The International Consensus Group for Cochlear Implant Reliability Reporting was assembled to unify ongoing efforts in the United States, Europe, Asia, and Australia to create a consistent and comprehensive classification system for the implanted components of CI systems across manufacturers. All members of the consensus group are from tertiary referral cochlear implant centers. None. A clinically relevant classification scheme adapted from principles of ISO standard 5841-2:2000, originally designed for reporting reliability of cardiac pacemakers, pulse generators, or leads. Standard definitions for device failure, survival time, clinical benefit, reduced clinical benefit, and specification were generated. Time intervals for reporting back to implant centers for devices tested to be "out of specification," categorization of explanted devices, the method of cumulative survival reporting, and the content of reliability reports to be issued by manufacturers were agreed upon by all members. The methodology for calculating cumulative survival was adapted from ISO standard 5841-2:2000. The International Consensus Group on Cochlear Implant Device Reliability Reporting recommends compliance with this new standard in reporting reliability of implanted CI components by all manufacturers of CIs and the adoption of this standard as a minimal reporting guideline for editors of journals publishing cochlear implant research results.
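The cumulative survival reporting mentioned above follows ISO 5841-2:2000. As a generic illustration of how a life-table cumulative survival is computed, consider the sketch below; the half-weight treatment of withdrawals is a common actuarial convention assumed here, and the interval counts are invented, so the standard's exact procedure should be consulted for real reporting.

```python
def cumulative_survival(intervals):
    """Life-table cumulative survival. Each interval is a tuple
    (devices_at_risk, failures, withdrawals); each interval's survival
    fraction multiplies the running cumulative survival. Withdrawals
    are counted at half weight (an assumed actuarial convention)."""
    surv, out = 1.0, []
    for at_risk, failures, withdrawn in intervals:
        effective = at_risk - withdrawn / 2.0
        surv *= 1.0 - failures / effective
        out.append(surv)
    return out

# 100 devices: 2 failures in year 1; 1 failure and 10 withdrawals in year 2
print([round(s, 4) for s in cumulative_survival([(100, 2, 0), (98, 1, 10)])])
# → [0.98, 0.9695]
```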

  4. Introducing a design exigency to promote student learning through assessment: A case study.

    PubMed

    Grealish, Laurie A; Shaw, Julie M

    2018-02-01

    Assessment technologies are often used to classify student and newly qualified nurse performance as 'pass' or 'fail', with little attention to how these decisions are achieved. Examining the design exigencies of classification technologies, such as performance assessment technologies, provides opportunities to explore flexibility and change in the process of using those technologies. Evaluate an established assessment technology for nursing performance as a classification system. A case study analysis that is focused on the assessment approach and a priori design exigencies of performance assessment technology, in this case the Australian Nursing Standards Assessment Tool 2016. Nurse assessors are required to draw upon their expertise to judge performance, but that judgement is described as a source of bias, creating confusion. The definition of satisfactory performance is 'ready to enter practice'. To pass, the performance on each criterion must be at least satisfactory, indicating to the student that no further improvement is required. The Australian Nursing Standards Assessment Tool 2016 does not have a third 'other' category, which is usually found in classification systems. Introducing a 'not yet competent' category and creating a two-part, mixed methods assessment process can improve the Australian Nursing Standards Assessment Tool 2016 assessment technology. Using a standards approach in the first part, judgement is valued and can generate learning opportunities across a program. Using a measurement approach in the second part, student performance can be 'not yet competent' but still meet criteria for year level performance and a graded pass. Subjecting the Australian Nursing Standards Assessment Tool 2016 assessment technology to analysis as a classification system provides opportunities for innovation in design. 
This design innovation has the potential to support students who move between programs and clinicians who assess students from different universities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Selection of representative embankments based on rough set - fuzzy clustering method

    NASA Astrophysics Data System (ADS)

    Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song

    2018-02-01

    Selecting a representative unit embankment is a prerequisite for the comprehensive evaluation of embankment safety; on the basis of dividing the levee into units, the influencing factors and the classification of the unit embankments are drafted. Based on rough set-fuzzy clustering, the influence factors of each unit embankment are measured by quantitative and qualitative indexes. A fuzzy similarity matrix of the standard embankment is constructed, and the fuzzy equivalent matrix of the fuzzy similarity matrix is calculated by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankment is selected from each class of the embankment.
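The square method and threshold clustering described above can be sketched generically: repeatedly apply max-min composition to the similarity matrix until it stabilizes (the transitive closure, i.e. the fuzzy equivalence matrix), then group items whose equivalence value exceeds a λ-cut. The toy similarity matrix below is invented for illustration, not the paper's data.

```python
def maxmin_square(R):
    """One 'squaring' step: the max-min composition R ∘ R of a fuzzy relation."""
    n = len(R)
    return [[max(min(R[i][k], R[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def fuzzy_equivalence(R):
    """Square the fuzzy similarity matrix until it stops changing; the
    fixed point is the fuzzy equivalence matrix (transitive closure)."""
    while True:
        R2 = maxmin_square(R)
        if R2 == R:
            return R
        R = R2

def lambda_cut_clusters(R, lam):
    """Cluster indices i and j together whenever R[i][j] >= lam."""
    n = len(R)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if R[i][j] >= lam:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Toy similarity matrix for three embankment units
R = [[1.0, 0.8, 0.3],
     [0.8, 1.0, 0.4],
     [0.3, 0.4, 1.0]]
E = fuzzy_equivalence(R)
print(lambda_cut_clusters(E, 0.5))  # → [[0, 1], [2]]
```

Varying the λ threshold sweeps from every unit in its own class (λ near 1) to a single class (λ near 0), which is how the representative unit embankment's class is chosen.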

  6. The debate over diagnosis related groups.

    PubMed

    Spiegel, A D; Kavaler, F

    1985-01-01

    With the advent of the Prospective Payment System (PPS) using Diagnosis Related Groups (DRGs) as a classification method, the pros and cons of that mechanism have been sharply debated. Grouping the comments into categories related to administration/management, DRG system and quality of care, a review of relevant literature highlights the pertinent attitudes and views of professionals and organizations. Points constantly argued include data utilization, meaningful medical classifications, resource use, gaming, profit centers, patient homogeneity, severity of illness, length of stay, technology limitations and the erosion of standards.

  7. 40 CFR 432.1 - General Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS MEAT AND POULTRY PRODUCTS POINT SOURCE CATEGORY § 432.1 General Applicability. As defined more... the following industrial classification codes: Standard industrial classification 1 North American industrial classification system 2 SIC 0751 NAICS 311611. SIC 2011 NAICS 311612. SIC 2013 NAICS 311615. SIC...

  8. Introduction to the history and current status of evidence-based korean medicine: a unique integrated system of allopathic and holistic medicine.

    PubMed

    Yin, Chang Shik; Ko, Seong-Gyu

    2014-01-01

    Objectives. Korean medicine, an integrated allopathic and traditional medicine, has developed unique characteristics and has been active in contributing to evidence-based medicine. Recent developments in Korean medicine have not been as well disseminated as those in traditional Chinese medicine. This introduction to recent developments in Korean medicine will draw attention to, and facilitate, the advancement of evidence-based complementary and alternative medicine (CAM). Methods and Results. The history of and recent developments in Korean medicine as evidence-based medicine are explored through discussions on the development of a national standard classification of diseases and study reports, ranging from basic research to newly developed clinical therapies. A national standard classification of diseases has been developed and revised serially into an integrated classification of Western allopathic and traditional holistic medicine disease entities. Standard disease classifications offer a starting point for the reliable gathering of evidence and provide a representative example of the unique status of evidence-based Korean medicine as an integration of Western allopathic medicine and traditional holistic medicine. Conclusions. Recent developments in evidence-based Korean medicine show a unique development in evidence-based medicine, adopting both Western allopathic and holistic traditional medicine. It is expected that Korean medicine will continue to be an important contributor to evidence-based medicine, encompassing conventional and complementary approaches.

  9. Nodule Classification on Low-Dose Unenhanced CT and Standard-Dose Enhanced CT: Inter-Protocol Agreement and Analysis of Interchangeability.

    PubMed

    Lee, Kyung Hee; Lee, Kyung Won; Park, Ji Hoon; Han, Kyunghwa; Kim, Jihang; Lee, Sang Min; Park, Chang Min

    2018-01-01

    To measure inter-protocol agreement and analyze interchangeability on nodule classification between low-dose unenhanced CT and standard-dose enhanced CT. From nodule libraries containing both low-dose unenhanced and standard-dose enhanced CT, 80 solid and 80 subsolid (40 part-solid, 40 non-solid) nodules of 135 patients were selected. Five thoracic radiologists categorized each nodule into solid, part-solid or non-solid. Inter-protocol agreement between low-dose unenhanced and standard-dose enhanced images was measured by pooling κ values for classification into two (solid, subsolid) and three (solid, part-solid, non-solid) categories. Interchangeability between low-dose unenhanced and standard-dose enhanced CT for the classification into two categories was assessed using a pre-defined equivalence limit of 8 percent. Inter-protocol agreement for the classification into two categories {κ, 0.96 (95% confidence interval [CI], 0.94-0.98)} and that into three categories (κ, 0.88 [95% CI, 0.85-0.92]) was considerably high. The probability of agreement between readers with standard-dose enhanced CT was 95.6% (95% CI, 94.5-96.6%), and that between low-dose unenhanced and standard-dose enhanced CT was 95.4% (95% CI, 94.7-96.0%). The difference between the two proportions was 0.25% (95% CI, -0.85-1.5%), wherein the upper bound CI was markedly below 8 percent. Inter-protocol agreement for nodule classification was considerably high. Low-dose unenhanced CT can be used interchangeably with standard-dose enhanced CT for nodule classification.
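The inter-protocol agreement statistic used above, Cohen's kappa, corrects raw agreement for the agreement expected by chance. A minimal sketch follows; the labels are toy data, and the study's pooling of κ across five readers is not reproduced here.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two sets of
    categorical readings (e.g. nodule categories on two CT protocols)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy two-category readings on five nodules
a = ["solid", "solid", "subsolid", "subsolid", "solid"]
b = ["solid", "subsolid", "subsolid", "subsolid", "solid"]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

Note that κ = 1 only for perfect agreement, and κ near 0 means the raters agree no more often than chance, which is why the study's κ of 0.96 for two-category classification counts as very high agreement.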

  10. Generating Automated Text Complexity Classifications That Are Aligned with Targeted Text Complexity Standards. Research Report. ETS RR-10-28

    ERIC Educational Resources Information Center

    Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Flor, Michael

    2010-01-01

    The Common Core Standards call for students to be exposed to a much greater level of text complexity than has been the norm in schools for the past 40 years. Textbook publishers, teachers, and assessment developers are being asked to refocus materials and methods to ensure that students are challenged to read texts at steadily increasing…

  11. How a national vegetation classification can help ecological research and management

    USGS Publications Warehouse

    Franklin, Scott; Comer, Patrick; Evens, Julie; Ezcurra, Exequiel; Faber-Langendoen, Don; Franklin, Janet; Jennings, Michael; Josse, Carmen; Lea, Chris; Loucks, Orie; Muldavin, Esteban; Peet, Robert K.; Ponomarenko, Serguei; Roberts, David G.; Solomeshch, Ayzik; Keeler-Wolf, Todd; Van Kley, James; Weakley, Alan; McKerrow, Alexa; Burke, Marianne; Spurrier, Carol

    2015-01-01

    The elegance of classification lies in its ability to compile and systematize various terminological conventions and masses of information that are unattainable during typical research projects. Imagine a discipline without standards for collection, analysis, and interpretation; unfortunately, that describes much of 20th-century vegetation ecology. With differing methods, how do we assess community dynamics over decades, much less centuries? How do we compare plant communities from different areas? The need for a widely applied vegetation classification has long been clear. Now imagine a multi-decade effort to assimilate hundreds of disparate vegetation classifications into one common classification for the US. In this letter, we introduce the US National Vegetation Classification (USNVC; www.usnvc.org) as a powerful tool for research and conservation, analogous to the argument made by Schimel and Chadwick (2013) for soils. The USNVC provides a national framework to classify and describe vegetation; here we describe the USNVC and offer brief examples of its efficacy.

  12. The application of SRF vs. RDF classification and specifications to the material flows of two mechanical-biological treatment plants of Rome: Comparison and implications.

    PubMed

    Di Lonardo, Maria Chiara; Franzese, Maurizio; Costa, Giulia; Gavasci, Renato; Lombardi, Francesco

    2016-01-01

    This work assessed the quality, in terms of solid recovered fuel (SRF) definitions, of the dry light flow (until now indicated as refuse derived fuel, RDF), heavy rejects and stabilisation rejects produced by two mechanical biological treatment plants of Rome (Italy). SRF classification and specifications were evaluated first on the basis of RDF historical characterisation methods and data, and then applying the sampling and analytical methods laid down by the recently issued SRF standards. The results showed that the dry light flow presented a worse SRF class in terms of net calorific value applying the new methods compared to that obtained from RDF historical data (4 instead of 3). This led to non-compliance with end-of-waste criteria established by Italian legislation for SRF use as co-fuel in cement kilns and power plants. Furthermore, the metal contents of the dry light flow obtained applying SRF current methods proved to be considerably higher (although still meeting SRF specifications) compared to those resulting from historical data retrieved with RDF standard methods. These differences were not related to a decrease in the quality of the dry light flow produced in the mechanical-biological treatment plants but rather to the different sampling procedures set by the former RDF and current SRF standards. In particular, the shredding of the sample before quartering established by the latter methods ensures that also the finest waste fractions, characterised by higher moisture and metal contents, are included in the sample to be analysed, therefore affecting the composition and net calorific value of the waste. As for the reject flows, on the basis of their SRF classification and specification parameters, it was found that combined with the dry light flow they may present similar if not the same class codes as the latter alone, thus indicating that these material flows could also be treated in combustion plants instead of landfilled. 
In conclusion, the introduction of SRF definitions, classification and specification procedures, while not necessarily leading to an upgrade of the waste as co-fuel in cement kilns and power plants, may anyhow provide new possibilities for energy recovery from waste by increasing the types of mechanically treated waste flows that may be thermally treated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Radiation-Tolerant DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Skutt, Glenn; Sable, Dan; Leslie, Leonard; Graham, Shawn

    2012-01-01

    A document discusses power converters suitable for space use that meet the DSCC MIL-PRF-38534 Appendix G radiation hardness level P classification. A method for qualifying commercially produced electronic parts for DC-DC converters per the Defense Supply Center Columbus (DSCC) radiation hardened assurance requirements was developed. Development and compliance testing of standard hybrid converters suitable for space use were completed for missions with total dose radiation requirements of up to 30 kRad. This innovation provides the same overall performance as standard hybrid converters, but includes assurance of radiation- tolerant design through components and design compliance testing. This availability of design-certified radiation-tolerant converters can significantly reduce total cost and delivery time for power converters for space applications that fit the appropriate DSCC classification (30 kRad).

  14. Recognition and defect detection of dot-matrix text via variation-model based learning

    NASA Astrophysics Data System (ADS)

    Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi

    2017-03-01

    An algorithm for recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text involve several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex background texture, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs the detected locations of corner points and classification scores. The results of an evaluation experiment using 250 images show that recall and precision of extraction are 78.60% and 76.03%, respectively. Recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects of dot-matrix text is also important in production settings, to avoid releasing non-conforming products. We also propose a detection method for printing defects of dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defect. The detection accuracy of the proposed method is 96.68%.
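The extraction figures quoted above follow the usual precision/recall definitions. As a reminder of the arithmetic (the counts below are invented, not the paper's):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# e.g. 190 correct extractions, 60 false alarms, 52 missed characters
p, r = precision_recall(190, 60, 52)
print(f"precision={p:.2%} recall={r:.2%}")  # → precision=76.00% recall=78.51%
```

Precision penalizes spurious extractions; recall penalizes missed characters. Reporting both, as the paper does, guards against a method that optimizes one at the expense of the other.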

  15. Mapping and improving frequency, accuracy, and interpretation of land cover change: Classifying coastal Louisiana with 1990, 1993, 1996, and 1999 Landsat Thematic Mapper image data

    USGS Publications Warehouse

    Nelson, G.; Ramsey, Elijah W.; Rangoonwala, A.

    2005-01-01

    Landsat Thematic Mapper images and collateral data sources were used to classify the land cover of the Mermentau River Basin within the chenier coastal plain and the adjacent uplands of Louisiana, USA. Landcover classes followed those of the National Oceanic and Atmospheric Administration's Coastal Change Analysis Program; however, classification methods needed to be developed to meet these national standards. Our first classification was limited to the Mermentau River Basin (MRB) in south-central Louisiana and the years 1990, 1993, and 1996. To overcome problems due to class spectral inseparability, spatial and spectral continuums, mixed landcovers, and abnormal transitions, we separated the coastal area into regions of commonality and applied masks to specific land mixtures. Over the three years and 14 landcover classes (aggregating the cultivated land and grassland, and water and floating vegetation classes), overall accuracies ranged from 82% to 90%. To enhance landcover change interpretation, three indicators were introduced: Location Stability, Residence Stability, and Turnover. Implementing methods substantiated in the multiple-date MRB classification, we spatially extended the classification to the entire Louisiana coast and temporally extended the original 1990, 1993, and 1996 classifications to 1999 (Figure 1). We also advanced the operational functionality of the classification and increased the credibility of change detection results. Increased operational functionality that resulted in diminished user input was for the most part gained by implementing a classification logic based on forbidden transitions. The logic detected and corrected misclassifications and mostly alleviated the necessity of subregion separation prior to classification. The new methods provided an improved ability for more timely detection of and response to landcover impacts. © 2005 IEEE.

  16. Multistrategy Self-Organizing Map Learning for Classification Problems

    PubMed Central

    Hasan, S.; Shamsuddin, S. M.

    2011-01-01

    Multistrategy learning of Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capabilities in handling complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence time and always being trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons with the existing SOM network and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors compared to the other methods, as well as a convincing significance test. PMID:21876686

  17. Perspectives on current tumor-node-metastasis (TNM) staging of cancers of the colon and rectum.

    PubMed

    Hu, Huankai; Krasinskas, Alyssa; Willis, Joseph

    2011-08-01

    Improvements in classifications of cancers, based on the discovery and validation of important histopathological parameters and new molecular markers, continue unabated. Though still not perfect, recent updates of classification schemes in gastrointestinal oncology by the American Joint Commission on Cancer (tumor-node-metastasis [TNM] staging) and the World Health Organization further stratify patients, guide optimization of treatment strategies, and better predict patient outcomes. These updates recognize the heterogeneity of patient populations with significant subgrouping of each tumor stage and use of tumor deposits to significantly "up-stage" some cancers; changes to staging parameters for subsets of IIIB and IIIC cancers; and the introduction of several new subtypes of colon carcinomas. By the nature of the process, recent discoveries that are important to improving even routine standards of patient care, especially new advances in molecular medicine, are not incorporated into these systems. Nonetheless, these classifications significantly advance clinical standards and are welcome enhancements to our current methods of cancer reporting. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. A Systematic Review of the Robson Classification for Caesarean Section: What Works, Doesn't Work and How to Improve It

    PubMed Central

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A. Metin; Torloni, Maria Regina

    2014-01-01

    Background Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized, internationally accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. The Robson 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular in recent years in many countries. We conducted a systematic review to synthesize the experience of users on the implementation of this classification and proposed adaptations. Methods Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. Results 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of the Robson classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and lack of definition or consensus on core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions of each of the 10 groups have been proposed. Group 5 (women with previous CS) received the largest number of suggestions. Conclusions The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could be useful to help facilities and countries as they work towards its implementation. PMID:24892928
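Because the classification uses only the six obstetrical parameters listed above, the 10-group assignment can be sketched as a simple rule cascade. The parameter encodings below are assumptions for illustration; Robson's published definitions (and the WHO implementation manual) are authoritative.

```python
def robson_group(parity, previous_cs, gest_weeks, onset, presentation, n_fetuses):
    """Assign one of Robson's 10 groups from the six obstetric
    parameters. onset: 'spontaneous', 'induced' or 'cs_before_labour';
    presentation: 'cephalic', 'breech' or 'transverse_oblique'."""
    if n_fetuses > 1:
        return 8                        # all multiple pregnancies
    if presentation == 'transverse_oblique':
        return 9                        # all abnormal lies
    if presentation == 'breech':
        return 6 if parity == 0 else 7  # nulliparous vs multiparous breech
    if gest_weeks < 37:
        return 10                       # preterm single cephalic
    if previous_cs:
        return 5                        # term single cephalic, previous CS
    if parity == 0:
        return 1 if onset == 'spontaneous' else 2
    return 3 if onset == 'spontaneous' else 4

print(robson_group(0, False, 39, 'spontaneous', 'cephalic', 1))       # → 1
print(robson_group(2, True, 40, 'cs_before_labour', 'cephalic', 1))   # → 5
```

The order of the rules matters: multiples, abnormal lies and breeches are classified before gestational age and parity are considered, which is what makes the ten groups mutually exclusive and totally inclusive.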

  19. 7 CFR 27.91 - Advance deposit may be required.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and...

  20. 7 CFR 27.83 - No fees for certain certificates.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and...

  1. Automatic adventitious respiratory sound analysis: A systematic review.

    PubMed

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 
    55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of the surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.

  2. Automatic adventitious respiratory sound analysis: A systematic review

    PubMed Central

    Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 
    55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of the surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases. PMID:28552969

  3. Classifying diseases and remedies in ethnomedicine and ethnopharmacology.

    PubMed

    Staub, Peter O; Geck, Matthias S; Weckerle, Caroline S; Casu, Laura; Leonti, Marco

    2015-11-04

    Ethnopharmacology focuses on understanding the local and indigenous use of medicines, and therefore an emic approach is inevitable. Often, however, standard biomedical disease classifications are used to describe and analyse local diseases and remedies. Standard classifications may be a valid tool for cross-cultural comparisons and bioprospecting purposes but are not suitable for understanding the local perception of disease and use of remedies. Different standard disease classification systems exist, but their suitability for cross-cultural comparisons of ethnomedical data has never been assessed. Depending on the research focus, (I) ethnomedical, (II) cross-cultural, and (III) bioprospecting, we provide suggestions for the use of specific classification systems. We analyse three different standard biomedical classification systems (the International Classification of Diseases (ICD); the Economic Botany Data Collection Standard (EBDCS); and the International Classification of Primary Care (ICPC)), and discuss their value for categorizing diseases of ethnomedical systems and their suitability for cross-cultural research in ethnopharmacology. Moreover, based on the biomedical uses of all approved plant-derived biomedical drugs, we propose a biomedical therapy-based classification system as a guide for the discovery of drugs from ethnopharmacological sources. Widely used standards, such as the International Classification of Diseases (ICD) by the WHO and the Economic Botany Data Collection Standard (EBDCS), are either technically challenging because their categorisation is based on clinical examinations, which are usually not possible during field research (ICD), or lack clear biomedical criteria, combining disorders and medical effects in an imprecise and confusing way (EBDCS).
    The International Classification of Primary Care (ICPC), also accepted by the WHO, has more in common with ethnomedical reality than the ICD or the EBDCS, as its categories are designed according to patients' perceptions and are less influenced by clinical medicine. Since diagnostic tools are not required, medical ethnobotanists and ethnopharmacologists can easily classify reported symptoms and complaints with the ICPC into one of the "chapters" based on 17 body systems plus psychological and social problems. The biomedical uses of plant-derived drugs are likewise classifiable into 17 broad organ- and therapy-based use-categories, which can easily be divided into more specific subcategories. Depending on the research focus (I-III) we propose the following classification systems: I. Ethnomedicine: Ethnomedicine is culture-bound and local classifications have to be understood from an emic perspective. Consequently, the application of prefabricated, "one-size-fits-all" biomedical classification schemes is of limited value. II. Cross-cultural analysis: The ICPC is a suitable standard that can be applied but modified as required. III. Bioprospecting: We suggest a biomedical therapy-driven classification system with currently 17 use-categories based on the biomedical uses of all approved plant-derived natural product drugs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Classification of neocortical interneurons using affinity propagation.

    PubMed

    Santana, Roberto; McGarry, Laura M; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2013-01-01

    In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. Neuronal classification remains a difficult problem because it is unclear how to designate a neuronal cell class and which characteristics best define one. Recently, unsupervised classifications using cluster analysis based on morphological, physiological, or molecular characteristics have provided quantitative and unbiased identification of distinct neuronal subtypes, when applied to selected datasets. However, better and more robust classification methods are needed for increasingly complex and larger datasets. Here, we explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example, or exemplar, for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. Affinity propagation outperformed Ward's method, a current standard clustering approach, in classifying the neurons into 4 subtypes. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.

  5. Automatic document classification of biological literature

    PubMed Central

    Chen, David; Müller, Hans-Michael; Sternberg, Paul W

    2006-01-01

    Background Document classification is a widespread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature, which marks up full text according to a shallow ontology that includes terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. The clustering engine performed better on a standard test set (Reuters-21578) than previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents. We have demonstrated this by presenting a web interface that allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. PMID:16893465
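    The first step of the pipeline, an SVM trained on document text, can be sketched with scikit-learn; the corpus below is a toy stand-in (the actual system used the Textpresso-marked-up C. elegans literature and a separate phrase-based clustering step).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus for the SVM classification step.
docs = [
    "mutant worm phenotype gene expression",
    "gene knockout larval development assay",
    "spam offer click here free prize",
    "free money winner click now",
]
labels = ["biology", "biology", "junk", "junk"]

# TF-IDF term weights feeding a linear support vector machine.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)

print(clf.predict(["worm gene assay"])[0])   # -> "biology"
```

    In the published system this classifier is only the first pass; its positive documents are then grouped by the phrase-based clustering step that also generates the human-readable cluster labels.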

  6. [Research on fast classification based on LIBS technology and principle component analyses].

    PubMed

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were performed on thirteen different standard samples of aluminum alloy belonging to 4 different types, and the results suggested that the LIBS-PCA method can be used for fast classification of aluminum alloys. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components contributing the most variance were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. It was found that the spectral sample points clearly converge according to the type of aluminum alloy they belong to. This result confirmed the choice of the three principal components and established a preliminary zoning of aluminum alloy types. To verify its accuracy, 20 different aluminum alloy samples were used in the same experiments to test the type zoning. The spectral sample points all fell in the area corresponding to their aluminum alloy type, which confirmed the correctness of the type zoning established earlier from the standard samples. On this basis, the type of an unknown aluminum alloy can be identified. All the experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy is more than 97.14%, and that it can classify the different types effectively. Compared to commonly used chemical methods, laser-induced breakdown spectroscopy can perform fast, in situ detection of a sample with little sample preparation; therefore, combining LIBS and PCA in areas such as quality testing and on-line industrial control can save a great deal of time and cost and greatly improve the efficiency of detection.
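    The LIBS-PCA workflow (spectra in, three principal-component scores out, grouping by alloy type) can be sketched with synthetic data; the spectra below are an illustrative stand-in, not real LIBS measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for LIBS spectra of two alloy types: a shared
# baseline plus a type-specific "emission line" at a different channel.
n_channels = 500
baseline = rng.random(n_channels)
line_a = np.zeros(n_channels); line_a[100] = 1.0   # line of type A
line_b = np.zeros(n_channels); line_b[300] = 1.0   # line of type B

spectra = np.vstack([
    baseline + 2.0 * line_a + rng.normal(0, 0.05, (20, n_channels)),
    baseline + 2.0 * line_b + rng.normal(0, 0.05, (20, n_channels)),
])

# Keep the three components explaining the most variance; plotting the
# scores in 3-D shows samples of the same type converging into one zone.
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)

print(scores.shape)   # (40, 3)
# The first PC already separates the two alloy types:
print(abs(scores[:20, 0].mean() - scores[20:, 0].mean()))
```

    An unknown sample would then be projected with the same `pca.transform` and assigned to whichever zone its score falls into.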

  7. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. For this task, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
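    A single level of the 2-D wavelet transform and simple subband statistics are enough to illustrate the kind of texture features involved. The sketch below implements a one-level Haar transform in plain NumPy; a library such as PyWavelets would normally be used, and the input image here is a random stand-in for the gray-level gunshot representation.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def texture_features(img):
    """Mean absolute value and energy of each detail subband."""
    _, lh, hl, hh = haar2d(img)
    return [f(b) for b in (lh, hl, hh)
            for f in (lambda x: np.abs(x).mean(),
                      lambda x: (x ** 2).mean())]

# Random stand-in for a gray-level image built from a gunshot signal.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)
print(len(texture_features(img)))   # 6 features, 2 per detail subband
```

    Such subband statistics, computed per level and orientation, are the texture descriptors that the classifier would compare across calibers.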

  8. Transfer Learning of Classification Rules for Biomarker Discovery and Verification from Molecular Profiling Studies

    PubMed Central

    Ganchev, Philip; Malehorn, David; Bigbee, William L.; Gopalakrishnan, Vanathi

    2013-01-01

    We present a novel framework for integrative biomarker discovery from related but separate data sets created in biomarker profiling studies. The framework takes prior knowledge in the form of interpretable, modular rules, and uses them during the learning of rules on a new data set. The framework consists of two methods of transfer of knowledge from source to target data: transfer of whole rules and transfer of rule structures. We evaluated the methods on three pairs of data sets: one genomic and two proteomic. We used standard measures of classification performance and three novel measures of amount of transfer. Preliminary evaluation shows that whole-rule transfer improves classification performance over using the target data alone, especially when there is more source data than target data. It also improves performance over using the union of the data sets. PMID:21571094

  9. Characterisation of Feature Points in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Calvo, D.; Ortega, M.; Penedo, M. G.; Rouco, J.

    The retinal vessel tree adds decisive knowledge in the diagnosis of numerous ophthalmologic pathologies such as hypertension or diabetes. One of the problems in the analysis of the retinal vessel tree is the lack of information about vessel depth, as image acquisition usually yields a 2D image. This creates a scenario where two different vessels coinciding at a point could be misinterpreted as a single vessel forking into a bifurcation. That is why, for tracking and labelling the retinal vascular tree, bifurcations and crossovers of vessels are considered feature points. In this work a novel method for detecting and classifying these retinal vessel tree feature points is introduced. The method applies image techniques such as filtering and thinning to obtain a structure adequate for detecting the points, and classifies each point by studying its local environment. The methodology is tested using a standard database and the results show high classification capabilities.
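    One common way to detect and classify such feature points on a thinned (one-pixel-wide) vessel map is the crossing-number test: count the 0-to-1 transitions while walking once around a skeleton pixel. The sketch below is illustrative and not the paper's exact method.

```python
import numpy as np

# Circular order of the 8 neighbours around a pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(sk, r, c):
    """Number of 0->1 transitions walking once around pixel (r, c)."""
    vals = [int(sk[r + dr, c + dc]) for dr, dc in OFFSETS]
    return sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2

def feature_points(sk):
    """On a 1-px skeleton: crossing number 3 marks a bifurcation
    candidate, 4 or more a crossover (vessel crossing) candidate."""
    bifurcations, crossovers = [], []
    for r in range(1, sk.shape[0] - 1):
        for c in range(1, sk.shape[1] - 1):
            if sk[r, c]:
                cn = crossing_number(sk, r, c)
                if cn == 3:
                    bifurcations.append((r, c))
                elif cn >= 4:
                    crossovers.append((r, c))
    return bifurcations, crossovers

# Tiny synthetic skeleton: a vertical vessel with a horizontal branch.
sk = np.zeros((7, 7), dtype=bool)
sk[1:6, 3] = True      # vertical vessel
sk[3, 4:6] = True      # branch leaving at (3, 3)
print(feature_points(sk))   # ([(3, 3)], [])
```

    Unlike naive neighbour counting, the crossing number is not fooled by diagonally adjacent skeleton pixels near a junction, which is why it is the usual first pass before analysing each candidate's environment.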

  10. Nutritional status in sick children and adolescents is not accurately reflected by BMI-SDS.

    PubMed

    Fusch, Gerhard; Raja, Preeya; Dung, Nguyen Quang; Karaolis-Danckert, Nadina; Barr, Ronald; Fusch, Christoph

    2013-01-01

    Nutritional status provides helpful information about disease severity and treatment effectiveness. Body mass index standard deviation scores (BMI-SDS) provide an approximation of body composition and thus are frequently used to classify the nutritional status of sick children and adolescents. However, the accuracy of estimating body composition in this population using BMI-SDS has not been assessed. Thus, this study aims to evaluate the accuracy of nutritional status classification in sick infants and adolescents using BMI-SDS, by comparison with classification using percentage body fat (%BF) reference charts. BMI-SDS was calculated from anthropometric measurements, and %BF was measured using dual-energy X-ray absorptiometry (DXA), for 393 sick children and adolescents (5 months-18 years). Subjects were classified by nutritional status (underweight, normal weight, overweight, and obese) using 2 methods: (1) BMI-SDS, based on age- and gender-specific percentiles, and (2) %BF reference charts (the standard). Linear regression and a correlation analysis were conducted to compare agreement between the two classification methods. %BF reference value comparisons were also made between 3 independent sources based on German, Canadian, and American study populations. Nutritional status classification by BMI-SDS and %BF agreed moderately (r² = 0.75 and 0.76 in boys and girls, respectively). The misclassification rate for nutritional status in sick children and adolescents using BMI-SDS was 27% when using the German %BF references; similar rates were observed when using the Canadian and American %BF references (24% and 23%, respectively). Using BMI-SDS to determine nutritional status in a sick population is not an appropriate clinical tool for identifying individual underweight or overweight children or adolescents. However, BMI-SDS may be appropriate for longitudinal measurements or for screening purposes in large field studies. 
    When accurate nutritional status classification of a sick patient is needed for clinical purposes, it should be assessed using methods that directly measure %BF, such as DXA.
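    BMI-SDS itself is typically computed with the LMS method from age- and sex-specific reference tables and then thresholded into the four categories. The sketch below uses made-up reference values and common percentile cut-offs for illustration only; actual thresholds differ between national references.

```python
import math

def bmi_sds(bmi, L, M, S):
    """LMS z-score. L (skewness), M (median) and S (coefficient of
    variation) come from age- and sex-specific reference tables."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

def classify(sds):
    """Common percentile cut-offs (illustrative)."""
    if sds < -1.88:        # below ~3rd percentile
        return "underweight"
    if sds <= 1.28:        # up to ~90th percentile
        return "normal weight"
    if sds <= 1.88:        # up to ~97th percentile
        return "overweight"
    return "obese"

# Illustrative (made-up) reference values for one age/sex stratum.
L, M, S = -1.6, 16.0, 0.11
print(classify(bmi_sds(16.0, L, M, S)))   # median BMI -> "normal weight"
```

    The study's point is that a child can land in the "normal weight" band of this scheme while DXA-measured %BF places them in a different category, since BMI cannot distinguish fat from lean mass.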

  11. 7 CFR 27.80 - Fees; classification, Micronaire, and supervision.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....80 Section 27.80 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of...

  12. 7 CFR 28.8 - Classification of cotton; determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification of cotton; determination. 28.8 Section... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Administrative and General § 28.8 Classification of cotton; determination. For the purposes of...

  13. 7 CFR 28.8 - Classification of cotton; determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification of cotton; determination. 28.8 Section... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Administrative and General § 28.8 Classification of cotton; determination. For the purposes of...

  14. 7 CFR 28.8 - Classification of cotton; determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification of cotton; determination. 28.8 Section... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Administrative and General § 28.8 Classification of cotton; determination. For the purposes of...

  15. 7 CFR 28.8 - Classification of cotton; determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification of cotton; determination. 28.8 Section... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Administrative and General § 28.8 Classification of cotton; determination. For the purposes of...

  16. 7 CFR 28.8 - Classification of cotton; determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification of cotton; determination. 28.8 Section... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Regulations Under the United States Cotton Standards Act Administrative and General § 28.8 Classification of cotton; determination. For the purposes of...

  17. Effects of eye artifact removal methods on single trial P300 detection, a comparative study.

    PubMed

    Ghaderi, Foad; Kim, Su Kyoung; Kirchner, Elsa Andrea

    2014-01-15

    Electroencephalographic signals are commonly contaminated by eye artifacts, even if recorded under controlled conditions. The objective of this work was to quantitatively compare standard artifact removal methods (regression, filtered regression, Infomax, and second order blind identification (SOBI)) and two artifact identification approaches for independent component analysis (ICA) methods, i.e. ADJUST and correlation. To this end, eye artifacts were removed and the cleaned datasets were used for single trial classification of P300 (a type of event related potential elicited using the oddball paradigm). Statistical analysis of the results confirms that the combination of Infomax and ADJUST provides relatively better performance (0.6% improvement on average across all subjects), while the combination of SOBI and correlation performs the worst. Low-pass filtering the data at lower cutoffs (here 4 Hz) can also improve the classification accuracy. Without requiring any artifact reference channel, the combination of Infomax and ADJUST improves the classification performance more than the other methods for both examined filtering cutoffs, i.e., 4 Hz and 25 Hz. Copyright © 2013 Elsevier B.V. All rights reserved.
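    The ICA-based branch of this comparison can be illustrated end to end: mix a clean source with a blink-like artifact, unmix with ICA, zero the artifact component, and project back. The sketch below uses scikit-learn's FastICA on synthetic signals; the study itself used Infomax and SOBI on real EEG, with ADJUST or correlation to identify the artifact components.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 2, 1000)

# Two latent sources: a "neural" oscillation and a blink-like transient.
neural = np.sin(2 * np.pi * 10 * t)
blink = (np.abs(t - 1.0) < 0.05).astype(float) * 5.0
S = np.c_[neural, blink]

# Mix into two "channels", a stand-in for EEG electrodes.
A = np.array([[1.0, 0.8],
              [0.7, 1.0]])
X = S @ A.T

# Unmix, zero the component that looks blink-like (very peaky),
# then project back to channel space.
ica = FastICA(n_components=2, random_state=0)
comps = ica.fit_transform(X)
peakiness = [np.abs(c).max() / np.abs(c).std() for c in comps.T]
comps[:, int(np.argmax(peakiness))] = 0.0
cleaned = ica.inverse_transform(comps)

# The large blink deflection should be gone from both channels.
print(np.abs(cleaned[:, 0]).max() < np.abs(X[:, 0]).max())
```

    The peakiness heuristic stands in for ADJUST-style automatic component selection; no artifact reference channel is needed, which is the property the paper highlights for Infomax plus ADJUST.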

  18. Urban field classification by "local climate zones" in a medium-sized Central European city: the case of Olomouc (Czech Republic)

    NASA Astrophysics Data System (ADS)

    Lehnert, Michal; Geletič, Jan; Husák, Jan; Vysoudil, Miroslav

    2015-11-01

    The stations of the Metropolitan Station Network in Olomouc (Czech Republic) were assigned to local climate zones, and the temperature characteristics of the stations were compared. The local climate zone classification represents an up-to-date concept for unifying the characterization of the neighborhoods of climate research sites. This study is one of the first to classify existing stations within local climate zones. Using a combination of GIS-based analyses and field research, the values of geometric and surface cover properties were calculated, and the stations were subsequently classified into local climate zones. It turned out that the local climate zone classification can be used efficiently for representative documentation of the neighborhood of climate stations. To achieve a full standardization of the description of a station's neighborhood, the classification procedures, including the methods used for processing spatial data and for indicating specific local characteristics, must also be standardized. Although the main patterns of temperature differences between the stations in compact development, those in open development, and the stations in areas with no or sparse development were evident, the air temperature also showed considerable differences within particular zones. These differences were largely caused by the varied geometric layout of development and by the unstandardized placement of the stations. For direct comparison of temperatures between zones, further research should preferably use those stations that have been placed so as to be as representative as possible of the zone in question.

  19. 76 FR 5375 - Submission for OMB Review; Request for Authorization of Additional Classification and Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-31

    ... for Authorization of Additional Classification and Rate, Standard Form 1444 AGENCIES: Department of... Request for Authorization of Additional Classification and Rate, Standard Form 1444. A notice published in... personal and/or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Ms. Clare...

  20. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were then classified using the RLS approach and, for comparison, the commonly used LIBSVM implementation of the support vector machine method. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification: 2% higher than the EAPs and principal component analysis (PCA) method, and 6% higher than APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIBSVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIBSVM library. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
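    RLS classification with an RBF kernel amounts to kernel ridge regression on label indicators. As an illustrative stand-in for the GURLS library, the sketch below uses scikit-learn's KernelRidge on synthetic data and predicts the argmax over one-hot targets; features, classes and parameters are invented for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_ridge import KernelRidge

# Synthetic stand-in for per-pixel feature vectors (e.g. EAP attributes).
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
Y = np.eye(3)[y]                  # one-hot encoding of class labels

# RBF-kernel regularized least squares: regress the indicators,
# then assign each sample to the class with the largest output.
model = KernelRidge(kernel='rbf', alpha=1.0, gamma=0.1).fit(X[:200], Y[:200])
pred = model.predict(X[200:]).argmax(axis=1)

print((pred == y[200:]).mean())   # held-out accuracy
```

    GURLS automates the selection of `alpha` and `gamma`, which is the convenience (at some computational cost) that the abstract attributes to it.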

  1. A comprehensive quality evaluation method by FT-NIR spectroscopy and chemometric: Fine classification and untargeted authentication against multiple frauds for Chinese Ganoderma lucidum

    NASA Astrophysics Data System (ADS)

    Fu, Haiyan; Yin, Qiaobo; Xu, Lu; Wang, Weizheng; Chen, Feng; Yang, Tianming

    2017-07-01

    The origins of a food and its authenticity against frauds are two essential aspects of food quality. In this work, a comprehensive quality evaluation method using FT-NIR spectroscopy and chemometrics was proposed to address the geographical origins and authentication of Chinese Ganoderma lucidum (GL). Classification of 25 groups of GL samples (7 common species from 15 producing areas) was performed using near-infrared spectroscopy and interval-combination One-Versus-One least squares support vector machine (IC-OVO-LS-SVM). Untargeted analysis for 4 adulterants (cheaper mushrooms) was performed by one-class partial least squares (OCPLS) modeling for each of the 7 GL species. After outlier diagnosis and a comparison of the influences of different preprocessing methods and spectral intervals on classification, IC-OVO-LS-SVM with standard normal variate (SNV) spectra obtained a total classification accuracy of 0.9317 and an average sensitivity and specificity of 0.9306 and 0.9971, respectively. With SNV or second-order derivative (D2) spectra, OCPLS could detect doping levels of 2% or more of adulterants for 5 of the 7 GL species, and of 5% or more for the other 2 GL species. This study demonstrates the feasibility of using new chemometrics and NIR spectroscopy for fine classification of GL geographical origins and species, as well as for untargeted analysis of multiple adulterants.

  2. Text Classification for Assisting Moderators in Online Health Communities

    PubMed Central

    Huh, Jina; Yetisgen-Yildiz, Meliha; Pratt, Wanda

    2013-01-01

    Objectives Patients increasingly visit online health communities to get help on managing health. The large scale of these online communities makes it impossible for the moderators to engage in all conversations; yet, some conversations need their expertise. Our work explores low-cost text classification methods for this new domain of determining whether a thread in an online health forum needs moderators’ help. Methods We employed a binary classifier on WebMD’s online diabetes community data. To train the classifier, we considered three feature types: (1) word unigrams, (2) sentiment analysis features, and (3) thread length. We applied feature selection methods based on χ2 statistics and undersampling to account for unbalanced data. We then performed a qualitative error analysis to investigate the appropriateness of the gold standard. Results Using sentiment analysis features, feature selection methods, and balanced training data increased the AUC value up to 0.75 and the F1-score up to 0.54 compared to the baseline of using word unigrams with no feature selection methods on unbalanced data (0.65 AUC and 0.40 F1-score). The error analysis uncovered additional reasons for why moderators respond to patients’ posts. Discussion We showed how feature selection methods and balanced training data can improve the overall classification performance. We present implications of weighing precision versus recall for assisting moderators of online health communities. Our error analysis uncovered social, legal, and ethical issues around addressing community members’ needs. We also note challenges in producing a gold standard, and discuss potential solutions for addressing these challenges. Conclusion Social media environments provide popular venues in which patients gain health-related information. Our work contributes to understanding scalable solutions for providing moderators’ expertise in these large-scale, social media environments. PMID:24025513
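    The χ2 feature selection and undersampling steps described in the Methods can be sketched as follows; this is an illustrative re-implementation on toy binary features, not the authors' pipeline, and the 2x2 contingency-table form assumes binary features and labels:

```python
import numpy as np

def chi2_scores(X, y):
    # Chi-squared statistic between each binary feature column and a
    # binary label, as used for unigram feature selection.
    scores = []
    y = np.asarray(y)
    for j in range(X.shape[1]):
        f = X[:, j]
        obs = np.array([[np.sum((f == a) & (y == b)) for b in (0, 1)]
                        for a in (0, 1)], dtype=float)
        row = obs.sum(1, keepdims=True)
        col = obs.sum(0, keepdims=True)
        exp = row @ col / obs.sum()          # expected counts under independence
        scores.append(np.where(exp > 0, (obs - exp) ** 2 / exp, 0.0).sum())
    return np.array(scores)

def undersample(X, y, seed=0):
    # Randomly drop majority-class rows until both classes are equal in size.
    rng = np.random.default_rng(seed)
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    small, big = sorted((idx0, idx1), key=len)
    keep = np.concatenate([small, rng.choice(big, size=len(small), replace=False)])
    return X[keep], y[keep]
```

A feature perfectly correlated with the label scores higher than an uninformative one, and undersampling returns a class-balanced subset.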

  3. Automatic classification of transiently evoked otoacoustic emissions using an artificial neural network.

    PubMed

    Buller, G; Lutman, M E

    1998-08-01

    The increasing use of transiently evoked otoacoustic emissions (TEOAE) in large neonatal hearing screening programmes makes a standardized method of response classification desirable. Until now methods have been either subjective or based on arbitrary response characteristics. This study takes an expert system approach to standardize the subjective judgements of an experienced scorer. The method that is developed comprises three stages. First, it transforms TEOAEs from waveforms in the time domain into a simplified parameter set. Second, the parameter set is classified by an artificial neural network that has been taught on a large database of TEOAE waveforms and corresponding expert scores. Third, additional fuzzy logic rules automatically detect probable artefacts in the waveforms and synchronized spontaneous emission components. In this way, the knowledge of the experienced scorer is encapsulated in the expert system software and thereafter can be accessed by non-experts. Teaching and evaluation of the neural network were based on TEOAEs from a database totalling 2190 neonatal hearing screening tests. The database was divided into learning and test groups with 820 and 1370 waveforms respectively. From each recorded waveform a set of 12 parameters was calculated, representing static and dynamic signal properties. The artificial network was taught with parameter sets from the learning group only. Reproduction of the human scorer classification by the neural net in the learning group showed a sensitivity for detecting screen fails of 99.3% (299 from 301 failed results on subjective scoring) and a specificity for detecting screen passes of 81.1% (421 of 519 pass results). To quantify the post hoc performance of the net (generalization), the test group was then presented to the network input. Sensitivity was 99.4% (474 from 477) and specificity was 87.3% (780 from 893). 
To check the efficiency of the classification method, a second learning group was selected out of the previous test group, and the previous learning group was used as the test group. Repeating learning and test procedures yielded 99.3% sensitivity and 80.7% specificity for reproduction, and 99.4% sensitivity and 86.7% specificity for generalization. In all respects, performance was better than for a previously optimized method based simply on cross-correlation between replicate non-linear waveforms. It is concluded that classification methods based on neural networks show promise for application to large neonatal screening programmes utilizing TEOAEs.
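    The sensitivity and specificity figures quoted above follow directly from the reported counts. A one-line helper reproduces them from the abstract's own numbers (the variable names are ours):

```python
def screen_metrics(tp, fn, tn, fp):
    # Sensitivity: fraction of expert "fail" results the network also fails.
    # Specificity: fraction of expert "pass" results the network also passes.
    return tp / (tp + fn), tn / (tn + fp)
```

For the learning group, 299 of 301 fails and 421 of 519 passes were reproduced, giving the reported 99.3% sensitivity and 81.1% specificity.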

  4. Ecosystem Services Linking People to Coastal Habitats ...

    EPA Pesticide Factsheets

    Background/Question/Methods: There is a growing need to incorporate and prioritize ecosystem services/condition information into land-use decision making. While there are a number of place-based studies looking at how land-use decisions affect the availability and delivery of coastal services, many of these methods require data, funding and/or expertise that may be inaccessible to many coastal communities. Using existing classification standards for beneficiaries and coastal habitats, (i.e., Final Ecosystem Goods and Services Classification System (FEGS-CS) and Coastal and Marine Ecological Classification Standard (CMECS)), a comprehensive literature review was coupled with a “weight of evidence” approach to evaluate linkages between beneficiaries and coastal habitat features most relevant to community needs. An initial search of peer-reviewed journal articles was conducted using JSTOR and ScienceDirect repositories identifying sources that provide evidence for coastal beneficiary:habitat linkages. Potential sources were further refined based on a double-blind review of titles, abstracts, and full-texts, when needed. Articles in the final list were then scored based on habitat/beneficiary specificity and data quality (e.g., indirect evidence from literature reviews was scored lower than direct evidence from case studies with valuation results). Scores were then incorporated into a weight of evidence framework summarizing the support for each benefici

  5. DNA methylation-based classification of central nervous system tumours.

    PubMed

    Capper, David; Jones, David T W; Sill, Martin; Hovestadt, Volker; Schrimpf, Daniel; Sturm, Dominik; Koelsche, Christian; Sahm, Felix; Chavez, Lukas; Reuss, David E; Kratz, Annekathrin; Wefers, Annika K; Huang, Kristin; Pajtler, Kristian W; Schweizer, Leonille; Stichel, Damian; Olar, Adriana; Engel, Nils W; Lindenberg, Kerstin; Harter, Patrick N; Braczynski, Anne K; Plate, Karl H; Dohmen, Hildegard; Garvalov, Boyan K; Coras, Roland; Hölsken, Annett; Hewer, Ekkehard; Bewerunge-Hudler, Melanie; Schick, Matthias; Fischer, Roger; Beschorner, Rudi; Schittenhelm, Jens; Staszewski, Ori; Wani, Khalida; Varlet, Pascale; Pages, Melanie; Temming, Petra; Lohmann, Dietmar; Selt, Florian; Witt, Hendrik; Milde, Till; Witt, Olaf; Aronica, Eleonora; Giangaspero, Felice; Rushing, Elisabeth; Scheurlen, Wolfram; Geisenberger, Christoph; Rodriguez, Fausto J; Becker, Albert; Preusser, Matthias; Haberler, Christine; Bjerkvig, Rolf; Cryan, Jane; Farrell, Michael; Deckert, Martina; Hench, Jürgen; Frank, Stephan; Serrano, Jonathan; Kannan, Kasthuri; Tsirigos, Aristotelis; Brück, Wolfgang; Hofer, Silvia; Brehmer, Stefanie; Seiz-Rosenhagen, Marcel; Hänggi, Daniel; Hans, Volkmar; Rozsnoki, Stephanie; Hansford, Jordan R; Kohlhof, Patricia; Kristensen, Bjarne W; Lechner, Matt; Lopes, Beatriz; Mawrin, Christian; Ketter, Ralf; Kulozik, Andreas; Khatib, Ziad; Heppner, Frank; Koch, Arend; Jouvet, Anne; Keohane, Catherine; Mühleisen, Helmut; Mueller, Wolf; Pohl, Ute; Prinz, Marco; Benner, Axel; Zapatka, Marc; Gottardo, Nicholas G; Driever, Pablo Hernáiz; Kramm, Christof M; Müller, Hermann L; Rutkowski, Stefan; von Hoff, Katja; Frühwald, Michael C; Gnekow, Astrid; Fleischhack, Gudrun; Tippelt, Stephan; Calaminus, Gabriele; Monoranu, Camelia-Maria; Perry, Arie; Jones, Chris; Jacques, Thomas S; Radlwimmer, Bernhard; Gessi, Marco; Pietsch, Torsten; Schramm, Johannes; Schackert, Gabriele; Westphal, Manfred; Reifenberger, Guido; Wesseling, Pieter; Weller, Michael; Collins, Vincent Peter; Blümcke, 
Ingmar; Bendszus, Martin; Debus, Jürgen; Huang, Annie; Jabado, Nada; Northcott, Paul A; Paulus, Werner; Gajjar, Amar; Robinson, Giles W; Taylor, Michael D; Jaunmuktane, Zane; Ryzhova, Marina; Platten, Michael; Unterberg, Andreas; Wick, Wolfgang; Karajannis, Matthias A; Mittelbronn, Michel; Acker, Till; Hartmann, Christian; Aldape, Kenneth; Schüller, Ulrich; Buslei, Rolf; Lichter, Peter; Kool, Marcel; Herold-Mende, Christel; Ellison, David W; Hasselblatt, Martin; Snuderl, Matija; Brandner, Sebastian; Korshunov, Andrey; von Deimling, Andreas; Pfister, Stefan M

    2018-03-22

    Accurate pathological diagnosis is crucial for optimal management of patients with cancer. For the approximately 100 known tumour types of the central nervous system, standardization of the diagnostic process has been shown to be particularly challenging-with substantial inter-observer variability in the histopathological diagnosis of many tumour types. Here we present a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and demonstrate its application in a routine diagnostic setting. We show that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods, resulting in a change of diagnosis in up to 12% of prospective cases. For broader accessibility, we have designed a free online classifier tool, the use of which does not require any additional onsite data processing. Our results provide a blueprint for the generation of machine-learning-based tumour classifiers across other cancer entities, with the potential to fundamentally transform tumour pathology.

  6. Differential diagnosis of pleural mesothelioma using Logic Learning Machine.

    PubMed

    Parodi, Stefano; Filiberti, Rosa; Marroni, Paola; Libener, Roberta; Ivaldi, Giovanni Paolo; Mussap, Michele; Ferrari, Enrico; Manneschi, Chiara; Montani, Erika; Muselli, Marco

    2015-01-01

    Tumour markers are standard tools for the differential diagnosis of cancer. However, the occurrence of nonspecific symptoms and different malignancies involving the same cancer site may lead to a high proportion of misclassifications. Classification accuracy can be improved by combining information from different markers using standard data mining techniques, like Decision Tree (DT), Artificial Neural Network (ANN), and k-Nearest Neighbour (KNN) classifiers. Unfortunately, each method suffers from some unavoidable limitations. DT, in general, tends to show a low classification performance, whereas ANN and KNN produce a "black-box" classification that does not provide biological information useful for clinical purposes. Logic Learning Machine (LLM) is an innovative method of supervised data analysis capable of building classifiers described by a set of intelligible rules including simple conditions in their antecedent part. It is essentially an efficient implementation of the Switching Neural Network model and reaches excellent classification accuracy while keeping the computational demand low. LLM was applied to data from a consecutive cohort of 169 patients admitted for diagnosis to two pulmonary departments in Northern Italy from 2009 to 2011. Patients included 52 malignant pleural mesotheliomas (MPM), 62 pleural metastases (MTX) from other tumours and 55 benign diseases (BD) associated with pleurisies. The concentration of three tumour markers (CEA, CYFRA 21-1 and SMRP) was measured in the pleural fluid of each patient, and a cytological examination was also carried out. The performance of LLM and that of three competing methods (DT, KNN and ANN) was assessed by leave-one-out cross-validation. LLM outperformed all the other considered methods. Global accuracy was 77.5% for LLM, 72.8% for DT, 54.4% for KNN, and 63.9% for ANN. In more detail, LLM correctly classified 79% of MPM, 66% of MTX and 89% of BD. The corresponding figures for DT were: MPM = 83%, MTX = 55% and BD = 84%; for KNN: MPM = 58%, MTX = 45%, BD = 62%; and for ANN: MPM = 71%, MTX = 47%, BD = 76%. Finally, LLM provided classification rules in very good agreement with a priori knowledge about the biological role of the considered tumour markers. LLM is a new, flexible tool potentially useful for the differential diagnosis of pleural mesothelioma.
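    Leave-one-out cross-validation, used above to compare the four classifiers, can be sketched generically: each sample is predicted by a model trained on all the others. The 1-NN stand-in classifier below is an illustrative assumption, not the LLM method itself:

```python
import numpy as np

def loo_accuracy(X, y, classify):
    # Leave-one-out cross-validation: predict each sample from a model
    # fitted on the remaining samples; return the fraction correct.
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += classify(X[mask], y[mask], X[i]) == y[i]
    return hits / len(X)

def nearest_neighbour(X_train, y_train, x):
    # Minimal 1-NN stand-in for the classifiers compared in the study.
    return y_train[np.argmin(((X_train - x) ** 2).sum(1))]
```

On a small well-separated toy set every held-out sample is classified correctly.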

  7. 77 FR 55482 - Public Workshop on Marine Technology and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-10

    ... provide a unique opportunity for classification societies, industry groups, standards development... email at [email protected] . You may also contact Lieutenant Commander Ken Hettler, Office of Design and... provides a unique opportunity for classification societies, industry groups, standards development...

  8. Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Jamaluddin; Siringoringo, Rimbun

    2017-12-01

    Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts in this method successfully improves its performance on almost all classification issues. The main drawback of FkNN is the difficulty of determining its parameters: the number of neighbors (k) and the fuzzy strength (m). Both parameters are very sensitive, and no theory or guideline prescribes proper values of k and m, which makes FkNN difficult to tune. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of k and m. MPSO is based on the Constriction Factor Method, an improvement of PSO designed to avoid local optima. The proposed model was tested on the German Credit Dataset from the UCI Machine Learning Repository, a standard benchmark widely applied to classification problems. Applying MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the proposed model yields better classification performance than the FkNN model alone: it achieves an accuracy of 81%, whereas the plain FkNN model achieves 70%. Finally, the proposed model was compared with two other classifiers, Naive Bayes and Decision Tree; it again performed best, with Naive Bayes reaching 75% accuracy and the Decision Tree model 70%.
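    The FkNN decision rule that the parameters k and m control can be sketched as follows (after Keller et al.'s fuzzy k-NN; the crisp-label weighting shown here is a simplification of the full membership scheme, and the toy data are illustrative):

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0):
    # Fuzzy k-NN: the k nearest neighbours vote with weights that decay
    # with distance; the fuzzy strength m controls how fast they decay
    # (weight ~ 1 / d^(2/(m-1))).
    d = np.sqrt(((X_train - x) ** 2).sum(1))
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    memberships = [w[y_train[idx] == c].sum() / w.sum() for c in classes]
    return classes[int(np.argmax(memberships))]
```

Because the sensitivity of the result to k and m is exactly what makes the method hard to tune by hand, these two scalars are the search space handed to MPSO.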

  9. Assessment of sexual orientation using the hemodynamic brain response to visual sexual stimuli.

    PubMed

    Ponseti, Jorge; Granert, Oliver; Jansen, Olav; Wolff, Stephan; Mehdorn, Hubertus; Bosinski, Hartmut; Siebner, Hartwig

    2009-06-01

    The assessment of sexual orientation is of importance to the diagnosis and treatment of sex offenders and paraphilic disorders. Phallometry is considered the gold standard in objectifying sexual orientation, yet this measurement has been criticized because of its intrusiveness and limited reliability. To evaluate whether the spatial response pattern to sexual stimuli as revealed by a change in blood oxygen level-dependent (BOLD) signal can be used for individual classification of sexual orientation. We used a preexisting functional MRI (fMRI) data set that had been acquired in a nonclinical sample of 12 heterosexual men and 14 homosexual men. During fMRI, participants were briefly exposed to pictures of same-sex and opposite-sex genitals. Data analysis involved four steps: (i) differences in the BOLD response to female and male sexual stimuli were calculated for each subject; (ii) these contrast images were entered into a group analysis to calculate whole-brain difference maps between homosexual and heterosexual participants; (iii) a single expression value was computed for each subject expressing its correspondence to the group result; and (iv) based on these expression values, Fisher's linear discriminant analysis and the k-nearest neighbor classification method were used to predict the sexual orientation of each subject. Sensitivity and specificity of the two classification methods in predicting individual sexual orientation. Both classification methods performed well in predicting individual sexual orientation with a mean accuracy of >85% (Fisher's linear discriminant analysis: 92% sensitivity, 85% specificity; k-nearest neighbor classification: 88% sensitivity, 92% specificity). Despite the small sample size, the functional response patterns of the brain to sexual stimuli contained sufficient information to predict individual sexual orientation with high accuracy. 
These results suggest that fMRI-based classification methods hold promise for the diagnosis of paraphilic disorders (e.g., pedophilia).

  10. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a support vector machine (SVM) optimized by particle swarm optimization (PSO) is proposed. The SVM is extended to a nonlinear, multi-class classifier, PSO is applied to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is carried out under the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, which proves that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
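    The constriction-factor PSO used for parameter optimization can be sketched generically. In the paper's setting the objective would be the cross-validated error of the multi-class SVM over its parameters (e.g. C and the kernel width); here `f` is any objective over a box, and the constants are the standard Clerc values rather than the authors' exact settings:

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=20, iters=60, seed=0):
    # Basic PSO with Clerc's constriction factor (chi ~ 0.7298 for
    # c1 = c2 = 2.05). Particles track personal bests and the global best.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    chi, c1, c2 = 0.7298, 2.05, 2.05
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)                 # keep particles in the box
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()
```

On a smooth convex objective the swarm collapses onto the minimum; in practice `f` would wrap a cross-validated SVM training run.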

  11. Classifying the Standards via Revised Bloom's Taxonomy: A Comparison of Pre-Service and In-Service Teachers

    ERIC Educational Resources Information Center

    Kocakaya, Serhat; Kotluk, Nihat

    2016-01-01

    The aim of this study is (a) to investigate the usefulness of Bloom's revised taxonomy (RBT) for classification of standards, (b) to examine the differences and similarities between pre-service teachers' and in-service teachers' classification of the same standards and (c) to determine which standards are vague and broad. The 45 standards, in the…

  12. [Study on seed quality test and quality standard of Lonicera macranthoides].

    PubMed

    Zhang, Ying; Xu, Jin; Li, Long-Yun; Cui, Guang-Lin; She, Yue-Hui

    2016-04-01

    Referring to the rules for agricultural seed testing (GB/T 3543-1995) issued by China, methods for testing the sampling, purity, thousand-seed weight, moisture, viability, relative conductivity and germination rate of Lonicera macranthoides seeds were studied. Seed quality from 38 different collection areas was measured to establish a quality classification standard by K-means clustering. The results showed that at least 7.5 g of seeds should be sampled and passed through a 20-mesh sieve for purity analysis. The 500-seed method was used to measure thousand-seed weight. Moisture was determined by drying crushed seeds at high temperature ((130±2) ℃) for 3 h. Viability was determined by staining with 0.1% TTC at 25 ℃ for 5 h in the dark. To determine relative conductivity, 1.0 g of seeds was soaked in 50 mL of ultrapure water at 25 ℃ for 12 h. Seeds stratified at 4 ℃ for 80 days were cultured on paper at 15 ℃ for germination. Seed quality from the different areas was divided into three grades, and a primary seed quality classification standard was established. Grade I and grade II seeds are recommended for use in production. Copyright© by the Chinese Pharmaceutical Association.
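    The K-means grading step can be illustrated on a single quality index with a minimal Lloyd's-algorithm sketch; the values and the choice of k = 3 grades below are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=100):
    # Lloyd's algorithm on a single quality index: alternately assign each
    # value to its nearest centre and move each centre to its cluster mean.
    values = np.asarray(values, dtype=float)
    centres = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        centres = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels, centres
```

With three well-separated groups of measurements the algorithm recovers three grades, mirroring the division into grades I-III.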

  13. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... of Small Business Status for Small Business Programs 19.303 Determining North American Industry... user, the added text is set forth as follows: 19.303 Determining North American Industry Classification...

  14. Classification of Regional Radiographic Emphysematous Patterns Using Low-Attenuation Gap Length Matrix

    NASA Astrophysics Data System (ADS)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    The standard computed-tomography (CT)-based method for measuring emphysema uses the percentage of area of low attenuation, which is called the pixel index (PI). However, the PI method is susceptible to an averaging effect, and this causes a discrepancy between what the PI method describes and what radiologists observe. Knowing that visual recognition of the different types of regional radiographic emphysematous tissues in a CT image can be fuzzy, this paper proposes a low-attenuation gap length matrix (LAGLM) based algorithm for classifying the regional radiographic lung tissues into four emphysema types, distinguishing, in particular, radiographic patterns that imply obvious or subtle bullous emphysema from those that imply diffuse emphysema or minor destruction of airway walls. A neural network is used for discrimination. The proposed LAGLM method is inspired by, but different from, former texture-based methods like the gray level run length matrix (GLRLM) and the gray level gap length matrix (GLGLM). The proposed algorithm is successfully validated by classifying 105 lung regions randomly selected from 270 images. The lung regions were hand-annotated by radiologists beforehand. The average four-class classification accuracies for the proposed algorithm/PI/GLRLM/GLGLM methods are 89.00%/82.97%/52.90%/51.36%, respectively. The p-values from the correlation analyses between the classification results of the 270 images and pulmonary function test results are generally less than 0.01. The classification results are useful for a follow-up study, especially for monitoring morphological changes with progression of pulmonary disease.
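    The pixel index (PI) baseline that LAGLM is compared against is straightforward to compute: the percentage of lung pixels below a low-attenuation threshold. A sketch, assuming a typical -950 HU emphysema threshold (the abstract does not state the exact threshold used):

```python
import numpy as np

def pixel_index(hu_region, threshold=-950):
    # Percentage of pixels in a lung region whose CT attenuation falls
    # below the low-attenuation threshold (in Hounsfield units).
    hu_region = np.asarray(hu_region)
    return 100.0 * np.count_nonzero(hu_region < threshold) / hu_region.size
```

Because the PI collapses a region to one number, two regions with very different spatial patterns (diffuse vs. bullous) can share the same PI, which is the averaging-effect limitation the LAGLM texture method is designed to overcome.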

  15. Novel Histogram Based Unsupervised Classification Technique to Determine Natural Classes From Biophysically Relevant Fit Parameters to Hyperspectral Data

    DOE PAGES

    McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...

    2017-05-23

    Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. The fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidean distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions, creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data, allowing for comparisons of the efficacy of different unsupervised data analyses. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique, with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
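    The histogram-splitting idea can be illustrated in one dimension: locate the deepest interior valley of a fit-parameter histogram and split the distribution there. This sketch is a simplification of the paper's method (which operates on several fit parameters and merges clusters afterwards), and the bin count is an illustrative choice:

```python
import numpy as np

def histogram_split(values, bins=20):
    # Split a 1-D fit-parameter distribution at the deepest interior valley
    # of its histogram -- the natural-break idea behind the clustering step.
    counts, edges = np.histogram(values, bins=bins)
    interior = np.arange(1, bins - 1)
    # A valley bin is no higher than both neighbours; pick the emptiest one.
    valleys = [i for i in interior
               if counts[i] <= counts[i - 1] and counts[i] <= counts[i + 1]]
    if not valleys:
        return None
    cut = min(valleys, key=lambda i: counts[i])
    return (edges[cut] + edges[cut + 1]) / 2.0
```

For a clearly bimodal parameter the returned threshold falls in the gap between the two modes, separating the data into physically distinct clusters.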

  16. Novel Histogram Based Unsupervised Classification Technique to Determine Natural Classes From Biophysically Relevant Fit Parameters to Hyperspectral Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra

    Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. The fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidean distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions, creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data, allowing for comparisons of the efficacy of different unsupervised data analyses. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique, with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.

  17. 7 CFR 27.69 - Classification review; notations on certificate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Section 27.69 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD... review of classification is made after the issuance of a cotton class certificate, the results of the...

  18. Retinopathy of Prematurity: Clinical Features, Classification, Natural History, Management and Outcome.

    PubMed

    Shah, Parag K; Prabhu, Vishma; Ranjan, Ratnesh; Narendran, Venkatapathy; Kalpana, Narendran

    2016-11-07

    Retinopathy of prematurity is an avoidable cause of childhood blindness. Proper understanding of the classification and treatment methods is a must in tackling this disease. A literature search with PubMed was conducted covering the period 1940-2015 with regard to retinopathy of prematurity, retrolental fibroplasia, its natural history, classification and treatment. The clinical features, screening and staging of retinopathy of prematurity according to the International Classification of Retinopathy of Prematurity (ICROP) have been included with illustrations. The standard current treatment indications, modalities and outcomes from landmark randomized controlled trials on retinopathy of prematurity have been mentioned. This review would help pediatricians to update their current knowledge on classification and treatment of retinopathy of prematurity. Screening for retinopathy of prematurity, in India, should be performed in all preterm neonates who are born <34 weeks gestation and/or <1750 grams birthweight; as well as in babies 34-36 weeks gestation or 1750-2000 grams birthweight if they have risk factors for ROP. Screening should start by one month after birth.

  19. Accurate diagnosis of thyroid follicular lesions from nuclear morphology using supervised learning.

    PubMed

    Ozolek, John A; Tosun, Akif Burak; Wang, Wei; Chen, Cheng; Kolouri, Soheil; Basu, Saurav; Huang, Hu; Rohde, Gustavo K

    2014-07-01

    Follicular lesions of the thyroid remain significant diagnostic challenges in surgical pathology and cytology. The diagnosis often requires considerable resources and ancillary tests including immunohistochemistry, molecular studies, and expert consultation. Visual analyses of nuclear morphological features, generally speaking, have not been helpful in distinguishing this group of lesions. Here we describe a method for distinguishing between follicular lesions of the thyroid based on nuclear morphology. The method utilizes an optimal transport-based linear embedding for segmented nuclei, together with an adaptation of existing classification methods. We show that the method outputs assignments (classification results) that are nearly perfectly correlated with the clinical diagnosis of several lesion types, utilizing a database of 94 patients in total. Experimental comparisons also show the new method can significantly outperform standard numerical feature-type methods in terms of agreement with the clinical diagnosis gold standard. In addition, the new method could potentially be used to derive insights into biologically meaningful nuclear morphology differences in these lesions. Our methods could be incorporated into a tool for pathologists to aid in distinguishing between follicular lesions of the thyroid. In addition, these results could potentially provide nuclear morphological correlates of biological behavior and reduce health care costs by decreasing histotechnician and pathologist time and obviating the need for ancillary testing. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. 7 CFR 28.903 - Classification of samples.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification and Market News Services § 28.903 Classification of samples. The Director, or an...

  1. 7 CFR 28.903 - Classification of samples.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification and Market News Services § 28.903 Classification of samples. The Director, or an...

  2. 7 CFR 28.903 - Classification of samples.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification and Market News Services § 28.903 Classification of samples. The Director, or an...

  3. 7 CFR 28.903 - Classification of samples.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification and Market News Services § 28.903 Classification of samples. The Director, or an...

  4. 7 CFR 28.903 - Classification of samples.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification and Market News Services § 28.903 Classification of samples. The Director, or an...

  5. Development of an Instrument for Assessing Elder Care Needs

    ERIC Educational Resources Information Center

    Åhsberg, Elizabeth; Fahlström, Gunilla; Rönnbäck, Eva; Granberg, Ann-Kristin; Almborg, Ann-Helene

    2017-01-01

    Objective: To construct a needs assessment instrument for older people using a standardized terminology (International Classification of Functioning, Disability, and Health [ICF]) and assess its psychometric properties. Method: An instrument was developed comprising questions to older people regarding their perceived care needs. The instrument's…

  6. A modified artificial immune system based pattern recognition approach -- an application to clinic diagnostics

    PubMed Central

    Zhao, Weixiang; Davis, Cristina E.

    2011-01-01

    Objective This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with the radial basis function – partial least square regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods, back-propagation neural network (BPNN) and orthogonal radial basis function network (Ortho-RBF network). Results The diagnosis results show that: (1) both AIS-kNN and AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possible improper AIS parameters. For example, based upon the AIS memory cells of “replacement threshold = 0.3”, the average misclassification ratios of two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS presented its robustness in terms of the AIS-created memory cells, showing a smaller standard deviation of the results from the multiple trials than AIS-kNN. 
For example, using the result from the first set of AIS memory cells as an example, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN) and those for study 2 are 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approaches also yielded better diagnosis results than two classical neural network approaches of BPNN and Ortho-RBF network. Conclusion In summary, this paper proposed a new machine learning method for complex systems by integrating the AIS system with RBFPLS. This new method demonstrates its satisfactory effect on classification accuracy for clinical diagnosis, and also indicates its wide potential applications to other diagnosis and detection problems. PMID:21515033

  7. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    PubMed

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies, because both crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms. However, satisfactory results are not consistently obtained due to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering and robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on genetically modified tomato (germplasm AN-113) which expresses a purple leaf color. An autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and handles variations in illumination intensities. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.

  8. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

    Anomaly detection based on one-class classification algorithms is broadly used in many applied domains like image processing (e.g. detecting whether a patient is "cancerous" or "healthy" from a mammography image), network intrusion detection, etc. The performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches to kernel selection used in two-class classification problems (e.g. cross-validation) cannot be applied directly due to the specific nature of the data, namely the absence of data from a second, abnormal class. In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
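
    A minimal sketch of the one-class setting described above, using scikit-learn's OneClassSVM with an RBF kernel; the fixed gamma and nu values here are arbitrary assumptions, and choosing them in a principled way without anomalous data is exactly the selection problem the paper addresses:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # training data: normal class only
outliers = rng.uniform(low=-6, high=6, size=(20, 2))    # unseen anomalies

# Fit on normal data only; the RBF kernel width (gamma) must be chosen
# without any labeled anomalies.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal)

pred_normal = clf.predict(normal)  # +1 = inlier, -1 = outlier
pred_out = clf.predict(outliers)
print((pred_normal == 1).mean(), (pred_out == -1).mean())
```

    Most of the training points are accepted as inliers while points far from the normal cloud are flagged; a poor kernel choice would shift both rates, which is why kernel selection matters here.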

  9. Development of a template for the classification of traditional medical knowledge in Korea.

    PubMed

    Kim, Sungha; Kim, Boyoung; Mun, Sujeong; Park, Jeong Hwan; Kim, Min-Kyeoung; Choi, Sunmi; Lee, Sanghun

    2016-02-03

    Traditional Medical Knowledge (TMK) is a form of Traditional Knowledge associated with medicine that is handed down orally or in written material. There are efforts to document TMK and build databases to conserve Traditional Medicine and facilitate future research to validate traditional use. Despite these efforts, there is no widely accepted data-file template that is specific to TMK and, at the same time, helpful for understanding and organizing it. We aimed to develop a template to classify TMK. First, we reviewed books, articles, and health-related classification systems, and used focus group discussion to establish the definition, scope, and constituents of TMK. Second, we developed an initial version of the template to classify TMK, and applied it to TMK data. Third, we revised the template based on the results of the initial template and input from experts, and applied it to the data. We developed a template for the classification of TMK. The constituents of the template were summary, properties, tools/ingredients, indication/preparation/application, and international standard classification. We applied the International Patent Classification, the International Classification of Diseases (Korea version), and the Classification of Korean Traditional Knowledge Resources to provide legal protection of TMK and facilitate academic research. The template provides standard terms for ingredients, preparation, administration route, and procedure method to assess safety and efficacy. This is the first template specialized for arranging and classifying TMK. The template would play important roles in preserving TMK and protecting intellectual property. TMK data classified with the template could be used as preliminary data to screen potential candidates for new pharmaceuticals. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  10. 13 CFR 121.101 - What are SBA size standards?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SBA size standards? (a) SBA's size standards define whether a business entity is small and, thus... Industry Classification System (NAICS). (b) NAICS is described in the North American Industry Classification Manual-United States, which is available from the National Technical Information Service, 5285...

  11. 13 CFR 121.101 - What are SBA size standards?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SBA size standards? (a) SBA's size standards define whether a business entity is small and, thus... Industry Classification System (NAICS). (b) NAICS is described in the North American Industry Classification Manual-United States, which is available from the National Technical Information Service, 5285...

  12. 7 CFR 28.119 - Fee when request for classification is withdrawn.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....119 Section 28.119 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND... Cotton Standards Act Fees and Costs § 28.119 Fee when request for classification is withdrawn. When the...

  13. a Novel 3d Intelligent Fuzzy Algorithm Based on Minkowski-Clustering

    NASA Astrophysics Data System (ADS)

    Toori, S.; Esmaeily, A.

    2017-09-01

    Assessing and monitoring the state of the earth's surface is a key requirement for global change research. In this paper, we propose a new consensus fuzzy clustering algorithm that is based on the Minkowski distance. This research concentrates on Tehran's vegetation mass and its changes over 29 years using remote sensing technology. The main purpose of this research is to evaluate the changes in vegetation mass using a new process that combines intelligent NDVI fuzzy clustering with the Minkowski distance operation. The dataset includes Landsat 8 and Landsat TM images from 1989 to 2016. For each year, three images from three consecutive days were used to identify vegetation impact and recovery. The result was a 3D NDVI image, with one dimension for each day's NDVI. The next step was the classification procedure, a complicated process of categorizing pixels into a finite number of separate classes based on their data values: if a pixel satisfies a certain set of criteria, it is allocated to the class that corresponds to those criteria. This method is less sensitive to noise and can integrate solutions from multiple samples of data or attributes. The result was a fuzzy one-dimensional image, which was also computed for each of the remaining 28 years. The classification was done in both specified urban and natural park areas of Tehran. Experiments showed that our method worked better at classifying image pixels than the standard classification methods.
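
    The per-pixel NDVI underlying the clustering above is the standard index (NIR - Red) / (NIR + Red); a toy sketch, with invented band values rather than the study's Landsat data:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands for one day.
nir = np.array([[0.5, 0.4], [0.3, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
day1 = ndvi(nir, red)

# Stacking the NDVI of three consecutive days gives the 3D NDVI cube
# described above (here the same day repeated as a placeholder).
cube = np.stack([day1, day1, day1], axis=-1)
print(cube.shape)  # (2, 2, 3)
```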

  14. Standardization--the iron cage of nurses' work?

    PubMed

    Meum, Torbjørg; Wangensteen, Gro; Igesund, Harald; Ellingsen, Gunnar; Monteiro, Eric

    2010-01-01

    This paper explores how nursing classification has been adopted and used in a local clinical practice. The study is inspired by the socio-technical approach to information system and illustrates some of the enabling and constraining properties of standardization. Findings from the study show how international standards have been embedded into local practice. At the same time, the use of locally developed standards has increased and many of these are similar to the international classification. This indicates that we need to move beyond the dichotomous perspective on nurses' use of classification and strive for more flexible solutions.

  15. Contributions for classification of platelet rich plasma - proposal of a new classification: MARSPILL.

    PubMed

    Lana, Jose Fabio Santos Duarte; Purita, Joseph; Paulus, Christian; Huber, Stephany Cares; Rodrigues, Bruno Lima; Rodrigues, Ana Amélia; Santana, Maria Helena; Madureira, João Lopo; Malheiros Luzo, Ângela Cristina; Belangero, William Dias; Annichino-Bizzacchi, Joyce Maria

    2017-07-01

    Platelet-rich plasma (PRP) has emerged as a significant therapy used in medical conditions with heterogeneous results. There are some important classifications that try to standardize the PRP procedure. The aim of this report is to describe PRP contents by studying cellular and molecular components, and also to propose a new classification for PRP. The main focus is on mononuclear cells, which comprise progenitor cells and monocytes. In addition, important variables related to PRP application are incorporated in this study: the harvest method, activation, red blood cells, number of spins, image guidance, leukocyte number and light activation. The other focus is the discussion of the presence of progenitor cells in peripheral blood, which are of interest due to neovasculogenesis and proliferation. The function of monocytes (as tissue macrophages) is discussed here, as well as their plasticity, a potential property for regenerative medicine treatments.

  16. International Standard Classification of Education (ISCED) Three Stage Classification System: 1973; Part 2 - Definitions.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    The seven levels of education, as classified numerically by International Standard Classification of Education (ISCED), are defined along with courses, programs, and fields of education listed under each level. Also contained is an alphabetical subject index indicating appropriate code numbers. For related documents see TM003535 and TM003536. (RC)

  17. The 2010 Standard Occupational Classification (SOC): A Classification System Gets an Update

    ERIC Educational Resources Information Center

    Emmel, Alissa; Cosca, Theresa

    2010-01-01

    Making sense of occupational data isn't always easy. But the task is less daunting when the data are well organized. For Federal occupational statistics, the Standard Occupational Classification (SOC) system establishes that organization. And a recent revision to the SOC means that the data will be current, in addition to being well organized. The…

  18. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
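
    A heavily simplified sketch of the pipeline described above: raw byte windows stand in for true n-gram statistics, and the synthetic "fragments", dictionary size, and scikit-learn components are illustrative assumptions rather than the paper's setup:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

# Toy "file fragments" of two synthetic types (low vs high byte values).
rng = np.random.default_rng(1)
frag_a = rng.integers(0, 64, size=(30, 16))
frag_b = rng.integers(192, 256, size=(30, 16))
X = np.vstack([frag_a, frag_b]).astype(float) / 255.0

# Learn a small sparse dictionary; the sparse codes play the role of the
# automatically extracted features described above.
dico = DictionaryLearning(n_components=8, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
codes = dico.fit_transform(X)

# Train a standard classifier (here a linear SVM) on the sparse codes.
y = np.array([0] * 30 + [1] * 30)
clf = LinearSVC(dual=False).fit(codes, y)
print(codes.shape, clf.score(codes, y))
```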

  19. Improved Classification of Lung Cancer Using Radial Basis Function Neural Network with Affine Transforms of Voss Representation.

    PubMed

    Adetiba, Emmanuel; Olugbara, Oludayo O

    2015-01-01

    Lung cancer is one of the diseases responsible for a large number of cancer related death cases worldwide. The recommended standard for screening and early detection of lung cancer is the low dose computed tomography. However, many patients diagnosed die within one year, which makes it essential to find alternative approaches for screening and early detection of lung cancer. We present computational methods that can be implemented in a functional multi-genomic system for classification, screening and early detection of lung cancer victims. Samples of top ten biomarker genes previously reported to have the highest frequency of lung cancer mutations and sequences of normal biomarker genes were respectively collected from the COSMIC and NCBI databases to validate the computational methods. Experiments were performed based on the combinations of Z-curve and tetrahedron affine transforms, Histogram of Oriented Gradient (HOG), Multilayer perceptron and Gaussian Radial Basis Function (RBF) neural networks to obtain an appropriate combination of computational methods to achieve improved classification of lung cancer biomarker genes. Results show that a combination of affine transforms of Voss representation, HOG genomic features and Gaussian RBF neural network perceptibly improves classification accuracy, specificity and sensitivity of lung cancer biomarker genes as well as achieving low mean square error.
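
    The Voss representation mentioned above is a standard numerical encoding of a DNA sequence as four binary indicator sequences, one per nucleotide; a minimal sketch (the toy sequence is made up):

```python
import numpy as np

def voss(seq):
    """Voss representation: one binary indicator row per nucleotide A, C, G, T."""
    seq = seq.upper()
    return np.array([[1 if s == b else 0 for s in seq] for b in "ACGT"])

v = voss("ACGTA")
print(v.shape)  # (4, 5): four indicator sequences, one entry per base
```

    Each column contains exactly one 1, so the four rows jointly encode the sequence; affine transforms can then be applied to this numerical signal as the abstract describes.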

  20. Rapid classification of heavy metal-exposed freshwater bacteria by infrared spectroscopy coupled with chemometrics using supervised method

    NASA Astrophysics Data System (ADS)

    Gurbanov, Rafig; Gozen, Ayse Gul; Severcan, Feride

    2018-01-01

    Rapid, cost-effective, sensitive and accurate methodologies to classify bacteria are still in the process of development. The major drawbacks of standard microbiological, molecular and immunological techniques call for the possible usage of infrared (IR) spectroscopy based supervised chemometric techniques. Previous applications of IR based chemometric methods have demonstrated outstanding findings in the classification of bacteria. Therefore, we have exploited an IR spectroscopy based supervised chemometric method, namely the Soft Independent Modeling of Class Analogy (SIMCA) technique, for the first time to classify heavy metal-exposed bacteria, to be used in the selection of suitable bacteria for evaluating their potential in environmental cleanup applications. Herein, we present the powerful differentiation and classification of laboratory strains (Escherichia coli and Staphylococcus aureus) and environmental isolates (Gordonia sp. and Microbacterium oxydans) of bacteria exposed to growth inhibitory concentrations of silver (Ag), cadmium (Cd) and lead (Pb). Our results demonstrated that SIMCA was able to differentiate all heavy metal-exposed and control groups from each other at a 95% confidence level. Correct identification of randomly chosen test samples in their corresponding groups and high model distances between the classes were also achieved. We report, for the first time, the success of IR spectroscopy coupled with the supervised chemometric technique SIMCA in the classification of different bacteria under a given treatment.
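
    The core of SIMCA is a separate principal-component model per class, with new samples assigned by their residual distance to each class model; a minimal sketch using scikit-learn's PCA, where the synthetic "spectra", component count, and 95th-percentile threshold are assumptions for illustration, not the study's settings:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_simca(X_class, n_components=2):
    """Fit a per-class PCA model plus a residual-distance acceptance threshold."""
    pca = PCA(n_components=n_components).fit(X_class)
    resid = X_class - pca.inverse_transform(pca.transform(X_class))
    threshold = np.percentile(np.linalg.norm(resid, axis=1), 95)
    return pca, threshold

def simca_distance(pca, X):
    """Distance from each sample to the class's principal-component subspace."""
    resid = X - pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(resid, axis=1)

rng = np.random.default_rng(2)
# Two synthetic "spectral" classes with different baseline shapes.
class_a = rng.normal(0, 1, (40, 10)) + np.linspace(0, 3, 10)
class_b = rng.normal(0, 1, (40, 10)) - np.linspace(0, 3, 10)
models = [fit_simca(c) for c in (class_a, class_b)]

# Assign a few (held-in) class-A samples to the nearest class model.
sample = class_a[:5]
d = np.array([simca_distance(pca, sample) for pca, _ in models])
print(d.argmin(axis=0))
```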

  1. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE PAGES

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...

    2018-04-05

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.

  2. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    PubMed

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92·18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
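
    The two segmentation measures named above, pixel accuracy and Intersection over Union, can be computed from label masks roughly as follows (toy 2x3 masks, not the study's evaluation code):

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return (pred == truth).mean()

def mean_iou(pred, truth, n_classes):
    """Mean per-class Intersection over Union, skipping absent classes."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

truth = np.array([[0, 0, 1], [1, 1, 2]])
pred = np.array([[0, 1, 1], [1, 1, 2]])
print(pixel_accuracy(pred, truth))  # 5 of 6 pixels correct
print(mean_iou(pred, truth, n_classes=3))
```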

  3. Injuries of the Medial Clavicle: A Cohort Analysis in a Level-I-Trauma-Center. Concomitant Injuries. Management. Classification.

    PubMed

    Bakir, Mustafa Sinan; Merschin, David; Unterkofler, Jan; Guembel, Denis; Langenbach, Andreas; Ekkernkamp, Axel; Schulz-Drost, Stefan

    2017-01-01

    Introduction: Although shoulder girdle injuries are frequent, those of the medial clavicle are widely unexplored, and neither an applied classification nor a standard management is in common use. Methods: A retrospective analysis of medial clavicle injuries (MCI) over a 5-year term in a Level-1 trauma center. We analyzed, among other factors, concomitant injuries, therapy strategies and the classification following the AO standards. Results: 19 (2.5%) of 759 clavicle injuries were medial ones (11 type A, 6 type B and 2 type C fractures); 27.8% of these were displaced and thus treated operatively. Locked plate osteosynthesis was employed in unstable fractures, with reconstruction of the ligaments at the sternoclavicular joint (SCJ) in case of their disruption. 84.2% of the patients sustained relevant concomitant injuries. Numerous midshaft fractures were miscoded as medial fractures, which limited the study population. Conclusions: MCI resulted from high-impact mechanisms of injury, often with relevant dislocation and concomitant injuries. Given the complexity of medial injuries, treatment should occur in specialized hospitals. Unstable fractures and injuries of the SCJ ligaments should be considered for operative treatment. Midshaft fractures should be clearly distinguished from medial ones in ICD-10 coding. Further studies are also required regarding a subtyping of the AO classification for medial clavicle fractures, including ligamental injuries.

  4. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information relevant to the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from different images of the SITS data and then combined into a composite kernel using an MKL algorithm. The composite kernel, once constructed, can be used for classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL. The experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that this strategy was able to provide better performance than the standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of this strategy.
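
    The composite-kernel idea can be sketched as follows; fixed equal weights stand in for the weights a real MKL algorithm would learn, and the two random "acquisition dates" and scikit-learn usage are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Toy stand-in for a time series of images: two "dates", 3 features per pixel.
X_date1 = rng.normal(0, 1, (80, 3))
X_date2 = rng.normal(0, 1, (80, 3))
y = (X_date1[:, 0] + X_date2[:, 0] > 0).astype(int)  # synthetic crop labels

# One base kernel per acquisition date, combined into a composite kernel
# (an MKL-Sum-style combination with fixed, unlearned weights).
K = 0.5 * rbf_kernel(X_date1) + 0.5 * rbf_kernel(X_date2)

# Any kernel-based classifier can consume the composite kernel directly.
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```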

  5. Recent machine learning advancements in sensor-based mobility analysis: Deep learning for Parkinson's disease assessment.

    PubMed

    Eskofier, Bjoern M; Lee, Sunghoon I; Daneault, Jean-Francois; Golabchi, Fatemeh N; Ferreira-Carvalho, Gabriela; Vergara-Diaz, Gloria; Sapienza, Stefano; Costante, Gianluca; Klucken, Jochen; Kautz, Thomas; Bonato, Paolo

    2016-08-01

    The development of wearable sensors has opened the door for long-term assessment of movement disorders. However, there is still a need for developing methods suitable to monitor motor symptoms in and outside the clinic. The purpose of this paper was to investigate deep learning as a method for this monitoring. Deep learning recently broke records in speech and image classification, but it has not been fully investigated as a potential approach to analyze wearable sensor data. We collected data from ten patients with idiopathic Parkinson's disease using inertial measurement units. Several motor tasks were expert-labeled and used for classification. We specifically focused on the detection of bradykinesia. For this, we compared standard machine learning pipelines with deep learning based on convolutional neural networks. Our results showed that deep learning outperformed other state-of-the-art machine learning algorithms by at least 4.6 % in terms of classification rate. We contribute a discussion of the advantages and disadvantages of deep learning for sensor-based movement assessment and conclude that deep learning is a promising method for this field.
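
    The basic building block of such a convolutional network, a 1-D convolution over one sensor channel followed by a nonlinearity and pooling, can be illustrated in plain NumPy (the filter values and pooling width are invented, not the paper's architecture):

```python
import numpy as np

# Toy accelerometer trace standing in for one inertial-sensor channel.
signal = np.sin(np.linspace(0, 8 * np.pi, 200))

# One convolutional filter (here an edge/velocity-like kernel), then a
# ReLU nonlinearity and non-overlapping max pooling, the basic operations
# a CNN layer applies to sensor windows.
kernel = np.array([1.0, 0.0, -1.0])
feature_map = np.convolve(signal, kernel, mode="valid")  # length 200 - 3 + 1 = 198
activation = np.maximum(feature_map, 0)                  # ReLU
pooled = activation.reshape(-1, 6).max(axis=1)           # max pool, width 6
print(pooled.shape)  # (33,)
```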

  6. Removal of BCG artifacts using a non-Kirchhoffian overcomplete representation.

    PubMed

    Dyrholm, Mads; Goldman, Robin; Sajda, Paul; Brown, Truman R

    2009-02-01

    We present a nonlinear unmixing approach for extracting the ballistocardiogram (BCG) from EEG recorded in an MR scanner during simultaneous acquisition of functional MRI (fMRI). First, an overcomplete basis is identified in the EEG based on a custom multipath EEG electrode cap. Next, the overcomplete basis is used to infer non-Kirchhoffian latent variables that are not consistent with a conservative electric field. Neural activity is strictly Kirchhoffian while the BCG artifact is not, and the representation can hence be used to remove the artifacts from the data in a way that does not attenuate the neural signals needed for optimal single-trial classification performance. We compare our method to more standard methods for BCG removal, namely independent component analysis and optimal basis sets, by looking at single-trial classification performance for an auditory oddball experiment. We show that our overcomplete representation method for removing BCG artifacts results in better single-trial classification performance compared to the conventional approaches, indicating that the derived neural activity in this representation retains the complex information in the trial-to-trial variability.

  7. Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials

    DTIC Science & Technology

    2010-05-01

    FINAL REPORT Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials SERDP Project WP-1478 MAY 2010 Dr. Chris Schwier, Metabolix...biopolymer, biodegradable, polyhydroxyalkanoate...Acronyms and Definitions: ASTM – American Society for Testing and Materials; ISO – International Organization for Standardization; PHA – Polyhydroxyalkanoates

  8. 76 FR 70833 - National Emission Standards for Hazardous Air Pollutant Emissions for Primary Lead Processing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-15

    ... Classification System. \\2\\ Maximum Achievable Control Technology. Table 2 is not intended to be exhaustive, but..., methods, systems, or techniques that reduce the volume of or eliminate HAP emissions through process changes, substitution of materials, or other modifications; enclose systems or processes to eliminate...

  9. Automated ancillary cancer history classification for mesothelioma patients from free-text clinical reports

    PubMed Central

    Wilson, Richard A.; Chapman, Wendy W.; DeFries, Shawn J.; Becich, Michael J.; Chapman, Brian E.

    2010-01-01

    Background: Clinical records are often unstructured, free-text documents that create information extraction challenges and costs. Healthcare delivery and research organizations, such as the National Mesothelioma Virtual Bank, require the aggregation of both structured and unstructured data types. Natural language processing offers techniques for automatically extracting information from unstructured, free-text documents. Methods: Five hundred and eight history and physical reports from mesothelioma patients were split into development (208) and test sets (300). A reference standard was developed and each report was annotated by experts with regard to the patient’s personal history of ancillary cancer and family history of any cancer. The Hx application was developed to process reports, extract relevant features, perform reference resolution and classify them with regard to cancer history. Two methods, Dynamic-Window and ConText, for extracting information were evaluated. Hx’s classification responses using each of the two methods were measured against the reference standard. The average Cohen’s weighted kappa served as the human benchmark in evaluating the system. Results: Hx had a high overall accuracy, with each method, scoring 96.2%. F-measures using the Dynamic-Window and ConText methods were 91.8% and 91.6%, which were comparable to the human benchmark of 92.8%. For the personal history classification, Dynamic-Window scored highest with 89.2% and for the family history classification, ConText scored highest with 97.6%, in which both methods were comparable to the human benchmark of 88.3% and 97.2%, respectively. Conclusion: We evaluated an automated application’s performance in classifying a mesothelioma patient’s personal and family history of cancer from clinical reports. 
To do so, the Hx application must process reports, identify cancer concepts, distinguish the known mesothelioma from ancillary cancers, recognize negation, perform reference resolution and determine the experiencer. Results indicated that both information extraction methods tested were dependent on the domain-specific lexicon and negation extraction. We showed that the more general method, ConText, performed as well as our task-specific method. Although Dynamic-Window could be modified to retrieve other concepts, ConText is more robust and performs better on inconclusive concepts. Hx could greatly improve and expedite the process of extracting data from free-text clinical records for a variety of research or healthcare delivery organizations. PMID:21031012
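
    The ConText method the abstract names decides whether a clinical concept is negated and who experienced it (patient vs. family member) by scanning for trigger phrases whose scope covers the concept. A minimal sketch of that idea follows; the trigger lexicons and the simplified "scope runs forward from the trigger" rule are illustrative assumptions, not the published algorithm, which uses curated trigger lists, scope terminators and a temporality axis.

```python
# Hypothetical trigger lexicons, loosely modeled on ConText-style
# negation and experiencer detection. All phrases here are assumptions.
NEGATION_TRIGGERS = ["no evidence of", "denies", "no history of", "without"]
EXPERIENCER_TRIGGERS = ["family history of", "mother had", "father had"]

def classify_mention(sentence, concept):
    """Return (negated, experiencer) for a cancer concept in a sentence.

    A trigger opens a scope running to the end of the sentence; the
    concept is affected only if it appears after the trigger.
    """
    s = sentence.lower()
    pos = s.find(concept.lower())
    if pos < 0:
        return None  # concept not mentioned
    negated = any(s.rfind(t, 0, pos) >= 0 for t in NEGATION_TRIGGERS)
    experiencer = "family" if any(
        s.rfind(t, 0, pos) >= 0 for t in EXPERIENCER_TRIGGERS) else "patient"
    return negated, experiencer

print(classify_mention("Patient denies any prior melanoma.", "melanoma"))
print(classify_mention("Family history of colon cancer.", "colon cancer"))
```

    A full implementation would also close a trigger's scope at conjunctions such as "but", which this sketch omits.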

  10. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one review...

  11. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one review...

  12. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one review...

  13. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one review...

  14. 75 FR 56549 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-16

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337, Hyattsville, Maryland 20782, e...

  15. 7 CFR 51.2559 - Size classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of...

  16. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers, adapted to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
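
    The energy-based event detection front end mentioned in the abstract can be sketched as frame-energy thresholding against an estimated noise floor: frames whose short-term energy exceeds a multiple of the floor are grouped into candidate events. All parameter values below (frame length, threshold ratio, the 10th-percentile floor estimate) are illustrative assumptions, not the paper's settings.

```python
def detect_events(samples, frame_len=256, threshold_ratio=4.0):
    """Return (start, end) frame indices of contiguous high-energy runs."""
    # Split the signal into non-overlapping frames and compute mean energy.
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [sum(x * x for x in f) / frame_len for f in frames]
    # Estimate the noise floor as a low percentile of frame energies.
    noise_floor = sorted(energies)[len(energies) // 10]
    active = [e > threshold_ratio * max(noise_floor, 1e-12) for e in energies]
    # Merge consecutive active frames into events.
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(active)))
    return events

# A quiet signal with one loud burst in the middle.
sig = [0.01] * 1024 + [1.0] * 512 + [0.01] * 1024
print(detect_events(sig))
```

    The detected segments would then be passed to the isolated-sound classifiers the paper benchmarks.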

  17. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-word conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.

  18. Sparse kernel methods for high-dimensional survival data.

    PubMed

    Evers, Ludger; Messow, Claudia-Martina

    2008-07-15

    Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another, akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.
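
    The kernelization argument in the abstract rests on the form of Cox's partial likelihood, which involves the covariates only through the linear predictor. In generic notation (event indicator \delta_i, event times t_i, risk set R(t_i), covariates x_i), a sketch of the step:

```latex
% Cox partial likelihood (product over observed events):
L(\beta) = \prod_{i:\,\delta_i = 1}
  \frac{\exp\!\big(\beta^\top x_i\big)}
       {\sum_{j \in R(t_i)} \exp\!\big(\beta^\top x_j\big)}
% Replacing the linear predictor by a kernel expansion in representer
% form, f(x) = \sum_k \alpha_k K(x_k, x), leaves the likelihood
% dependent on the data only through kernel evaluations:
L(\alpha) = \prod_{i:\,\delta_i = 1}
  \frac{\exp\!\big(\sum_k \alpha_k K(x_k, x_i)\big)}
       {\sum_{j \in R(t_i)} \exp\!\big(\sum_k \alpha_k K(x_k, x_j)\big)}
```

    The paper's contribution is then to make the vector of coefficients \alpha sparse, so that the fitted model depends on only a few observations, which the plain kernelized fit does not achieve.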

  19. Oromucosal film preparations: classification and characterization methods.

    PubMed

    Preis, Maren; Woertz, Christina; Kleinebudde, Peter; Breitkreutz, Jörg

    2013-09-01

    Recently, the regulatory authorities have enlarged the variety of 'oromucosal preparations' by buccal films and orodispersible films. Various film preparations have entered the market and pharmacopoeias. Due to the novelty of the official monographs, no standardized characterization methods and quality specifications are included. This review reports the methods of choice to characterize oromucosal film preparations with respect to biorelevant characterization and quality control. Commonly used dissolution tests for other dosage forms are not transferable to films in all cases. Alternatives, and guidance on deciding which methods are favorable for film preparations, are discussed. Furthermore, issues concerning requirements for film dosage forms are considered. Oromucosal film preparations offer a wide spectrum of opportunities. There are many suggestions in the literature on how to control the quality of these innovative products, but no standardized tests are available. Regulatory authorities need to define the standards and quality requirements more precisely.

  20. Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences

    PubMed Central

    2014-01-01

    Background The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structure between datasets, and its incomplete removal by pre-processing algorithms. Methods To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, combining different pre-processing techniques in an ensemble improved classification for a majority of signatures. Conclusions Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers. PMID:24902696
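
    The ensemble idea can be sketched as: compute the signature's per-patient call under each pre-processing variant, then combine the calls by majority vote. The two toy "pre-processing" transforms below (z-scoring and rank normalization) and the median cutoff are illustrative assumptions; real microarray pipelines such as RMA differ substantially.

```python
def zscore(values):
    # Toy stand-in for a normalization pipeline.
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [(v - m) / sd for v in values]

def rank_transform(values):
    # Toy stand-in for a rank-based normalization pipeline.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r / (len(values) - 1)
    return ranks

def ensemble_calls(raw_scores, preprocessors):
    """Call each sample positive per pipeline, then majority-vote."""
    votes = [0] * len(raw_scores)
    for prep in preprocessors:
        processed = prep(raw_scores)
        cutoff = sorted(processed)[len(processed) // 2]  # median split
        for i, v in enumerate(processed):
            votes[i] += v > cutoff
    return [v * 2 > len(preprocessors) for v in votes]

scores = [0.2, 1.5, 0.9, 2.4, 0.1]
print(ensemble_calls(scores, [zscore, rank_transform]))
```

    Because both toy transforms are monotonic the vote is unanimous here; with real pipelines the per-pipeline calls can disagree, which is where the ensemble helps.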

  1. Understanding the use of standardized nursing terminology and classification systems in published research: A case study using the International Classification for Nursing Practice(®).

    PubMed

    Strudwick, Gillian; Hardiker, Nicholas R

    2016-10-01

    In the era of evidence-based healthcare, nursing is required to demonstrate that care provided by nurses is associated with optimal patient outcomes, and a high degree of quality and safety. The use of standardized nursing terminologies and classification systems is a way that nursing documentation can be leveraged to generate evidence related to nursing practice. Several widely reported nursing-specific terminologies and classification systems currently exist, including the Clinical Care Classification System, International Classification for Nursing Practice(®), Nursing Intervention Classification, Nursing Outcome Classification, Omaha System, Perioperative Nursing Data Set and NANDA International. However, the influence of these systems on demonstrating the value of nursing and the profession's impact on quality, safety and patient outcomes in published research is relatively unknown. This paper seeks to understand the use of standardized nursing terminology and classification systems in published research, using the International Classification for Nursing Practice(®) as a case study. A systematic review of international published empirical studies on, or using, the International Classification for Nursing Practice(®) was completed using Medline and the Cumulative Index for Nursing and Allied Health Literature. Since 2006, 38 studies have been published on the International Classification for Nursing Practice(®). The main objectives of the published studies have been to validate the appropriateness of the classification system for particular care areas or populations, further develop the classification system, or utilize it to support the generation of new nursing knowledge. To date, most studies have focused on the classification system itself, and a lesser number of studies have used the system to generate information about the outcomes of nursing practice.
Based on the published literature that features the International Classification for Nursing Practice, standardized nursing terminology and classification systems appear to be well developed for various populations and settings, and to harmonize with other health-related terminology systems. However, the use of the systems to generate new nursing knowledge, and to validate nursing practice, is still in its infancy. There is an opportunity now to utilize the well-developed systems in their current state to further what is known about nursing practice, and how best to demonstrate improvements in patient outcomes through nursing care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Understanding Homicide-Suicide.

    PubMed

    Knoll, James L

    2016-12-01

    Homicide-suicide is the phenomenon in which an individual kills 1 or more people and commits suicide. Research on homicide-suicide has been hampered by a lack of an accepted classification scheme and reliance on media reports. Mass murder-suicide is gaining increasing attention particularly in the United States. This article reviews the research and literature on homicide-suicide, proposing a standard classification scheme. Preventive methods are discussed and sociocultural factors explored. For a more accurate and complete understanding of homicide-suicide, it is argued that future research should use the full psychological autopsy approach, to include collateral interviews. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. The challenge of monitoring elusive large carnivores: An accurate and cost-effective tool to identify and sex pumas (Puma concolor) from footprints.

    PubMed

    Alibhai, Sky; Jewell, Zoe; Evans, Jonah

    2017-01-01

    Acquiring reliable data on large felid populations is crucial for effective conservation and management. However, large felids, typically solitary, elusive and nocturnal, are difficult to survey. Tagging and following individuals with VHF or GPS technology is the standard approach, but costs are high and these methodologies can compromise animal welfare. Such limitations can restrict the use of these techniques at population or landscape levels. In this paper we describe a robust technique to identify and sex individual pumas from footprints. We used a standardized image collection protocol to collect a reference database of 535 footprints from 35 captive pumas over 10 facilities; 19 females (300 footprints) and 16 males (235 footprints), ranging in age from 1-20 yrs. Images were processed in JMP data visualization software, generating 123 measurements from each footprint. Data were analyzed using a customized model based on a pairwise trail comparison using robust cross-validated discriminant analysis with a Ward's clustering method. Classification accuracy was consistently > 90% for individual identification and for the correct classification of footprints within trails, and > 99% for sex classification. The technique has the potential to greatly augment the methods available for studying puma and other elusive felids, and is amenable to both citizen-science and opportunistic/local community data collection efforts, particularly as the data collection protocol is inexpensive and intuitive.
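
    The pairwise trail-comparison framing can be illustrated very simply: each trail is a set of footprint feature vectors, and two trails are attributed to the same individual when their mean feature vectors are close. The study's actual model (robust cross-validated discriminant analysis with Ward's clustering over 123 measurements) is far more elaborate; the measurements and threshold below are made-up toy values.

```python
def trail_mean(trail):
    """Mean feature vector over the footprints in one trail."""
    n = len(trail)
    return [sum(f[i] for f in trail) / n for i in range(len(trail[0]))]

def same_individual(trail_a, trail_b, threshold=1.0):
    """Attribute two trails to one animal if their means are close."""
    ma, mb = trail_mean(trail_a), trail_mean(trail_b)
    dist = sum((a - b) ** 2 for a, b in zip(ma, mb)) ** 0.5
    return dist < threshold

# Two trails from one (hypothetical) puma and one from another.
puma1_a = [[6.1, 5.0, 3.2], [6.0, 5.1, 3.3], [6.2, 4.9, 3.1]]
puma1_b = [[6.0, 5.0, 3.2], [6.1, 5.2, 3.3]]
puma2 = [[7.5, 6.1, 4.0], [7.4, 6.0, 4.1]]
print(same_individual(puma1_a, puma1_b))
print(same_individual(puma1_a, puma2))
```

    In the published method the decision rule is learned from the reference database rather than fixed by a hand-chosen threshold.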

  4. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  5. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  6. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  7. Comparison of three methods for long-term monitoring of boreal lake area using Landsat TM and ETM+ imagery

    USGS Publications Warehouse

    Roach, Jennifer K.; Griffith, Brad; Verbyla, David

    2012-01-01

    Programs to monitor lake area change are becoming increasingly important in high latitude regions, and their development often requires evaluating tradeoffs among different approaches in terms of accuracy of measurement, consistency across multiple users over long time periods, and efficiency. We compared three supervised methods for lake classification from Landsat imagery (density slicing, classification trees, and feature extraction). The accuracy of lake area and number estimates was evaluated relative to high-resolution aerial photography acquired within two days of satellite overpasses. The shortwave infrared band 5 was better at separating surface water from nonwater when used alone than when combined with other spectral bands. The simplest of the three methods, density slicing, performed best overall. The classification tree method resulted in the most omission errors (approx. 2x), feature extraction resulted in the most commission errors (approx. 4x), and density slicing had the least directional bias (approx. half of the lakes with overestimated area and half of the lakes with underestimated area). Feature extraction was the least consistent across training sets (i.e., large standard error among different training sets). Density slicing was the best of the three at classifying small lakes as evidenced by its lower optimal minimum lake size criterion of 5850 m2 compared with the other methods (8550 m2). Contrary to conventional wisdom, the use of additional spectral bands and a more sophisticated method not only required additional processing effort but also had a cost in terms of the accuracy and consistency of lake classifications.
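
    Density slicing, the best-performing method in this comparison, is simply a single-band threshold: pixels whose shortwave-infrared (band 5) value falls below a cutoff are labeled water. The reflectance values and threshold below are illustrative assumptions, not the study's calibrated values.

```python
def density_slice(band5, threshold=0.05):
    """Label pixels as water (1) where band-5 reflectance is low."""
    return [[1 if v < threshold else 0 for v in row] for row in band5]

def lake_area(mask, pixel_area_m2=900.0):
    """Total water area, assuming 30 m Landsat pixels (900 m2 each)."""
    return sum(map(sum, mask)) * pixel_area_m2

# A tiny toy scene: low values are open water, high values are land.
scene = [[0.30, 0.02, 0.03],
         [0.28, 0.01, 0.25],
         [0.31, 0.29, 0.27]]
mask = density_slice(scene)
print(mask)
print(lake_area(mask))
```

    The study's optimal minimum-lake-size criterion (5850 m2 for density slicing) would then be applied to discard clusters of water pixels smaller than that area.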

  8. 7 CFR 28.180 - Issuance of cotton classification memoranda.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Issuance of cotton classification memoranda. 28.180... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.180 Issuance of cotton classification memoranda. As soon as practicable after the classification or...

  9. 7 CFR 28.180 - Issuance of cotton classification memoranda.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Issuance of cotton classification memoranda. 28.180... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.180 Issuance of cotton classification memoranda. As soon as practicable after the classification or...

  10. 7 CFR 28.180 - Issuance of cotton classification memoranda.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Issuance of cotton classification memoranda. 28.180... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.180 Issuance of cotton classification memoranda. As soon as practicable after the classification or...

  11. 7 CFR 28.180 - Issuance of cotton classification memoranda.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Issuance of cotton classification memoranda. 28.180... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.180 Issuance of cotton classification memoranda. As soon as practicable after the classification or...

  12. 7 CFR 28.180 - Issuance of cotton classification memoranda.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Issuance of cotton classification memoranda. 28.180... CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.180 Issuance of cotton classification memoranda. As soon as practicable after the classification or...

  13. Reference Standard Test and the Diagnostic Ability of Spectral Domain Optical Coherence Tomography in Glaucoma.

    PubMed

    Rao, Harsha L; Yadav, Ravi K; Addepalli, Uday K; Begum, Viquar U; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S

    2015-08-01

    To evaluate the relationship between the reference standard used to diagnose glaucoma and the diagnostic ability of spectral domain optical coherence tomography (SDOCT). In a cross-sectional study, 280 eyes of 175 consecutive subjects, referred to a tertiary eye care center for glaucoma evaluation, underwent optic disc photography, visual field (VF) examination, and SDOCT examination. The cohort was divided into glaucoma and control groups based on 3 reference standards for glaucoma diagnosis: the first based on optic disc classification (179 glaucoma and 101 control eyes), the second on VF classification (glaucoma hemifield test outside normal limits and pattern SD with P-value of <5%; 130 glaucoma and 150 control eyes), and the third on the presence of both glaucomatous optic disc and glaucomatous VF (125 glaucoma and 155 control eyes). The relationship between the reference standards and the diagnostic parameters of SDOCT was evaluated using areas under the receiver operating characteristic curve, sensitivity, and specificity. Areas under the receiver operating characteristic curve and sensitivities of most of the SDOCT parameters obtained with the 3 reference standards (ranging from 0.74 to 0.88 and 72% to 88%, respectively) were comparable (P>0.05). However, specificities of SDOCT parameters were significantly greater (P<0.05) with optic disc classification as the reference standard (74% to 88%) compared with VF classification as the reference standard (57% to 74%). The diagnostic parameter of SDOCT that was significantly affected by the reference standard was specificity, which was greater with optic disc classification as the reference standard. This has to be considered when comparing the diagnostic ability of SDOCT across studies.
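
    The core point of the abstract is that sensitivity and specificity are computed against whichever reference standard defines "disease present", so the same test can score differently under different standards. A minimal sketch with made-up toy labels:

```python
def sensitivity_specificity(test_positive, disease_present):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t and d for t, d in zip(test_positive, disease_present))
    tn = sum(not t and not d for t, d in zip(test_positive, disease_present))
    fp = sum(t and not d for t, d in zip(test_positive, disease_present))
    fn = sum(not t and d for t, d in zip(test_positive, disease_present))
    return tp / (tp + fn), tn / (tn + fp)

# One test result graded against two different reference standards.
test = [True, True, True, False, False, False]
ref_disc = [True, True, False, False, False, False]  # disc-based standard
ref_vf = [True, True, True, True, False, False]      # VF-based standard
print(sensitivity_specificity(test, ref_disc))
print(sensitivity_specificity(test, ref_vf))
```

    With these toy labels the identical test trades sensitivity against specificity purely because the reference standard changed, which is the effect the study quantifies for SDOCT.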

  14. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a region-merge algorithm growing upward from the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. The segmented objects were then used as the classification units to calculate spectral information such as mean, maximum, minimum, brightness and normalized values. The area, length, tightness and shape rule of each image object, together with texture features such as the mean, variance and entropy of image objects, were used as classification features of training samples. Based on reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository.
Finally, with the help of high-resolution reference images, a random sampling method was used to conduct the field investigation, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the results obtained from the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
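
    The two accuracy-assessment statistics reported above (overall accuracy and the Kappa coefficient) are both computed from a confusion matrix of sampled map labels against field labels. A minimal sketch; the 3x3 matrix below is toy data, not the study's error matrix.

```python
def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal.
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected agreement under chance: sum of row-total * column-total / n^2.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))) / (n * n)
    return observed, (observed - expected) / (1 - expected)

cm = [[45, 3, 2],
      [4, 38, 3],
      [1, 2, 42]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(round(acc, 4), round(kappa, 4))
```

    Kappa discounts the agreement that random labeling would produce, which is why it is reported alongside overall accuracy in land-cover work.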

  15. Land cover's refined classification based on multi source of remote sensing information fusion: a case study of national geographic conditions census in China

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Zhang, Jialong; Zheng, Xinyan; Yuan, Rujin

    2018-03-01

    The First National Geographic Conditions Census project, developed by the Chinese government, has designed the data acquisition content and indexes and built a corresponding classification system based mainly on the natural properties of materials. However, no unified standard for a land cover classification system has been formed, and the products often need conversion to meet actual needs. This paper therefore proposes a refined classification method based on the fusion of multi-source remote sensing information. Taking the third-level classes of forest land and grassland as examples, it draws on the thematic data of the Vegetation Map of China (1:1,000,000) and develops the refined classification with a raster spatial analysis model. A study area was selected and the refined classification achieved using the proposed method. The results show that land cover within the study area is divided principally among 20 classes, from subtropical broad-leaved forest (31131) to the grass-forb community type of low-coverage grassland (41192). Moreover, after 30 years, the climatic factors, developmental rhythm characteristics and vegetation ecological-geographical characteristics of the study area have not changed fundamentally; only some of the original vegetation types have changed in spatial distribution range or land cover type. The research shows that refined classification of the third-level classes of forest land and grassland lets the results carry both the original natural attributes and plant community ecology characteristics, which can meet the needs of some industry applications and has practical significance for promoting the products of The First National Geographic Conditions Census.
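
    The raster spatial analysis step can be sketched as an overlay reclassification: each coarse census class is intersected with the co-registered vegetation raster, and the pair is mapped to a refined third-level class through a lookup table. Only the two class codes quoted in the abstract (31131, 41192) are real; every other name and the 0 "unresolved" code are made-up placeholders.

```python
# Hypothetical lookup table: (census class, vegetation type) -> refined code.
REFINE = {
    ("forest", "subtropical_broadleaf"): 31131,
    ("grassland", "low_coverage_grass_forb"): 41192,
}

def refine(census_raster, vegetation_raster):
    """Cell-wise overlay of two co-registered categorical rasters."""
    return [
        [REFINE.get((c, v), 0) for c, v in zip(crow, vrow)]
        for crow, vrow in zip(census_raster, vegetation_raster)
    ]

census = [["forest", "grassland"],
          ["forest", "grassland"]]
veg = [["subtropical_broadleaf", "low_coverage_grass_forb"],
       ["unknown", "low_coverage_grass_forb"]]
print(refine(census, veg))
```

    Cells whose pair is absent from the table fall back to 0, flagging them for manual review rather than guessing a class.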

  16. A Cradle-to-Grave Integrated Approach to Using UNIFORMAT II

    ERIC Educational Resources Information Center

    Schneider, Richard C.; Cain, David A.

    2009-01-01

    The ASTM E1557/UNIFORMAT II standard is a three-level, function-oriented classification which links the schematic-phase Preliminary Project Descriptions (PPD), based on Construction Specifications Institute (CSI) Practice FF/180, to elemental cost estimates based on R.S. Means Cost Data. With the UNIFORMAT II Standard Classification for Building…

  17. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.
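
    The classification back end described above is a k-nearest-neighbour vote over extracted features. The paper's modified k-NN and its MRDF nonlinear feature extraction are not reproduced here; this is a plain k-NN sketch over made-up 2-D feature vectors standing in for the nut features.

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Majority vote over the k training points closest to the query."""
    dist = sorted(
        range(len(train)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], query)))
    return Counter(labels[i] for i in dist[:k]).most_common(1)[0][0]

# Toy feature vectors for inspected items (e.g. after feature extraction).
features = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.15, 0.15)]
classes = ["clean", "clean", "damaged", "damaged", "clean"]
print(knn_predict(features, classes, (0.12, 0.18)))
print(knn_predict(features, classes, (0.85, 0.85)))
```

    The value of a feature extractor like the MRDF is to map the raw measurements into a space where such a neighbourhood vote separates the classes well.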

  18. A summary of recent developments in transportation hazard classification activities for ammonium perchlorate

    NASA Technical Reports Server (NTRS)

    Koller, A. M., Jr.; Hannum, J. A. E.

    1983-01-01

    The transportation hazard classification of Ammonium Perchlorate is discussed. A test program was completed and data were forwarded to retain a Class 5.1 designation (oxidizer) for AP which is shipped internationally. As a follow-on to the initial team effort to conduct AP tests existing data were examined and a matrix which catalogs test parameters and findings was compiled. A collection of test protocols is developed to standardize test methods for energetic materials of all types. The actions to date are summarized; the participating organizations and their roles as presently understood; specific findings on AP (matrix); and issues, lessons learned, and potential actions of particular interest to the propulsion community which may evolve as a result of future U.N. propellant transportation classification activities.

  19. Comparison of geometric morphometric outline methods in the discrimination of age-related differences in feather shape

    PubMed Central

    Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R

    2006-01-01

    Background Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross-validation rate of assignment may be optimized using the variable-number-of-PC-axes method presented herein. PMID:16978414
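
    The variable-number-of-PC-axes idea can be sketched as: project the high-dimensional shape data onto k principal components, score a classifier by cross-validation, and keep the k with the best rate instead of fixing it in advance. The sketch below substitutes a nearest-centroid classifier with leave-one-out validation for the CVA step, on synthetic data; all of it is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_scores(X, n_axes):
    """Project centered data onto the leading n_axes principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_axes].T

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        cents = {c: X[mask][y[mask] == c].mean(axis=0) for c in np.unique(y)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        hits += pred == y[i]
    return hits / len(X)

# Two synthetic groups separated along one latent direction, embedded in
# 20 noisy outline descriptors.
latent = np.concatenate([np.zeros(15), np.ones(15)])
X = np.outer(latent, rng.normal(size=20)) + 0.3 * rng.normal(size=(30, 20))
y = np.array(["A"] * 15 + ["B"] * 15)

# Choose the number of PC axes by cross-validated assignment rate.
best = max(range(1, 11), key=lambda k: loo_accuracy(pca_scores(X, k), y))
print(best, loo_accuracy(pca_scores(X, best), y))
```

    Tuning k on the same cross-validation used for the final rate is optimistic; a nested validation split would be the careful version of this selection.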

  20. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
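The combination step can be sketched generically: two unsupervised estimates of the same decoder parameter vector are mixed with weights inversely proportional to their estimated variances, so the less reliable estimator contributes less. This is an illustrative sketch of the mixing idea, not the authors' exact estimator:

```python
def mix_estimates(w1, v1, w2, v2):
    """Inverse-variance weighted combination of two parameter estimates.
    w1, w2: parameter vectors from the two decoders (e.g. LLP and EM);
    v1, v2: their estimated variances. A generic sketch of the mixing idea."""
    a, b = 1.0 / v1, 1.0 / v2
    return [(a * x + b * y) / (a + b) for x, y in zip(w1, w2)]

# Equal variances give the midpoint; a noisier second estimate is down-weighted.
mid = mix_estimates([0.0], 1.0, [1.0], 1.0)
skewed = mix_estimates([0.0], 1.0, [1.0], 3.0)
```

When one decoder's variance is high (as with a poor random initialization of EM), its weight shrinks and the combined decoder leans on the other method, which is the compensation behavior described above.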

  1. The European standard for sun-protective clothing: EN 13758.

    PubMed

    Gambichler, T; Laperre, J; Hoffmann, K

    2006-02-01

    Clothing is considered one of the most important tools for sun protection. Contrary to popular opinion, however, some summer fabrics provide insufficient ultraviolet (UV) protection. The European Committee for Standardization (CEN) has developed a new standard on requirements for test methods and labelling of sun-protective garments. This document has now been completed and is published. Within CEN, a working group, CEN/TC 248 WG14 'UV protective clothing', was set up with the mission to produce standards on the UV-protective properties of textile materials. This working group started its activities in 1998 and included 30 experts (dermatologists, physicists, textile technologists, fabric manufacturers and retailers of apparel textiles) from 11 European member states. Within this working group, all medical, ethical, technical and economic aspects of standardization of UV-protective clothing were discussed on the basis of the expertise of each member and in consideration of the relevant literature in this field. Decisions were made in consensus. The first part of the standard (EN 13758-1) deals with all details of test methods (e.g. spectrophotometric measurements) for textile materials and part 2 (EN 13758-2) covers classification and marking of apparel textiles. UV-protective clothes for which compliance with this standard is claimed must fulfill all stringent instructions of testing, classification and marking, including a UV protection factor (UPF) larger than 40 (UPF 40+), average UVA transmission lower than 5%, and design requirements as specified in part 2 of the standard. A pictogram, which is marked with the number of the standard EN 13758-2 and the UPF of 40+, shall be attached to the garment if it is in compliance with the standard. The dermatology community should take cognizance of this new standard document.
Garment manufacturers and retailers may now follow these official guidelines for testing and labelling of UV-protective summer clothes, and the sun-aware consumer can easily recognize garments that definitely provide sufficient UV protection.

  2. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  3. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  4. High-Altitude Electromagnetic Pulse (HEMP) Testing

    DTIC Science & Technology

    2011-11-10

    Security Classification Guide (SCG). b. The HEMP simulation facility shall have a measured map of the peak amplitude waveform of the... Quadripartite Standardization Agreements; sec, second; SCG, security classification guide; SN, serial number; SOP, standard operating procedure

  5. Validation of the Lung Subtyping Panel in Multiple Fresh-Frozen and Formalin-Fixed, Paraffin-Embedded Lung Tumor Gene Expression Data Sets.

    PubMed

    Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla

    2016-06-01

    Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomic-defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets.
The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.

  6. Methods of classification for women undergoing induction of labour: a systematic review and novel classification system.

    PubMed

    Nippita, T A; Khambalia, A Z; Seeho, S K; Trevena, J A; Patterson, J A; Ford, J B; Morris, J M; Roberts, C L

    2015-09-01

    A lack of reproducible methods for classifying women having an induction of labour (IOL) has led to controversies regarding IOL and related maternal and perinatal health outcomes. To evaluate articles that classify IOL and to develop a novel IOL classification system. Electronic searches using CINAHL, EMBASE, WEB of KNOWLEDGE, and reference lists. Two reviewers independently assessed studies that classified women having an IOL. For the systematic review, data were extracted on study characteristics, quality, and results. Pre-specified criteria were used for evaluation. A multidisciplinary collaboration developed a new classification system using a clinically logical model and stakeholder feedback, demonstrating applicability in a population cohort of 909 702 maternities in New South Wales, Australia, over the period 2002-2011. All seven studies included in the systematic review categorised women according to the presence or absence of varying medical indications for IOL. Evaluation identified uncertainties or deficiencies across all studies, related to the criteria of total inclusivity, reproducibility, clinical utility, implementability, and data availability. A classification system of ten groups was developed based on parity, previous caesarean, gestational age, number, and presentation of the fetus. Nulliparous and parous women at full term were the largest groups (21.2 and 24.5%, respectively), and accounted for the highest proportion of all IOL (20.7 and 21.5%, respectively). Current methods of classifying women undertaking IOL based on medical indications are inadequate. We propose a classification system that has the attributes of simplicity and clarity, uses information that is readily and reliably collected, and enables the standard characterisation of populations of women having an IOL across and within jurisdictions. © 2015 Royal College of Obstetricians and Gynaecologists.

  7. 7 CFR 28.181 - Review of cotton classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Review of cotton classification. 28.181 Section 28.181... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.181 Review of cotton classification. A review of any classification or comparison made pursuant to this subpart...

  8. 7 CFR 28.181 - Review of cotton classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Review of cotton classification. 28.181 Section 28.181... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.181 Review of cotton classification. A review of any classification or comparison made pursuant to this subpart...

  9. 7 CFR 28.181 - Review of cotton classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Review of cotton classification. 28.181 Section 28.181... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.181 Review of cotton classification. A review of any classification or comparison made pursuant to this subpart...

  10. 7 CFR 28.181 - Review of cotton classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Review of cotton classification. 28.181 Section 28.181... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.181 Review of cotton classification. A review of any classification or comparison made pursuant to this subpart...

  11. 7 CFR 28.181 - Review of cotton classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Review of cotton classification. 28.181 Section 28.181... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.181 Review of cotton classification. A review of any classification or comparison made pursuant to this subpart...

  12. 7 CFR 30.31 - Classification of leaf tobacco.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification of leaf tobacco. 30.31 Section 30.31... REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.31 Classification of leaf tobacco. For the purpose of this classification leaf tobacco shall...

  13. 7 CFR 30.31 - Classification of leaf tobacco.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification of leaf tobacco. 30.31 Section 30.31... REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.31 Classification of leaf tobacco. For the purpose of this classification leaf tobacco shall...

  14. 7 CFR 30.31 - Classification of leaf tobacco.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification of leaf tobacco. 30.31 Section 30.31... REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.31 Classification of leaf tobacco. For the purpose of this classification leaf tobacco shall...

  15. 7 CFR 30.31 - Classification of leaf tobacco.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification of leaf tobacco. 30.31 Section 30.31... REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.31 Classification of leaf tobacco. For the purpose of this classification leaf tobacco shall...

  16. 7 CFR 30.31 - Classification of leaf tobacco.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification of leaf tobacco. 30.31 Section 30.31... REGULATIONS TOBACCO STOCKS AND STANDARDS Classification of Leaf Tobacco Covering Classes, Types and Groups of Grades § 30.31 Classification of leaf tobacco. For the purpose of this classification leaf tobacco shall...

  17. 7 CFR 51.1904 - Maturity classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Maturity classification. 51.1904 Section 51.1904 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Maturity Classification § 51.1904 Maturity classification. Tomatoes which are characteristically red when...

  18. 7 CFR 51.1904 - Maturity classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Maturity classification. 51.1904 Section 51.1904 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Maturity Classification § 51.1904 Maturity classification. Tomatoes which are characteristically red when...

  19. 7 CFR 28.183 - Fees and costs; payment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Fees and costs; payment. 28.183 Section 28.183... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.183 Fees and costs; payment. The provisions of §§ 28.115 through 28.126 relating to fees, costs, and method of...

  20. Sonification and Visualization of Predecisional Information Search: Identifying Toolboxes in Children

    ERIC Educational Resources Information Center

    Betsch, Tilmann; Wünsche, Kirsten; Großkopf, Armin; Schröder, Klara; Stenmans, Rachel

    2018-01-01

    Prior evidence has suggested that preschoolers and elementary schoolers search information largely with no systematic plan when making decisions in probabilistic environments. However, this finding might be due to the insensitivity of standard classification methods that assume a lack of variance in decision strategies for tasks of the same kind.…

  1. DARPA Antibody Technology Program Standardized Test Bed for Antibody Characterization: Characterization of an MS2 Human IgG Antibody Produced by AnaptysBio, Inc.

    DTIC Science & Technology

    2016-02-01

    Enzyme-linked immunosorbent assay (ELISA) quality testing of the MS2 coat protein (MS2CP). Report fragment; the listed sections cover the ELISA and SPR methods and the corresponding ELISA and SPR results.

  2. Tribological Technology. Volume II.

    DTIC Science & Technology

    1982-09-01

    rolling bearings, gears, and sliding bearings produce distinctive particles. An atlas of such particles is available. Atlases of characteristic... Gravitational methods cover both sedimentation and elutriation techniques. Inertial-type separators perform cyclonic classification. Ferrography is the... generated after each size exposure of contaminant. This can be done today using Ferrography. Standard contaminant sensitivity tests require test

  3. Risk-Based Prioritization Method for the Classification of Groundwater Pollution from Hazardous Waste Landfills.

    PubMed

    Yang, Yu; Jiang, Yong-Hai; Lian, Xin-Ying; Xi, Bei-Dou; Ma, Zhi-Fei; Xu, Xiang-Jian; An, Da

    2016-12-01

    Hazardous waste landfill sites are a significant source of groundwater pollution. To ensure that these landfills with a significantly high risk of groundwater contamination are properly managed, a risk-based ranking method related to groundwater contamination is needed. In this research, a risk-based prioritization method for the classification of groundwater pollution from hazardous waste landfills was established. The method encompasses five phases, including risk pre-screening, indicator selection, characterization, classification and, lastly, validation. In the risk ranking index system employed here, 14 indicators involving hazardous waste landfills and migration in the vadose zone as well as aquifer were selected. The boundary of each indicator was determined by K-means cluster analysis and the weight of each indicator was calculated by principal component analysis. These methods were applied to 37 hazardous waste landfills in China. The result showed that the risk for groundwater contamination from hazardous waste landfills could be ranked into three classes from low to high risk. In all, 62.2 % of the hazardous waste landfill sites were classified in the low and medium risk classes. The process simulation method and standardized anomalies were used to validate the result of risk ranking; the results were consistent with the simulated results related to the characteristics of contamination. The risk ranking method was feasible, valid and can provide reference data related to risk management for groundwater contamination at hazardous waste landfill sites.
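The boundary-setting step above is easy to illustrate: a 1-D k-means over one indicator's values yields three cluster centers, and midpoints between adjacent centers serve as low/medium/high class boundaries. A minimal Python sketch with made-up indicator values (the paper clusters real indicator data and derives indicator weights separately via principal component analysis):

```python
import bisect

def kmeans_1d(values, k=3, iters=50):
    """Plain 1-D k-means (the min/median/max start assumes k <= 3),
    standing in for the cluster analysis used to set each risk
    indicator's class boundaries."""
    vals = sorted(values)
    centers = [vals[0], vals[len(vals) // 2], vals[-1]][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in vals:
            j = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

def class_boundaries(centers):
    """Midpoints between adjacent cluster centers."""
    return [(a + b) / 2 for a, b in zip(centers, centers[1:])]

def risk_class(x, boundaries):
    """Class index for an indicator value: 0 = low, 1 = medium, 2 = high."""
    return bisect.bisect(boundaries, x)

# Hypothetical indicator values forming three clear clusters.
centers = kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.1, 4.9, 9.0, 9.2, 8.8])
bounds = class_boundaries(centers)
```

A weighted sum of such per-indicator class scores (weights from PCA, as in the paper) would then give the overall risk ranking.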

  4. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. Standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA) are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
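As a concrete sketch of the feature extraction, the Haar wavelet (db1, the simplest of the families listed) gives first-level coefficients whose standard deviation forms one feature value per signal. A minimal Python illustration; whether the paper pools approximation and detail coefficients is not stated, so pooling both here is an assumption:

```python
import math
import statistics

def haar_dwt_level1(signal):
    """One level of the Haar (db1) wavelet transform: pairwise scaled
    sums (approximation) and differences (detail) of the samples."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def std_feature(signal):
    """Feature in the spirit of the paper: standard deviation of the
    first-level decomposition coefficients (approximation + detail
    pooled, which is this sketch's assumption)."""
    approx, detail = haar_dwt_level1(signal)
    return statistics.pstdev(approx + detail)

# A constant signal has zero detail coefficients, so the feature reduces
# to the spread between approximation and detail values.
feat = std_feature([1.0, 1.0, 1.0, 1.0])
```

One such feature per signal (or per image row/channel), computed for each wavelet order, would populate the feature vectors fed to the ANN, kNN, or LDA classifiers.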

  5. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    PubMed

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; and (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data respectively.
Classification performance was lower for the local database than the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms as compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
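The "mean minus n standard deviations" normal-limit rule above is simple to make concrete. The sketch below (Python, with hypothetical SBR values) flags an SBR as abnormal when it falls below the control-derived limit, since reduced striatal binding is the abnormal direction in these scans:

```python
import statistics

def normal_limit(control_sbrs, n_sd=1.5):
    """One of the semi-quantification rules compared in the study:
    'mean minus n standard deviations of age-matched controls'."""
    return statistics.mean(control_sbrs) - n_sd * statistics.stdev(control_sbrs)

def classify_sbr(sbr, limit):
    """SBRs below the limit are read as abnormal (reduced binding)."""
    return "abnormal" if sbr < limit else "normal"

# Hypothetical control SBRs for one striatal region.
controls = [2.0, 2.2, 1.8, 2.1, 1.9]
limit = normal_limit(controls, n_sd=1.5)
```

Varying `n_sd` over 1/1.5/2 reproduces the family of thresholds the study compared; the other rules (control minimum, age regression, ROC operating point) differ only in how the limit is derived.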

  6. [Research on identification of cabbages and weeds combining spectral imaging technology and SAM taxonomy].

    PubMed

    Zu, Qin; Zhang, Shui-fa; Cao, Yang; Zhao, Hui-yi; Dang, Chang-qing

    2015-02-01

    Automatic weed identification is the key technique, and also the bottleneck, for implementing variable-rate spraying and precision pesticide application. Accurate, rapid and non-destructive automatic identification of weeds has therefore become an important research direction for precision agriculture. A hyperspectral imaging system was used to capture images of cabbage seedlings and five kinds of weeds (pigweed, barnyard grass, goosegrass, crabgrass and setaria) over the wavelength range 1000 to 2500 nm. In ENVI, the MNF rotation was used for noise reduction and de-correlation of the hyperspectral data, reducing the band dimensions from 256 to 11; regions of interest were then extracted to build a spectral library of standard spectra, and the SAM taxonomy was used to identify cabbages and weeds, giving good classification when the spectral angle threshold was set to 0.1 radians. In HSI Analyzer, after selecting training pixels to obtain the standard spectrum, the SAM taxonomy was used to distinguish weeds from cabbages. Furthermore, to measure weed recognition accuracy quantitatively, statistics for weeds and non-weeds were obtained by comparing the best-performing SAM classification image to a manual classification image. The experimental results demonstrated that, with the parameters set to 5-point smoothing, 0-order derivative and a 7-degree spectral angle, the best classification result was obtained: recognition rates for weeds, non-weeds and overall samples were 80%, 97.3% and 96.8%, respectively. The method, combining spectral imaging technology with the SAM taxonomy, took full advantage of fused spectral and image information.
    By applying spatial classification algorithms to establish training sets for spectral identification, the method checks similarity among spectral vectors at the pixel level, integrates the advantages of spectra and images while balancing accuracy and speed, and extends detection over the full field of view so that weeds can be found both between and within crop rows. It thus contributes analysis tools and means to applications in precision agricultural management that require accurate plant information.
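The SAM rule itself reduces to an angle computation between spectral vectors. A minimal Python sketch with a hypothetical two-entry spectral library; the 0.1-radian threshold follows the abstract:

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper metric: the angle (radians) between two
    spectra viewed as vectors. Small angles mean similar spectral shape,
    and the measure is insensitive to overall brightness scaling."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, library, threshold=0.1):
    """Assign the pixel to the library spectrum with the smallest angle,
    or None when no angle falls under the threshold."""
    best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
    return best if spectral_angle(pixel, library[best]) <= threshold else None

# Hypothetical 3-band library spectra; a brighter but identically shaped
# pixel still matches because SAM ignores the scaling.
library = {"cabbage": [1.0, 2.0, 3.0], "pigweed": [3.0, 2.0, 1.0]}
label = sam_classify([2.0, 4.0, 6.0], library)
```

Real use would run this per pixel over the 11 MNF bands against library spectra extracted from the regions of interest.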

  7. The Standard for Clinicians’ Interview in Psychiatry (SCIP): A Clinician-administered Tool with Categorical, Dimensional, and Numeric Output—Conceptual Development, Design, and Description of the SCIP

    PubMed Central

    Aboraya, Ahmed; Nasrallah, Henry; Muvvala, Srinivas; El-Missiry, Ahmed; Mansour, Hader; Hill, Cheryl; Elswick, Daniel; Price, Elizabeth C.

    2016-01-01

    Existing standardized diagnostic interviews (SDIs) were designed for researchers and produce mainly categorical diagnoses. There is an urgent need for a clinician-administered tool that produces dimensional measures, in addition to categorical diagnoses. The Standard for Clinicians’ Interview in Psychiatry (SCIP) is a method of assessment of psychopathology for adults. It is designed to be administered by clinicians and includes the SCIP manual and the SCIP interview. Clinicians use the SCIP questions and rate the responses according to the SCIP manual rules. Clinicians use the patient’s responses to questions, observe the patient’s behaviors and make the final rating of the various signs and symptoms assessed. The SCIP method of psychiatric assessment has three components: 1) the SCIP interview (dimensional) component, 2) the etiological component, and 3) the disorder classification component. The SCIP produces three main categories of clinical data: 1) a diagnostic classification of psychiatric disorders, 2) dimensional scores, and 3) numeric data. The SCIP provides diagnoses consistent with criteria from editions of the Diagnostic and Statistical Manual (DSM) and International Classification of Disease (ICD). The SCIP produces 18 dimensional measures for key psychiatric signs or symptoms: anxiety, posttraumatic stress, obsessions, compulsions, depression, mania, suicidality, suicidal behavior, delusions, hallucinations, agitation, disorganized behavior, negativity, catatonia, alcohol addiction, drug addiction, attention, and hyperactivity. The SCIP produces numeric severity data for use in either clinical care or research. The SCIP was shown to be a valid and reliable assessment tool, and the validity and reliability results were published in 2014 and 2015. The SCIP is compatible with personalized psychiatry research and is in line with the Research Domain Criteria framework. PMID:27800284

  8. The Standard for Clinicians' Interview in Psychiatry (SCIP): A Clinician-administered Tool with Categorical, Dimensional, and Numeric Output-Conceptual Development, Design, and Description of the SCIP.

    PubMed

    Aboraya, Ahmed; Nasrallah, Henry; Muvvala, Srinivas; El-Missiry, Ahmed; Mansour, Hader; Hill, Cheryl; Elswick, Daniel; Price, Elizabeth C

    2016-01-01

    Existing standardized diagnostic interviews (SDIs) were designed for researchers and produce mainly categorical diagnoses. There is an urgent need for a clinician-administered tool that produces dimensional measures, in addition to categorical diagnoses. The Standard for Clinicians' Interview in Psychiatry (SCIP) is a method of assessment of psychopathology for adults. It is designed to be administered by clinicians and includes the SCIP manual and the SCIP interview. Clinicians use the SCIP questions and rate the responses according to the SCIP manual rules. Clinicians use the patient's responses to questions, observe the patient's behaviors and make the final rating of the various signs and symptoms assessed. The SCIP method of psychiatric assessment has three components: 1) the SCIP interview (dimensional) component, 2) the etiological component, and 3) the disorder classification component. The SCIP produces three main categories of clinical data: 1) a diagnostic classification of psychiatric disorders, 2) dimensional scores, and 3) numeric data. The SCIP provides diagnoses consistent with criteria from editions of the Diagnostic and Statistical Manual (DSM) and International Classification of Disease (ICD). The SCIP produces 18 dimensional measures for key psychiatric signs or symptoms: anxiety, posttraumatic stress, obsessions, compulsions, depression, mania, suicidality, suicidal behavior, delusions, hallucinations, agitation, disorganized behavior, negativity, catatonia, alcohol addiction, drug addiction, attention, and hyperactivity. The SCIP produces numeric severity data for use in either clinical care or research. The SCIP was shown to be a valid and reliable assessment tool, and the validity and reliability results were published in 2014 and 2015. The SCIP is compatible with personalized psychiatry research and is in line with the Research Domain Criteria framework.

  9. Machine-Learning Algorithms to Code Public Health Spending Accounts

    PubMed Central

    Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David

    2017-01-01

    Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147,280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation. PMID:28363034
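The consensus-ensembling step described above can be sketched as a simple voting rule: a record receives a class only when enough of the individual algorithms agree, and is otherwise left unclassified. A minimal Python sketch (the function and label names are illustrative, not from the paper):

```python
from collections import Counter

def consensus_label(votes, min_agree=6):
    """Return the majority label if at least `min_agree` of the
    individual algorithms concur, else None (record unclassified).
    `votes` is a list of labels such as "public health",
    "maybe public health", or "not public health"."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= min_agree else None

# Example: 9 algorithms voting on one summary expenditure record.
votes = ["PH", "PH", "PH", "PH", "PH", "PH", "PH", "not PH", "maybe PH"]
print(consensus_label(votes))               # "PH": 7 of 9 agree
print(consensus_label(votes, min_agree=8))  # None: below consensus
```

Raising `min_agree` trades coverage for recall, which is exactly the ≥6-algorithm trade-off the paper reports.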

  10. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
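One way to see why over-sampling discordant results can still yield valid accuracy estimates is inverse-probability weighting: each gold-standard-verified unit in the second phase is weighted by the reciprocal of its sampling probability. The sketch below is a hedged illustration of that general idea, not the paper's optimal design; the sampling fractions and field names are invented:

```python
import random
random.seed(0)

# Phase 1: two classification rules applied to every unit (1 = positive).
# Phase 2: the gold standard is ascertained only for a subsample that
# over-samples discordant results (fractions are illustrative).
def sample_phase2(units, p_concordant=0.1, p_discordant=0.8):
    sampled = []
    for u in units:
        p = p_discordant if u["r1"] != u["r2"] else p_concordant
        if random.random() < p:
            sampled.append((u, p))  # keep the sampling probability
    return sampled

def weighted_sensitivity(sampled, rule):
    """Inverse-probability-weighted estimate of
    P(rule positive | gold standard positive)."""
    num = sum(u[rule] / p for u, p in sampled if u["gold"] == 1)
    den = sum(1.0 / p for u, p in sampled if u["gold"] == 1)
    return num / den

# Synthetic population in which rule r1 agrees with the gold standard.
units = [{"r1": int(i % 3 > 0), "r2": int(i % 2 == 0), "gold": int(i % 3 > 0)}
         for i in range(200)]
phase2 = sample_phase2(units)
print(weighted_sensitivity(phase2, "r1"))  # 1.0 by construction
```

The difference in sensitivities between two rules is then the difference of two such weighted estimates on the same second-phase sample.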

  11. Methods in hair research: how to objectively distinguish between anagen and catagen in human hair follicle organ culture.

    PubMed

    Kloepper, Jennifer Elisabeth; Sugawara, Koji; Al-Nuaimi, Yusur; Gáspár, Erzsébet; van Beek, Nina; Paus, Ralf

    2010-03-01

    The organ culture of human scalp hair follicles (HFs) is the best currently available assay for hair research in the human system. In order to determine the hair growth-modulatory effects of agents in this assay, one critical read-out parameter is the assessment of whether the test agent has prolonged anagen duration or induced catagen in vitro. However, objective criteria to distinguish between anagen VI and early catagen HFs in human HF organ culture, two hair cycle stages with a deceptively similar morphology, remain to be established. Here, we develop, document and test an objective classification system that allows one to distinguish between anagen VI and early catagen in organ-cultured human HFs, using both qualitative and quantitative parameters that can be generated by light microscopy or immunofluorescence. Seven qualitative classification criteria are defined that are based on assessing the morphology of the hair matrix, the dermal papilla and the distribution of pigmentary markers (melanin, gp100). These are complemented by ten quantitative parameters. We have tested this classification system by employing the clinically used topical hair growth inhibitor, eflornithine, and show that eflornithine indeed produces the expected premature catagen induction, as identified by the novel classification criteria reported here. Therefore, this classification system offers a standardized, objective and reproducible new experimental method to reliably distinguish between human anagen VI and early catagen HFs in organ culture.

  12. CAC-DRS: Coronary Artery Calcium Data and Reporting System. An expert consensus document of the Society of Cardiovascular Computed Tomography (SCCT).

    PubMed

    Hecht, Harvey S; Blaha, Michael J; Kazerooni, Ella A; Cury, Ricardo C; Budoff, Matt; Leipsic, Jonathon; Shaw, Leslee

    2018-03-30

    The goal of CAC-DRS: Coronary Artery Calcium Data and Reporting System is to create a standardized method to communicate findings of CAC scanning on all noncontrast CT scans, irrespective of the indication, in order to facilitate clinical decision-making, with recommendations for subsequent patient management. The CAC-DRS classification is applied on a per-patient basis and represents the total calcium score and the number of involved arteries. General recommendations are provided for further management of patients with different degrees of calcified plaque burden based on CAC-DRS classification. In addition, CAC-DRS will provide a framework of standardization that may benefit quality assurance and tracking patient outcomes with the potential to ultimately result in improved quality of care. Copyright © 2018 Society of Cardiovascular Computed Tomography. All rights reserved.

  13. [Evaluation of eco-environmental quality based on artificial neural network and remote sensing techniques].

    PubMed

    Li, Hongyi; Shi, Zhou; Sha, Jinming; Cheng, Jieliang

    2006-08-01

    In the present study, vegetation, soil brightness, and moisture indices were extracted from a Landsat ETM remote sensing image, heat indices were extracted from the MODIS land surface temperature product, and a climate index and other auxiliary geographical information were selected as the inputs of a neural network. The remote sensing eco-environmental background value of a standard region of interest, evaluated in situ, was selected as the output of the network, and a back-propagation (BP) neural network prediction model containing three layers was designed. The network was trained, and the remote sensing eco-environmental background value of Fuzhou, China was predicted using MATLAB. Class mapping of the predicted background values against the evaluation standard showed a total classification accuracy of 87.8%. The prediction-first, classification-second scheme provided acceptable results consistent with the regional eco-environment types.
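A three-layer back-propagation network of the kind the study trained in MATLAB can be sketched in Python with NumPy. The architecture, learning rate, and synthetic data below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: rows = regions, columns = remote-sensing indices
# (vegetation, brightness, moisture, heat, climate -- illustrative).
X = rng.random((40, 5))
y = (X @ np.array([0.3, -0.2, 0.25, 0.15, 0.1]))[:, None]  # synthetic target

# One sigmoid hidden layer, linear output, plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (5, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 0.5
for _ in range(500):
    h = sig(X @ W1 + b1)               # forward pass
    out = h @ W2 + b2
    err = out - y                      # dLoss/dout for mean squared error
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X)           # output-layer gradients
    gb2 = err.mean(0)
    dh = err @ W2.T * h * (1 - h)      # back-propagate through sigmoid
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2     # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # training error decreases
```

In the study the trained output (the eco-environmental background value) is subsequently binned into classes against the evaluation standard.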

  14. Government information resource catalog and its service system realization

    NASA Astrophysics Data System (ADS)

    Gui, Sheng; Li, Lin; Wang, Hong; Peng, Zifeng

    2007-06-01

    As informatization proceeds, a great deal of government information resources are produced. To manage these resources and use them to serve business management, government decision-making and public life, it is necessary to establish a transparent, dynamic information resource catalog and its service system. This paper takes land-house management information resources as an example. Given the characteristics of this kind of information, the paper classifies, identifies and describes land-house information with a uniform specification and method, and establishes a land-house information resource catalog classification system, metadata standard, identification standard and land-house thematic thesaurus, so that users can conveniently search for and retrieve the information they are interested in over the internet. Moreover, under the network environment, the system achieves rapid positioning, querying, exploring and acquisition of various types of land-house management information, and satisfies the needs of sharing, exchanging, applying and maintaining land-house management information resources.

  15. Surface Electromyography Signal Processing and Classification Techniques

    PubMed Central

    Chowdhury, Rubana H.; Reaz, Mamun B. I.; Ali, Mohd Alauddin Bin Mohd; Bakar, Ashrif A. A.; Chellappan, Kalaivani; Chang, Tae. G.

    2013-01-01

    Electromyography (EMG) signals are becoming increasingly important in many applications, including clinical/biomedical applications, prosthesis and rehabilitation devices, human-machine interaction, and more. However, noisy EMG signals are the major hurdle to overcome in order to achieve improved performance in these applications. Detection, processing and classification analysis of EMG signals are desirable because they allow a more standardized and precise evaluation of neurophysiological, rehabilitative and assistive-technology findings. This paper reviews two prominent areas: first, pre-processing methods for eliminating possible artifacts through appropriate preparation at the time of recording EMG signals; and second, a brief explanation of the different methods for processing and classifying EMG signals. The study then compares the numerous methods of analyzing EMG signals in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to these issues. PMID:24048337

  16. Multi-classification of cell deformation based on object alignment and run length statistic.

    PubMed

    Li, Heng; Liu, Zhiwen; An, Xing; Shi, Yonggang

    2014-01-01

    Cellular morphology is widely applied in digital pathology and is essential for improving our understanding of the basic physiological processes of organisms. One of the main issues of application is to develop efficient methods for cell deformation measurement. We propose an innovative indirect approach to analyze dynamic cell morphology in image sequences. The proposed approach considers both the cellular shape change and cytoplasm variation, and takes each frame in the image sequence into account. The cell deformation is measured by the minimum energy function of object alignment, which is invariant to object pose. Then an indirect analysis strategy is employed to overcome the limitation of gradual deformation by run length statistic. We demonstrate the power of the proposed approach with one application: multi-classification of cell deformation. Experimental results show that the proposed method is sensitive to the morphology variation and performs better than standard shape representation methods.
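The run-length idea can be illustrated on a sequence of per-frame deformation states: run-length encoding summarizes how long each state persists across the image sequence, which is what makes gradual deformation visible indirectly. This is a generic sketch, not the authors' exact statistic:

```python
from itertools import groupby

def run_lengths(labels):
    """Run-length encode a sequence of per-frame states, e.g. a
    thresholded deformation measure over an image sequence."""
    return [(k, len(list(g))) for k, g in groupby(labels)]

def longest_run(labels, state):
    """Length of the longest uninterrupted run of `state` -- one
    simple statistic capturing sustained (gradual) deformation."""
    return max((n for k, n in run_lengths(labels) if k == state), default=0)

seq = [0, 0, 1, 1, 1, 0, 1, 1, 0]     # 1 = frame flagged as deformed
print(run_lengths(seq))    # [(0, 2), (1, 3), (0, 1), (1, 2), (0, 1)]
print(longest_run(seq, 1)) # 3
```

Statistics over such runs (counts, lengths, distributions) can then feed a multi-class classifier of deformation type.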

  17. 7 CFR 51.652 - Classification of defects.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification of defects. 51.652 Section 51.652 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  18. 7 CFR 51.652 - Classification of defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification of defects. 51.652 Section 51.652 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  19. 7 CFR 51.1877 - Classification of defects.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification of defects. 51.1877 Section 51.1877 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  20. 7 CFR 51.652 - Classification of defects.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification of defects. 51.652 Section 51.652 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  1. 7 CFR 51.713 - Classification of defects.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification of defects. 51.713 Section 51.713 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  2. 7 CFR 51.713 - Classification of defects.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification of defects. 51.713 Section 51.713 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  3. 7 CFR 51.652 - Classification of defects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification of defects. 51.652 Section 51.652 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  4. 7 CFR 51.713 - Classification of defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification of defects. 51.713 Section 51.713 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  5. 7 CFR 51.713 - Classification of defects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification of defects. 51.713 Section 51.713 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  6. 7 CFR 51.1877 - Classification of defects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification of defects. 51.1877 Section 51.1877 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  7. 7 CFR 51.1877 - Classification of defects.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification of defects. 51.1877 Section 51.1877 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  8. 7 CFR 51.713 - Classification of defects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification of defects. 51.713 Section 51.713 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  9. 7 CFR 51.1877 - Classification of defects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification of defects. 51.1877 Section 51.1877 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  10. 7 CFR 51.652 - Classification of defects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification of defects. 51.652 Section 51.652 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...

  11. 7 CFR 51.1903 - Size classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classification. 51.1903 Section 51.1903 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Maturity Classification § 51.1903 Size classification. The following terms may be used for describing the...

  12. 7 CFR 51.1903 - Size classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classification. 51.1903 Section 51.1903 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Maturity Classification § 51.1903 Size classification. The following terms may be used for describing the...

  13. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper describes the 3D ROC surface metric in detail and presents an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
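The standard two-dimensional ROC curve that the 3D surface extends can be computed by sweeping the detection threshold over the observed scores and integrating the resulting (FPR, TPR) points with the trapezoidal rule, as in this minimal pure-Python sketch:

```python
def roc_points(scores, labels):
    """Sweep the detection threshold over all scores and return the
    sorted (FPR, TPR) points, including the (0,0) and (1,1) ends."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = {(0.0, 0.0), (1.0, 1.0)}
    for t in scores:
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        pts.add((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Toy detector scores with ground-truth fault labels (1 = fault).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 1, 0, 1, 0]
pts = roc_points(scores, labels)
print(auc(pts))  # 0.875
```

The 3D metric replaces this area with volumes under and between detection and classification surfaces, integrated numerically in the same spirit.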

  14. International Standards for Neurological Classification of Spinal Cord Injury: cases with classification challenges.

    PubMed

    Kirshblum, S C; Biering-Sorensen, F; Betz, R; Burns, S; Donovan, W; Graves, D E; Johansen, M; Jones, L; Mulcahey, M J; Rodriguez, G M; Schmidt-Read, M; Steeves, J D; Tansey, K; Waring, W

    2014-03-01

    The International Standards for the Neurological Classification of Spinal Cord Injury (ISNCSCI) is routinely used to determine the levels of injury and to classify the severity of the injury. Questions are often posed to the International Standards Committee of the American Spinal Injury Association regarding the classification. The committee felt that disseminating some of the challenging questions posed, as well as the responses, would be of benefit for professionals utilizing the ISNCSCI. Case scenarios that were submitted to the committee are presented with the responses as well as the thought processes considered by the committee members. The importance of this documentation is to clarify some points as well as update the SCI community regarding possible revisions that will be needed in the future based upon some rules that require clarification.

  15. Automatic Building Detection based on Supervised Classification using High Resolution Google Earth Images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, S.; Ghaffarian, S.

    2014-08-01

    This paper presents a novel approach to building detection that automates the training-area collection stage of supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double-thresholding technique. Next, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating statistics over each buffer zone collected from the building areas, an Improved Parallelepiped Supervised Classification is executed to detect the buildings. Standard deviation thresholding is applied to the Parallelepiped classification method to improve its accuracy. Finally, simple morphological operations are conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measurements (Precision, Recall and F1-score) for both pixel-based and object-based evaluation. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using the proposed method achieve overall precisions of 88.4 % (pixel-based) and 85.3 % (object-based), respectively.

  16. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
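The proposed control-based normalization amounts to z-scoring every subject with means and standard deviations estimated from the control group alone, rather than from the whole sample. A small pure-Python sketch (the data and control indices are illustrative):

```python
import statistics

def control_normalize(features, control_idx):
    """Z-score each feature (column) using the mean and SD estimated
    from the control-group rows only, then apply to every subject."""
    cols = list(zip(*features))
    out_cols = []
    for col in cols:
        ctrl = [col[i] for i in control_idx]
        mu, sd = statistics.mean(ctrl), statistics.stdev(ctrl)
        out_cols.append([(v - mu) / sd for v in col])
    return [list(row) for row in zip(*out_cols)]

# Rows = subjects, columns = imaging features; first three are controls.
X = [[1.0, 10.0], [2.0, 12.0], [3.0, 14.0], [9.0, 40.0]]
Xn = control_normalize(X, control_idx=[0, 1, 2])
print(Xn[0])  # [-1.0, -1.0]: controls center near zero
print(Xn[3])  # [7.0, 14.0]: the patient row keeps its separation
```

Because the patient-versus-control separation no longer inflates the normalizing variance, features that discriminate the groups are not down-weighted.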

  17. The borderline range of toxicological methods: Quantification and implications for evaluating precision.

    PubMed

    Leontaridou, Maria; Urbisch, Daniel; Kolle, Susanne N; Ott, Katharina; Mulliner, Denis S; Gabbert, Silke; Landsiedel, Robert

    2017-01-01

    Test methods to assess the skin sensitization potential of a substance usually use threshold criteria to dichotomize continuous experimental read-outs into yes/no conclusions. The threshold criteria are prescribed in the respective OECD test guidelines and the conclusion is used for regulatory hazard assessment, i.e., classification and labelling of the substance. We can identify a borderline range (BR) around the classification threshold within which test results are inconclusive due to a test method's biological and technical variability. We quantified BRs in the prediction models of the non-animal test methods DPRA, LuSens and h-CLAT, and of the animal test LLNA, respectively. Depending on the size of the BR, we found that between 6% and 28% of the substances in the sets tested with these methods were considered borderline. When the results of individual non-animal test methods were combined into integrated testing strategies (ITS), borderline test results of individual tests also affected the overall assessment of the skin sensitization potential of the testing strategy. This was analyzed for the 2-out-of-3 ITS: Four out of 40 substances (10%) were considered borderline. Based on our findings we propose expanding the standard binary classification of substances into "positive"/"negative" or "hazardous"/"non-hazardous" by adding a "borderline" or "inconclusive" alert for cases where test results fall within the borderline range.
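The borderline-range idea can be sketched as a three-way decision rule: read-outs falling within the BR around the classification threshold are flagged as inconclusive rather than forced into a yes/no call. The numbers below are illustrative, not the BRs quantified in the paper:

```python
def classify_with_borderline(readout, threshold, br_half_width):
    """Three-way call: results within the borderline range (BR)
    around the classification threshold are flagged inconclusive."""
    if abs(readout - threshold) <= br_half_width:
        return "borderline"
    return "positive" if readout >= threshold else "negative"

# Illustrative values only -- the paper derives each method's BR from
# its biological and technical variability.
print(classify_with_borderline(1.8, threshold=1.6, br_half_width=0.3))  # borderline
print(classify_with_borderline(2.5, threshold=1.6, br_half_width=0.3))  # positive
```

In an integrated testing strategy such as the 2-out-of-3 ITS, a "borderline" output from one method can then propagate to an inconclusive overall assessment.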

  18. Automatic Identification of Critical Follow-Up Recommendation Sentences in Radiology Reports

    PubMed Central

    Yetisgen-Yildiz, Meliha; Gunn, Martin L.; Xia, Fei; Payne, Thomas H.

    2011-01-01

    Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and improve patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports. PMID:22195225

  19. Automatic identification of critical follow-up recommendation sentences in radiology reports.

    PubMed

    Yetisgen-Yildiz, Meliha; Gunn, Martin L; Xia, Fei; Payne, Thomas H

    2011-01-01

    Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and improve patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports.

  20. 7 CFR 28.910 - Classification of samples and issuance of classification data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... classification data. 28.910 Section 28.910 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY... of classification data. (a)(1) The samples submitted as provided in the subpart shall be classified...

  1. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    NASA Astrophysics Data System (ADS)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectrum of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.
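The two-wavelength discrimination can be sketched as a simple amplitude-ratio rule. The cutoff midway between the reported ratios (about 1.4:1 for pigmented and 4.0:1 for vascular lesions) and the orientation of the ratio are assumptions for illustration; the paper's actual classifiers are multivariate and Bayesian, not this one-line rule:

```python
def classify_lesion(amp_a, amp_b, cutoff=2.7):
    """Classify a skin lesion from photoacoustic amplitudes at the two
    laser wavelengths. Pigmented lesions show ratios near 1.4:1 and
    vascular lesions near 4.0:1; the midpoint cutoff and the ratio
    orientation (amp_b / amp_a) are illustrative assumptions."""
    ratio = amp_b / amp_a
    return "vascular" if ratio >= cutoff else "pigmented"

print(classify_lesion(1.0, 1.4))  # pigmented
print(classify_lesion(1.0, 4.0))  # vascular
```

The distinct absorption spectra of melanin and hemoglobin are what make the two-wavelength ratio informative in the first place.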

  2. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.

    PubMed

    Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y

    2011-05-15

    There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
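The core of a transductive conformal predictor is a p-value computed by tentatively adding the new example to the training set and asking how "strange" it looks relative to the rest. A minimal one-dimensional sketch using a nearest-neighbor nonconformity score (an illustrative choice, not the paper's MRI pipeline):

```python
def nonconformity(x, others):
    """Nonconformity score: distance to the nearest other example --
    one simple choice of strangeness measure."""
    return min(abs(x - o) for o in others)

def conformal_pvalue(train, x_new):
    """Transductive p-value for giving x_new this class label:
    tentatively add it, score every example against the rest, and
    report the fraction of scores at least as strange as x_new's."""
    bag = train + [x_new]
    scores = [nonconformity(b, [o for j, o in enumerate(bag) if j != i])
              for i, b in enumerate(bag)]
    return sum(s >= scores[-1] for s in scores) / len(scores)

# 1-D toy features for one class; at confidence 90% the prediction set
# keeps every candidate label whose p-value exceeds 0.10.
train = [1.0, 1.1, 1.2, 0.9]
print(conformal_pvalue(train, 1.05))  # typical example: high p-value
print(conformal_pvalue(train, 9.0))   # outlier: 0.2 (1 of 5 scores)
```

Repeating this for each candidate label yields both the most likely prediction and a valid confidence level, which is the property TCP adds over a plain SVM.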

  3. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension-reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for a suitable kernel parameter, the difference between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce computational time and thus improve efficiency.
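    To illustrate why the Gaussian scale parameter matters, the kernel itself can be sketched as follows; this shows the standard kernel definition only, not the authors' reconstruction-error selection criterion.

```python
# Standard Gaussian (RBF) kernel: k(x, z) = exp(-||x - z||^2 / (2 sigma^2)).
# A tiny sigma makes the kernel matrix nearly the identity (every point is
# "far" from every other), while a huge sigma makes all entries near 1,
# which is the degeneracy a good parameter-selection rule must avoid.
import math

def gaussian_kernel(x, z, sigma):
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

def kernel_matrix(X, sigma):
    """Gram matrix K[i][j] = k(X[i], X[j]) for a list of feature vectors."""
    return [[gaussian_kernel(xi, xj, sigma) for xj in X] for xi in X]
```

Both extremes destroy the class structure that discriminant analysis needs, which is why a data-driven criterion such as the proposed reconstruction-error contrast is useful.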

  4. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs, and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full-volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinically standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at such structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of the leave-some-out experiments we obtain best mean Dice ratios of 0.79, 0.97, 0.63, and 0.83 for skin, soft tissue, hard bone, and risk structures. Liver structures are segmented with Dice 0.93 for the liver, 0.43 for blood vessels, and 0.39 for bile vessels.

  5. Bosniak classification system: a prospective comparison of CT, contrast-enhanced US, and MR for categorizing complex renal cystic masses.

    PubMed

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens; Hørlyck, Arne; Osther, Palle Jörn Sloth

    2016-11-01

    Background: The Bosniak classification was originally based on computed tomography (CT) findings. Magnetic resonance (MR) and contrast-enhanced ultrasonography (CEUS) imaging may demonstrate findings that are not depicted at CT, and there may not always be a clear correlation between the findings at MR and CEUS imaging and those at CT. Purpose: To compare the diagnostic accuracy of MR, CEUS, and CT when categorizing complex renal cystic masses according to the Bosniak classification. Material and Methods: From February 2011 to June 2012, 46 complex renal cysts were prospectively evaluated by three readers. Each mass was categorized according to the Bosniak classification, with CT chosen as the gold standard. Kappa was calculated for diagnostic accuracy, and data were compared with pathological results. Results: CT identified 27 BII, six BIIF, seven BIII, and six BIV lesions. Forty-three cysts could be characterized by CEUS; 79% were in agreement with CT (κ = 0.86). Five BII lesions were upgraded to BIIF and four lesions were categorized lower with CEUS. Forty-one lesions were examined with MR; 78% were in agreement with CT (κ = 0.91). Three BII lesions were upgraded to BIIF and six lesions were categorized one category lower. Pathologic correlation in six lesions revealed four malignant and two benign lesions. Conclusion: CEUS and MR both up- and downgraded renal cysts compared with CT, and until these non-radiation modalities have been refined and adjusted, CT should remain the gold standard for the Bosniak classification.

  6. Watershed-based Morphometric Analysis: A Review

    NASA Astrophysics Data System (ADS)

    Sukristiyanti, S.; Maria, R.; Lestiana, H.

    2018-02-01

    Drainage basin/watershed analysis based on morphometric parameters is very important for watershed planning. Morphometric analysis of a watershed is the best method to identify the relationships among various aspects of the area. Although many technical papers have dealt with this area of study, there is no standard classification and implication for each parameter, which makes evaluating the value of any single morphometric parameter confusing. This paper deals with the meaning of the values of the various morphometric parameters, with adequate contextual information. A critical review is presented of each classification, the range of values, and their implications. Besides classification and its impact, the authors also consider the quality of the input data, both in data preparation and in the scale/detail level of mapping. This review aims to give a comprehensive explanation to assist upcoming research dealing with morphometric analysis.
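    Two of the standard morphometric parameters such reviews cover are textbook definitions and can be computed directly: drainage density (total stream length over basin area) and the Strahler bifurcation ratio between successive stream orders. The sample values in the test are invented for illustration.

```python
# Textbook morphometric parameters; not specific to this review's data.

def drainage_density(total_stream_length_km: float, basin_area_km2: float) -> float:
    """Dd = total length of all streams / basin area (km per km^2)."""
    return total_stream_length_km / basin_area_km2

def bifurcation_ratios(stream_counts_by_order):
    """Rb for successive Strahler orders: N_u / N_(u+1), where
    stream_counts_by_order[0] is the number of first-order streams."""
    return [n / m for n, m in zip(stream_counts_by_order,
                                  stream_counts_by_order[1:])]
```

The review's point is that the *interpretation* of such values (e.g. what counts as "high" drainage density) lacks a standard classification, even though the formulas themselves are uncontroversial.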

  7. Integrating medical, assistive, and universally designed products and technologies: assistive technology device classification (ATDC).

    PubMed

    Bauer, Stephen; Elsaesser, Linda-Jeanne

    2012-09-01

    ISO26000:2010 International Guidance Standard on Organizational Social Responsibility requires that effective organizational performance recognize social responsibility, including the rights of persons with disabilities (PWD), engage stakeholders and contribute to sustainable development. Millennium Development Goals 2010 notes that the most vulnerable people require special attention, while the World Report on Disability 2011 identifies improved data collection and removal of barriers to rehabilitation as the means to empower PWD. The Assistive Technology Device Classification (ATDC), Assistive Technology Service Method (ATSM) and Matching Person and Technology models provide an evidence-based, standardized, internationally comparable framework to improve data collection and rehabilitation interventions. The ATDC and ATSM encompass and support universal design (UD) principles, and use the language and concepts of the International Classification of Functioning, Disability and Health (ICF). Use ATDC and ICF concepts to differentiate medical, assistive and UD products and technology; relate technology "types" to markets and costs; and support provision of UD products and technologies as sustainable and socially responsible behavior. Supply-side and demand-side incentives are suggested to foster private sector development and commercialization of UD products and technologies. Health and health-related professionals should be knowledgeable of UD principles and interventions.

  8. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    PubMed

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify the expression of fluorescent biomarker signal in the nucleus and cytoplasm of each lens epithelial cell in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms, and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparing the fluorescence signal within cells with the local background. The classification rule was then optimized against visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. The time consumed by the automatic algorithm and by visual classification was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%; however, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of fluorescent signal expression with an accuracy comparable to the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
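    A signal-versus-local-background decision of the kind described can be sketched as a mean-plus-k-standard-deviations rule; the factor k = 2 and the intensity values in the test are illustrative choices, not the optimized rule from the paper.

```python
# Hedged sketch of a labelled/not-labelled call against local background;
# k = 2 is an illustrative threshold, not the paper's optimized rule.
import statistics

def is_labelled(cell_signal: float, background_pixels, k: float = 2.0) -> bool:
    """Call a cell labelled when its fluorescence exceeds the local
    background mean by more than k sample standard deviations."""
    mu = statistics.mean(background_pixels)
    sd = statistics.stdev(background_pixels)
    return cell_signal > mu + k * sd
```

Using the *local* background (rather than a single global threshold) makes the rule robust to illumination and staining gradients across the section.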

  9. Classification of interstitial lung disease patterns with topological texture features

    NASA Astrophysics Data System (ADS)

    Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel

    2010-03-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung-kernel-reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest of both healthy and pathological (89) lung tissue was identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare accuracy distributions, and the significance thresholds were adjusted for multiple comparisons by Bonferroni correction. The best classification results were obtained with the MF features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for MF.euler (97.5% and 96.6% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced topological texture features can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
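    For context, the GLCM 'homogeneity' feature used as a baseline above can be sketched on toy data as follows; this uses a single horizontal one-pixel offset and is not the exact feature configuration of the study.

```python
# Minimal GLCM sketch: co-occurrence counts for a horizontal offset of
# one pixel, normalised to probabilities, plus the homogeneity feature.
from collections import Counter

def glcm(image):
    """Normalised grey-level co-occurrence matrix as {(i, j): p} for
    horizontally adjacent pixel pairs."""
    pairs = Counter()
    for row in image:
        for a, b in zip(row, row[1:]):
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    return {ij: c / total for ij, c in pairs.items()}

def homogeneity(P):
    """Sum of P(i, j) / (1 + |i - j|): 1.0 for perfectly uniform texture,
    smaller for textures dominated by dissimilar neighbouring grey levels."""
    return sum(p / (1.0 + abs(i - j)) for (i, j), p in P.items())
```

A uniform patch scores 1.0 and a checkerboard scores lower, which is exactly the contrast a honeycombing-versus-healthy classifier can exploit.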

  10. Sex estimation standards for medieval and contemporary Croats

    PubMed Central

    Bašić, Željana; Kružić, Ivana; Jerković, Ivan; Anđelinović, Deny; Anđelinović, Šimun

    2017-01-01

    Aim To develop discriminant functions for sex estimation on a medieval Croatian population and to test their application on a contemporary Croatian population. Methods From a total of 519 skeletons, we chose 84 excellently preserved adult skeletons free of antemortem and postmortem changes and took all standard measurements. Sex was estimated or determined using standard anthropological procedures and ancient DNA (amelogenin) analysis where the pelvis was insufficiently preserved or where morphological sex indicators were not consistent. We explored which measurements showed sexual dimorphism and used them to develop univariate and multivariate discriminant functions for sex estimation, including only those functions that reached an accuracy rate ≥80%. We tested the applicability of the developed functions on a modern Croatian sample (n = 37). Results Of the 69 standard skeletal measurements used in this study, 56 showed statistically significant sexual dimorphism (74.7%). We developed five univariate discriminant functions with classification rates of 80.6%-85.2% and seven multivariate discriminant functions with accuracy rates of 81.8%-93.0%. When tested on the modern population, the functions showed classification rates of 74.1%-100%, and ten of them reached the aimed accuracy rate. Females showed higher classification rates in the medieval population, whereas males were better classified in the modern population. Conclusion The developed discriminant functions are sufficiently accurate for reliable sex estimation in both the medieval Croatian population and modern Croatian samples and may be used in forensic settings. The methodological issues that emerged regarding the importance of considering external factors in the development and application of discriminant functions for sex estimation should be explored further. PMID:28613039
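    A univariate discriminant function of the kind described reduces to a sectioning point between the sex means of one measurement. The sketch below uses this standard construction with invented means; the study's actual functions and coefficients are not reproduced.

```python
# Hedged sketch of univariate discriminant sex estimation via a
# sectioning point; the means below are invented, not the study's values.

def sectioning_point(male_mean: float, female_mean: float) -> float:
    """Midpoint between the male and female sample means."""
    return (male_mean + female_mean) / 2.0

def estimate_sex(measurement: float, male_mean: float, female_mean: float) -> str:
    """Classify by which side of the sectioning point a measurement falls,
    respecting which sex has the larger mean for this measurement."""
    cut = sectioning_point(male_mean, female_mean)
    if male_mean > female_mean:
        return "male" if measurement > cut else "female"
    return "male" if measurement < cut else "female"
```

Population-specific means are exactly why such standards must be developed per population, as the abstract argues for medieval versus modern Croats.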

  11. Report to the Higher Education Policy Commission. West Virginia Higher Education Facilities Information System Statewide Institution Report.

    ERIC Educational Resources Information Center

    West Virginia Higher Education Policy Commission, 2004

    2004-01-01

    The West Virginia Higher Education Facilities Information System was formed to institute statewide standardization of space use and classification, to serve as a vehicle for statewide data acquisition, and to provide statistical data that contribute to detailed institutional planning analysis. The result thus far is the production…

  12. Risk Assessment Stability: A Revalidation Study of the Arizona Risk/Needs Assessment Instrument

    ERIC Educational Resources Information Center

    Schwalbe, Craig S.

    2009-01-01

    The actuarial method is the gold standard for risk assessment in child welfare, juvenile justice, and criminal justice. It produces risk classifications that are highly predictive and that may be robust to sampling error. This article reports a revalidation study of the Arizona Risk/Needs Assessment instrument, an actuarial instrument for juvenile…

  13. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting the characteristic signatures of such histograms. Two histogram matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Both histogram matching classifiers also consistently performed better than the classifier based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
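    One way to realise histogram matching classification is sketched below, hedged as an L1 (sum of absolute differences) score between normalised histograms rather than the specific matching measures evaluated in the paper; the class signatures in the test are invented.

```python
# Hedged sketch of histogram-based object classification: build a
# normalised histogram per image object, then assign the class whose
# reference histogram is closest. L1 distance is an illustrative choice.

def histogram(values, bins, lo, hi):
    """Normalised histogram of pixel values over [lo, hi)."""
    h = [0] * bins
    w = (hi - lo) / bins
    for v in values:
        h[min(int((v - lo) / w), bins - 1)] += 1
    n = len(values)
    return [c / n for c in h]

def histogram_distance(h1, h2):
    """L1 distance between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def classify(obj_hist, class_hists):
    """Nearest reference histogram wins."""
    return min(class_hists,
               key=lambda c: histogram_distance(obj_hist, class_hists[c]))
```

Unlike the nearest-neighbor-to-mean rule, this uses the full per-object distribution, so two classes with equal means but different spreads remain separable.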

  14. Proposal of a New Adverse Event Classification by the Society of Interventional Radiology Standards of Practice Committee.

    PubMed

    Khalilzadeh, Omid; Baerlocher, Mark O; Shyn, Paul B; Connolly, Bairbre L; Devane, A Michael; Morris, Christopher S; Cohen, Alan M; Midia, Mehran; Thornton, Raymond H; Gross, Kathleen; Caplin, Drew M; Aeron, Gunjan; Misra, Sanjay; Patel, Nilesh H; Walker, T Gregory; Martinez-Salazar, Gloria; Silberzweig, James E; Nikolic, Boris

    2017-10-01

    To develop a new adverse event (AE) classification for interventional radiology (IR) procedures and to evaluate its clinical, research, and educational value compared with the existing Society of Interventional Radiology (SIR) classification via an SIR member survey. A new AE classification was developed by members of the Standards of Practice Committee of the SIR. Subsequently, a survey was created by a group of 18 members from the SIR Standards of Practice Committee and Service Lines. Twelve clinical AE case scenarios were generated that encompassed a broad spectrum of IR procedures and potential AEs. Survey questions were designed to evaluate the following domains: educational and research value, accountability for intraprocedural challenges, consistency of AE reporting, unambiguity, and potential for incorporation into the existing quality-assurance framework. For each AE scenario, the survey participants were instructed to answer questions about both the proposed and the existing SIR classifications. SIR members were invited via online survey links, and 68 of the 140 members surveyed participated. Answers on the new and existing classifications were evaluated and compared statistically; the overall comparison between the two surveys was performed by generalized linear modeling. The proposed AE classification received superior evaluations in terms of consistency of reporting (P < .05) and potential for incorporation into the existing quality-assurance framework (P < .05). Respondents also gave a higher overall rating to the educational and research value of the new classification compared with the existing one (P < .05). This study proposed an AE classification system that outperformed the existing SIR classification in the studied domains. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.

  15. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    PubMed

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs of 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective for highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
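    The Youden-index cutoff step mentioned above is a standard construction and can be sketched directly; the scores and labels in the test are invented, and this is not the study's code.

```python
# Standard Youden-index threshold selection: pick the score cutoff that
# maximises J = sensitivity + specificity - 1 over candidate thresholds.

def youden_cutoff(scores, labels):
    """Return (best_threshold, best_J) for binary labels (1 = positive),
    treating score >= threshold as a positive call."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Applied to a network's frontal-versus-lateral output score, this turns a continuous softmax output into the binary decision the study reports.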

  16. Classification effects of real and imaginary movement selective attention tasks on a P300-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Salvaris, Mathew; Sepulveda, Francisco

    2010-10-01

    Brain-computer interfaces (BCIs) rely on various electroencephalography methodologies that allow the user to convey their desired control to the machine. Common approaches include the use of event-related potentials (ERPs) such as the P300 and modulation of the beta and mu rhythms. All of these methods have their benefits and drawbacks. In this paper, three different selective attention tasks were tested in conjunction with a P300-based protocol (i.e. the standard counting of target stimuli as well as the conduction of real and imaginary movements in sync with the target stimuli). The three tasks were performed by a total of 10 participants, with the majority (7 out of 10) of the participants having never before participated in imaginary movement BCI experiments. Channels and methods used were optimized for the P300 ERP and no sensory-motor rhythms were explicitly used. The classifier used was a simple Fisher's linear discriminant. Results were encouraging, showing that on average the imaginary movement achieved a P300 versus No-P300 classification accuracy of 84.53%. In comparison, mental counting, the standard selective attention task used in previous studies, achieved 78.9% and real movement 90.3%. Furthermore, multiple trial classification results were recorded and compared, with real movement reaching 99.5% accuracy after four trials (12.8 s), imaginary movement reaching 99.5% accuracy after five trials (16 s) and counting reaching 98.2% accuracy after ten trials (32 s).
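    Fisher's linear discriminant, the classifier used in this study, can be sketched for two classes of 2-D features as follows; the toy data in the test are invented and do not represent the EEG channel measurements.

```python
# Two-class Fisher linear discriminant on 2-D features: project onto
# w = S_w^{-1} (m1 - m0), where S_w is the pooled within-class scatter.

def mean(vecs):
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def fisher_weights(class0, class1):
    """Return the 2-D Fisher projection direction w (unnormalised)."""
    m0, m1 = mean(class0), mean(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]           # pooled within-class scatter
    for data, m in ((class0, m0), (class1, m1)):
        for v in data:
            d = [v[0] - m[0], v[1] - m[1]]
            s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

In the P300 setting the feature vectors are much higher dimensional, but the principle is the same: a single linear projection separating target from non-target epochs.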

  18. Investigation of computer-aided colonic crypt pattern analysis

    NASA Astrophysics Data System (ADS)

    Qi, Xin; Pan, Yinsheng; Sivak, Michael V., Jr.; Olowe, Kayode; Rollins, Andrew M.

    2007-02-01

    Colorectal cancer is the second leading cause of cancer-related death in the United States. Approximately 50% of these deaths could be prevented by earlier detection through screening. Magnification chromoendoscopy is a technique which utilizes tissue stains applied to the gastrointestinal mucosa and high-magnification endoscopy to better visualize and characterize lesions. Prior studies have shown that shapes of colonic crypts change with disease and show characteristic patterns. Current methods for assessing colonic crypt patterns are somewhat subjective and not standardized. Computerized algorithms could be used to standardize colonic crypt pattern assessment. We have imaged resected colonic mucosa in vitro (N = 70) using methylene blue dye and a surgical microscope to approximately simulate in vivo imaging with magnification chromoendoscopy. We have developed a method of computerized processing to analyze the crypt patterns in the images. The quantitative image analysis consists of three steps. First, the crypts within the region of interest of colonic tissue are semi-automatically segmented using watershed morphological processing. Second, crypt size and shape parameters are extracted from the segmented crypts. Third, each sample is assigned to a category according to the Kudo criteria. The computerized classification is validated by comparison with human classification using the Kudo classification criteria. The computerized colonic crypt pattern analysis algorithm will enable a study of in vivo magnification chromoendoscopy of colonic crypt pattern correlated with risk of colorectal cancer. This study will assess the feasibility of screening and surveillance of the colon using magnification chromoendoscopy.
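    The shape-parameter step of the pipeline above can be illustrated with the standard circularity measure; the category names and the 0.8 cutoff below are hypothetical stand-ins for the Kudo pit-pattern assignment, not the paper's rule.

```python
# Circularity = 4*pi*A / P^2: exactly 1.0 for a circle, lower for
# elongated shapes. The category rule is a hypothetical illustration.
import math

def circularity(area: float, perimeter: float) -> float:
    return 4.0 * math.pi * area / perimeter ** 2

def crypt_category(area: float, perimeter: float, round_cutoff: float = 0.8) -> str:
    """Toy stand-in for a pit-pattern call from one shape parameter."""
    if circularity(area, perimeter) >= round_cutoff:
        return "roundish"
    return "elongated"
```

Real Kudo classification uses several size and shape parameters per crypt, but each reduces to simple geometry of the segmented crypt outline, as here.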

  19. Automated color classification of urine dipstick image in urine examination

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.

    2018-03-01

    Urine examination using a urine dipstick has long been used to determine a person's health status; the economy and convenience of the dipstick are among the reasons it remains in use. In practice, urine dipsticks are generally read manually, by visually comparing the pads with a reference color chart, which leads to differences in perception when reading the results. In this research, the authors used a scanner to obtain the urine dipstick color image. A scanner can be one solution for reading urine dipstick results because the light it produces is consistent. A method is then required to match the colors on the urine dipstick with the test reference colors, a task so far performed manually. The authors propose Euclidean distance and Otsu thresholding, along with RGB color feature extraction, to match the colors on the urine dipstick with the standard reference colors of urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification against the standard reference colors is influenced by the scanner resolution used: the higher the scanner resolution, the higher the accuracy.
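    The Euclidean-distance matching step can be sketched as a nearest-reference-color lookup in RGB space; the reference chart values below are invented for illustration, not real dipstick chart colors.

```python
# Hedged sketch of Euclidean-distance colour matching; the reference
# RGB values are invented, not an actual dipstick chart.

def euclidean(c1, c2) -> float:
    """Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def match_color(pad_rgb, reference) -> str:
    """Return the reference label whose RGB is closest to the pad colour."""
    return min(reference, key=lambda label: euclidean(pad_rgb, reference[label]))

REFERENCE_GLUCOSE = {          # illustrative chart colours only
    "negative": (120, 200, 220),
    "trace":    (130, 210, 150),
    "high":     (150, 110, 60),
}
```

With consistent scanner illumination, a scanned pad's mean RGB can be compared against the chart directly, avoiding the perception differences of visual reading.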

  20. The Universal Decimal Classification: Some Factors Concerning Its Origins, Development, and Influence.

    ERIC Educational Resources Information Center

    McIlwaine, I. C.

    1997-01-01

    Discusses the history and development of the Universal Decimal Classification (UDC). Topics include the relationship with Dewey Decimal Classification; revision process; structure; facet analysis; lack of standard rules for application; application in automated systems; influence of UDC on classification development; links with thesauri; and use…

  1. Classification of Instructional Programs - 2000. Public Comment Draft. [Third Revision].

    ERIC Educational Resources Information Center

    Morgan, Robert L.; Hunt, E. Stephen

    This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, descriptions, and titles at the secondary, postsecondary, and adult education levels. This edition has also been adopted by Canada as its standard for major field of study classification. The volume includes the…

  2. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in three ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP); (2) conducting research experiments using a larger database of organophosphate nerve agents; and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) from international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results; in addition, the distributed training architecture is about 50 times faster than standard iterative training methods.

  3. A methodology for space-time classification of groundwater quality.

    PubMed

    Passarella, G; Caputo, M C

    2006-04-01

    Safeguarding groundwater from civil, agricultural, and industrial contamination is a matter of great interest in water resource management. In recent years, much legislation has been produced stating the importance of groundwater as a source for drinking water supplies, underlining its vulnerability, and defining the required quality standards. Thus, schematic tools able to characterise the quality and quantity of groundwater systems are of great interest in any territorial planning and/or water resource management activity. This paper proposes a groundwater quality classification method, applied to a real aquifer, starting from several studies published by the Italian National Hydrogeologic Catastrophe Defence Group (GNDCI). The methodology is based on the concentration values of several parameters used as indexes of the natural hydro-chemical condition of the water and of potential man-induced modifications of groundwater quality. The resulting maps, although representative of the quality, do not include any information on its evolution in time. In this paper, this "stationary" classification method has been improved by crossing the quality classes with three indexes of temporal behaviour over recent years. It was then applied to data from monitoring campaigns performed in spring and autumn from 1990 to 1996 in the Modena plain aquifer (central Italy). The results are reported in the form of a space-time classification table and maps.

  4. Metabolic Profiling and Classification of Propolis Samples from Southern Brazil: An NMR-Based Platform Coupled with Machine Learning.

    PubMed

    Maraschin, Marcelo; Somensi-Zeggio, Amélia; Oliveira, Simone K; Kuhnen, Shirley; Tomazzoli, Maíra M; Raguzzoni, Josiane C; Zeri, Ana C M; Carreira, Rafael; Correia, Sara; Costa, Christopher; Rocha, Miguel

    2016-01-22

    The chemical composition of propolis is affected by environmental factors and harvest season, making it difficult to standardize its extracts for medicinal usage. By detecting a typical chemical profile associated with propolis from a specific production region or season, certain types of propolis may be used to obtain a specific pharmacological activity. In this study, propolis from three agroecological regions (plain, plateau, and highlands) from southern Brazil, collected over the four seasons of 2010, were investigated through a novel NMR-based metabolomics data analysis workflow. Chemometrics and machine learning algorithms (PLS-DA and RF), including methods to estimate variable importance in classification, were used in this study. The machine learning and feature selection methods permitted construction of models for propolis sample classification with high accuracy (>75%, reaching ∼90% in the best case), discriminating samples better by collection season than by harvest region. PLS-DA and RF allowed the identification of biomarkers for sample discrimination, expanding the set of discriminating features and adding relevant information for the identification of the class-determining metabolites. The NMR-based metabolomics analytical platform, coupled to bioinformatic tools, allowed characterization and classification of Brazilian propolis samples regarding the metabolite signature of important compounds, i.e., chemical fingerprint, harvest seasons, and production regions.

  5. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
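
    The entropy-driven chunk elimination at the heart of E-RFE can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the entropy threshold (0.9), the chunk fractions, and the toy "microarray" are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy "microarray": 40 samples x 200 genes; genes 0-4 carry the class signal.
X = rng.normal(size=(40, 200))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0

genes = np.arange(200)
while genes.size > 5:
    w = SVC(kernel="linear").fit(X[:, genes], y).coef_.ravel()
    p = np.abs(w) / np.abs(w).sum()
    H = -(p * np.log(p + 1e-12)).sum() / np.log(p.size)  # normalized weight entropy
    # Flat weight distribution (high entropy): discard a big chunk of low-|w| genes.
    # Peaked distribution (low entropy): fall back towards one-at-a-time RFE.
    frac = 0.5 if H > 0.9 else 0.1
    chunk = min(max(1, int(genes.size * frac)), genes.size - 5)
    genes = genes[np.argsort(np.abs(w))[chunk:]]

print("surviving genes:", sorted(genes))
```

    In the abstract's scheme this ranking runs inside an internal K-fold cross-validation, itself wrapped in an external stratified resampling loop; the sketch shows only the entropy-accelerated elimination step.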

  6. 75 FR 51838 - Public Review of Draft Coastal and Marine Ecological Classification Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... DEPARTMENT OF THE INTERIOR Geological Survey Public Review of Draft Coastal and Marine Ecological... comments on draft Coastal and Marine Ecological Classification Standard. SUMMARY: The Federal Geographic Data Committee (FGDC) is conducting a public review of the draft Coastal and Marine Ecological...

  7. 46 CFR 8.250 - Acceptance of standards and functions delegated under existing regulations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... APPLICABLE TO THE PUBLIC VESSEL INSPECTION ALTERNATIVES Recognition of a Classification Society § 8.250 Acceptance of standards and functions delegated under existing regulations. (a) Classification society class... society has received authorization to conduct a related delegated function. (b) A recognized...

  8. 46 CFR 8.250 - Acceptance of standards and functions delegated under existing regulations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... APPLICABLE TO THE PUBLIC VESSEL INSPECTION ALTERNATIVES Recognition of a Classification Society § 8.250 Acceptance of standards and functions delegated under existing regulations. (a) Classification society class... society has received authorization to conduct a related delegated function. (b) A recognized...

  9. Interictal Epileptiform Discharges (IEDs) classification in EEG data of epilepsy patients

    NASA Astrophysics Data System (ADS)

    Puspita, J. W.; Soemarno, G.; Jaya, A. I.; Soewono, E.

    2017-12-01

    Interictal Epileptiform Discharges (IEDs), which consist of spike waves and sharp waves, in the human electroencephalogram (EEG) are characteristic signatures of epilepsy. Spike waves are characterized by a pointed peak with a duration of 20-70 ms, while sharp waves have a duration of 70-200 ms. The purpose of this study was to classify spike waves and sharp waves in EEG data of epilepsy patients using a Backpropagation Neural Network. The proposed method consists of two main stages: a feature extraction stage and a classification stage. In the feature extraction stage, we use the frequency, amplitude and statistical features, such as the mean, standard deviation, and median, of each wave. The frequency values of the IEDs are very sensitive to the selection of the wave baseline. The selected baseline must contain all data of the rising and falling slopes of the IEDs. Thus, we have a feature that is able to represent the type of IED appropriately. The results show that the proposed method achieves the best classification results, with a recognition rate of 93.75% for a binary sigmoid activation function and a learning rate of 0.1.
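
    The feature-extraction-plus-backpropagation pipeline described above can be sketched on synthetic waveforms. This is a toy illustration, not the authors' code: the sampling rate, waveform shapes, network size and epoch count are assumptions; only the feature list, the binary sigmoid activation and the 0.1 learning rate come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(wave, fs=256):
    # Assumed feature set: frequency (from wave duration), peak amplitude,
    # mean, standard deviation, and median.
    dur = wave.size / fs
    return np.array([1.0 / dur, np.abs(wave).max(),
                     wave.mean(), wave.std(), np.median(wave)])

def triangle(n):  # crude pointed waveform standing in for an IED
    half = n // 2
    return np.concatenate([np.linspace(0, 1, half), np.linspace(1, 0, n - half)])

# Synthetic spikes (~23-66 ms) vs. sharp waves (~70-195 ms) at fs = 256 Hz.
X, y = [], []
for _ in range(100):
    X.append(features(triangle(rng.integers(6, 18)))); y.append(0)   # spike
    X.append(features(triangle(rng.integers(18, 51)))); y.append(1)  # sharp
X = np.array(X); y = np.array(y, float)
X = (X - X.mean(0)) / (X.std(0) + 1e-9)

# One-hidden-layer backpropagation network, binary sigmoid, learning rate 0.1.
sig = lambda z: 1 / (1 + np.exp(-z))
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2).ravel()
    d_out = (out - y) * out * (1 - out)           # squared-error output delta
    d_h = (d_out[:, None] @ W2.T) * h * (1 - h)   # backpropagate to hidden layer
    W2 -= lr * h.T @ d_out[:, None] / len(y); b2 -= lr * d_out.mean()
    W1 -= lr * X.T @ d_h / len(y);            b1 -= lr * d_h.mean(0)

acc = ((out > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    The frequency feature is the discriminative one here, mirroring the abstract's point that frequency depends critically on a well-chosen baseline spanning the rising and falling slopes.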

  10. Shedding subspecies: The influence of genetics on reptile subspecies taxonomy.

    PubMed

    Torstrom, Shannon M; Pangle, Kevin L; Swanson, Bradley J

    2014-07-01

    The subspecies concept influences multiple aspects of biology and management. The 'molecular revolution' altered traditional methods (morphological traits) of subspecies classification by applying genetic analyses, resulting in alternative or contradictory classifications. We evaluated recent reptile literature for bias in the recommendations regarding subspecies status when genetic data were included. Reviewing the characteristics of each study, the genetic variables, the genetic distance values and the species concepts used, we found that subspecies were more likely to be elevated to species when genetic analysis was used. However, there was no predictive relationship between the variables used and the taxonomic recommendation. There was a significant difference between the median genetic distance values when researchers elevated or collapsed a subspecies. Our review found nine different species concepts used when recommending taxonomic change, and studies incorporating multiple species concepts were more likely to recommend a taxonomic change. Since the use of genetic techniques significantly alters reptile taxonomy, there is a need to establish a standard method of determining the species-subspecies boundary in order to effectively use the subspecies classification for research and conservation purposes. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Staging of chronic myeloid leukemia in the imatinib era: an evaluation of the World Health Organization proposal.

    PubMed

    Cortes, Jorge E; Talpaz, Moshe; O'Brien, Susan; Faderl, Stefan; Garcia-Manero, Guillermo; Ferrajoli, Alessandra; Verstovsek, Srdan; Rios, Mary B; Shan, Jenny; Kantarjian, Hagop M

    2006-03-15

    Several staging classification systems, all of which were designed in the preimatinib era, are used for chronic myeloid leukemia (CML). The World Health Organization (WHO) recently proposed a new classification system that has not been validated clinically. The authors investigated the significance of the WHO classification system and compared it with the classification systems used to date in imatinib trials ("standard definition") to determine its impact in establishing the outcome of patients after therapy with imatinib. In total, 809 patients who received imatinib for CML were classified into chronic phase (CP), accelerated phase (AP), and blast phase (BP) based on standard definitions and then were reclassified according to the new WHO classification system. Their outcomes with imatinib therapy were compared, and the value of individual components of these classification systems was determined. With the WHO classification, 78 patients (10%) were reclassified: 45 patients (6%) from CP to AP, 14 patients (2%) from AP to CP, and 19 patients (2%) from AP to BP. The rates of complete cytogenetic response for patients in CP, AP, and BP according to the standard definition were 72%, 45%, and 8%, respectively. After these patients were reclassified according to WHO criteria, the response rates were 77% (P = 0.07), 39% (P = 0.28), and 11% (P = 0.61), respectively. The 3-year survival rates were 91%, 65%, and 10%, respectively, according to the standard classification and 95% (P = 0.05), 63% (P = 0.76), and 16% (P = 0.18), respectively, according to the WHO classification. Patients who had a blast percentage of 20-29%, which is considered CML-BP according to the WHO classification, had a better response rate (21% vs. 8%; P = 0.11) and a significantly better 3-year survival rate (42% vs. 10%; P = 0.0001) than patients who had blasts ≥ 30%. Different classification systems had an impact on the outcome of patients, and some prognostic features had different prognostic implications in the imatinib era. The authors believe that a new, uniform staging system for CML is warranted, and they propose such a system. (c) 2006 American Cancer Society.

  12. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified, and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  13. Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk

    PubMed Central

    Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo

    2011-01-01

    Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using the standard two-sample Kolmogorov-Smirnov test (KS-test). The results of this statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performance with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, breathing frequency emerged as a particularly relevant feature in the HRV analysis. PMID:21386966
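
    The KS-test screening step that feeds the classifiers can be sketched as follows. The data here are synthetic stand-ins for the 52 HRV features, and the p < 0.01 cutoff is an assumed threshold, not one stated in the abstract.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Toy stand-in for the 52 HRV features: 45 healthy vs. 45 at-risk subjects,
# with only the first 6 features actually differing between groups.
healthy = rng.normal(size=(45, 52))
at_risk = rng.normal(size=(45, 52))
at_risk[:, :6] += 1.5

# Two-sample KS test per feature; keep features below an assumed p < 0.01 cutoff.
pvals = np.array([ks_2samp(healthy[:, j], at_risk[:, j]).pvalue for j in range(52)])
selected = np.flatnonzero(pvals < 0.01)

# Feed the surviving features to one of the classifiers (here an SVM).
X = np.vstack([healthy, at_risk])[:, selected]
y = np.array([0] * 45 + [1] * 45)
clf = SVC().fit(X, y)
print("selected:", selected, "training accuracy:", clf.score(X, y))
```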

  14. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    PubMed Central

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses of less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring the seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error, such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  15. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆

    PubMed Central

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2014-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
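
    The core idea of treating each speaker as a separate query and training a ranking SVM can be sketched with the standard pairwise reduction. Everything below (the feature layout, speaker expressivity biases, and the 0.1 score margin) is an illustrative assumption, not the authors' setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
# Toy data: 4 speakers ("queries"), each with 20 utterances scored for one
# emotion. Feature 0 carries the emotion signal plus a speaker-specific bias.
X, score, spk = [], [], []
for s in range(4):
    bias = rng.normal(0, 3)  # speaker expressivity offset
    for _ in range(20):
        r = rng.uniform(0, 1)  # latent emotion intensity
        X.append([bias + 2 * r + rng.normal(0, 0.1), rng.normal()])
        score.append(r); spk.append(s)
X, score, spk = map(np.array, (X, score, spk))

# Pairwise transform: within each speaker, difference vectors labeled by which
# utterance has the higher emotion score (the usual RankSVM reduction).
P, t = [], []
for s in range(4):
    idx = np.flatnonzero(spk == s)
    for i in idx:
        for j in idx:
            if score[i] > score[j] + 0.1:
                P.append(X[i] - X[j]); t.append(1)
                P.append(X[j] - X[i]); t.append(-1)
rank = LinearSVC(C=1.0, max_iter=10000).fit(np.array(P), np.array(t))

# Because differences are taken within a speaker, the per-speaker bias cancels
# and the learned direction tracks the emotion feature.
print("learned weights:", rank.coef_.ravel())
```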

  16. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆

    PubMed

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2015-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion.

  17. Classifying BCI signals from novice users with extreme learning machine

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.

    2017-07-01

    A brain computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, it is usual to test algorithms with standard BCI signals from expert users or from repositories available on the Internet. In this work, an extreme learning machine (ELM) has been tested with signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.
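
    A minimal ELM of the kind tested here: a random, untrained hidden layer followed by closed-form least-squares output weights. The toy "EEG feature" data and the network size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy two-class problem standing in for extracted EEG features.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Extreme learning machine: random hidden layer, closed-form output weights.
L = 50                                    # hidden neurons
W = rng.normal(size=(10, L)); b = rng.normal(size=L)
H = np.tanh(X @ W + b)                    # random feature map (never trained)
beta = np.linalg.pinv(H) @ y              # least-squares output weights

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```

    The absence of iterative weight tuning is what makes ELM training fast, which is the practical appeal when screening classifiers for novice-user signals.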

  18. Enterprise Standard Industrial Classification Manual. 1974.

    ERIC Educational Resources Information Center

    Executive Office of the President, Washington, DC. Statistical Policy Div.

    This classification is presented to provide a standard for use with statistics about enterprises (i.e., companies, rather than their individual establishments) by kind of economic activity. The enterprise unit consists of all establishments under common direct or indirect ownership. It is defined to include all entities, including subsidiaries,…

  19. Validity: Applying Current Concepts and Standards to Gynecologic Surgery Performance Assessments

    ERIC Educational Resources Information Center

    LeClaire, Edgar L.; Nihira, Mikio A.; Hardré, Patricia L.

    2015-01-01

    Validity is critical for meaningful assessment of surgical competency. According to the Standards for Educational and Psychological Testing, validation involves the integration of data from well-defined classifications of evidence. In the authoritative framework, data from all classifications support construct validity claims. The two aims of this…

  20. A standard lexicon for biodiversity conservation: unified classifications of threats and actions.

    PubMed

    Salafsky, Nick; Salzer, Daniel; Stattersfield, Alison J; Hilton-Taylor, Craig; Neugarten, Rachel; Butchart, Stuart H M; Collen, Ben; Cox, Neil; Master, Lawrence L; O'Connor, Sheila; Wilkie, David

    2008-08-01

    An essential foundation of any science is a standard lexicon. Any given conservation project can be described in terms of the biodiversity targets, direct threats, contributing factors at the project site, and the conservation actions that the project team is employing to change the situation. These common elements can be linked in a causal chain, which represents a theory of change about how the conservation actions are intended to bring about desired project outcomes. If project teams want to describe and share their work and learn from one another, they need a standard and precise lexicon to specifically describe each node along this chain. To date, there have been several independent efforts to develop standard classifications for the direct threats that affect biodiversity and the conservation actions required to counteract these threats. Recognizing that it is far more effective to have only one accepted global scheme, we merged these separate efforts into unified classifications of threats and actions, which we present here. Each classification is a hierarchical listing of terms and associated definitions. The classifications are comprehensive and exclusive at the upper levels of the hierarchy, expandable at the lower levels, and simple, consistent, and scalable at all levels. We tested these classifications by applying them post hoc to 1191 threatened bird species and 737 conservation projects. Almost all threats and actions could be assigned to the new classification systems, save for some cases lacking detailed information. Furthermore, the new classification systems provided an improved way of analyzing and comparing information across projects when compared with earlier systems. 
We believe that widespread adoption of these classifications will help practitioners more systematically identify threats and appropriate actions, help managers more efficiently set priorities and allocate resources, and, most important, facilitate cross-project learning and the development of a systematic science of conservation.

  1. Laser-induced breakdown spectroscopy-based investigation and classification of pharmaceutical tablets using multivariate chemometric analysis

    PubMed Central

    Myakalwar, Ashwin Kumar; Sreedhar, S.; Barman, Ishan; Dingari, Narahara Chari; Rao, S. Venugopal; Kiran, P. Prem; Tewari, Surya P.; Kumar, G. Manoj

    2012-01-01

    We report the effectiveness of laser-induced breakdown spectroscopy (LIBS) in probing the content of pharmaceutical tablets and also investigate its feasibility for routine classification. This method is particularly beneficial in applications where its exquisite chemical specificity and suitability for remote and on-site characterization significantly improve the speed and accuracy of quality control and assurance processes. Our experiments reveal that, in addition to the presence of carbon, hydrogen, nitrogen and oxygen, which can be primarily attributed to the active pharmaceutical ingredients, specific inorganic atoms were also present in all the tablets. Initial attempts at classification by a ratiometric approach using oxygen to nitrogen compositional values yielded an optimal value (at 746.83 nm) with the least relative standard deviation but nevertheless failed to provide an acceptable classification. To overcome this bottleneck in the detection process, two chemometric algorithms, i.e., principal component analysis (PCA) and soft independent modeling of class analogy (SIMCA), were implemented to exploit the multivariate nature of the LIBS data, demonstrating that LIBS has the potential to differentiate and discriminate among pharmaceutical tablets. We report excellent prospective classification accuracy using supervised classification via the SIMCA algorithm, demonstrating its potential for future applications in process analytical technology, especially for fast on-line process control monitoring applications in the pharmaceutical industry. PMID:22099648
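
    The SIMCA step, i.e. fitting one PCA model per class and classifying by the residual left after projection onto each class subspace, can be sketched as follows. The synthetic "spectra" and the choice of two principal components are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy stand-in for LIBS spectra: two tablet classes with different line patterns.
def spectra(center, n):
    base = np.zeros(100); base[center] = 1.0; base[center + 10] = 0.5
    return base + 0.05 * rng.normal(size=(n, 100))

train = {"A": spectra(20, 30), "B": spectra(60, 30)}

# SIMCA: a separate PCA model per class; new spectra are scored by the residual
# after projecting onto each class subspace and assigned to the closest model.
models = {}
for c, Xc in train.items():
    mu = Xc.mean(0)
    _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
    models[c] = (mu, Vt[:2])              # keep 2 principal components per class

def classify(x):
    def residual(mu, V):
        d = x - mu
        return np.linalg.norm(d - V.T @ (V @ d))
    return min(models, key=lambda c: residual(*models[c]))

print("classified as:", classify(spectra(60, 1)[0]))
```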

  2. A systematic review of the Robson classification for caesarean section: what works, doesn't work and how to improve it.

    PubMed

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A Metin; Torloni, Maria Regina

    2014-01-01

    Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized, internationally accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. Robson's 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular in many countries in recent years. We conducted a systematic review to synthesize the experience of users on the implementation of this classification and proposed adaptations. Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of Robson's classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and lack of definition or consensus on core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions of each of the 10 groups have been proposed. Group 5 (women with previous CS) received the largest number of suggestions. The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could help facilities and countries as they work towards its implementation.

  3. Classification of ECG beats using deep belief network and active learning.

    PubMed

    G, Sayantan; T, Kien P; V, Kadambari K

    2018-04-12

    A new semi-supervised approach based on deep learning and active learning for classification of electrocardiogram (ECG) signals is proposed. The objective of the proposed work is to model a scientific method for classification of cardiac irregularities using electrocardiogram beats. The model follows the Association for the Advancement of Medical Instrumentation (AAMI) standards and consists of three phases. In phase I, a feature representation of the ECG is learnt using a Gaussian-Bernoulli deep belief network, followed by linear support vector machine (SVM) training in the consecutive phase. It yields three deep models based on the AAMI-defined classes, namely N, V, S, and F. In the last phase, a query generator is introduced to interact with the expert to label a few beats to improve accuracy and sensitivity. The proposed approach shows significant improvement in accuracy with minimal queries posed to the expert and fast online training, as tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database (SVDB). With 100 queries labeled by the expert in phase III, the method achieves an accuracy of 99.5% in "S" versus all classifications (SVEB) and 99.4% in "V" versus all classifications (VEB) on the MIT-BIH Arrhythmia Database. Similarly, accuracies of 97.5% for SVEB and 98.6% for VEB are achieved on the SVDB. Graphical abstract: a deep belief network augmented by active learning for efficient prediction of arrhythmia.

  4. Rock classification based on resistivity patterns in electrical borehole wall images

    NASA Astrophysics Data System (ADS)

    Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph

    2007-06-01

    Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. A supervised classification method is used to assign characteristic texture features to different rock classes and to assess the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassifications for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by applying the wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification, based on Haralick features and wavelet transformation, improved classification accuracy to 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
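
    The co-occurrence-based Haralick features named above (contrast, energy, entropy, homogeneity) can be computed from a grey-level co-occurrence matrix. Below is a minimal sketch for a single horizontal pixel offset, with an assumed quantization to 8 grey levels; real texture analysis typically averages several offsets and directions.

```python
import numpy as np

def glcm(img, levels=8):
    """Grey-level co-occurrence matrix for horizontal pixel pairs, offset (0, 1)."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    M = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        M[a, b] += 1
    return M / M.sum()

def haralick(P):
    i, j = np.indices(P.shape)
    return {
        "contrast":    ((i - j) ** 2 * P).sum(),
        "energy":      (P ** 2).sum(),
        "entropy":     -(P[P > 0] * np.log(P[P > 0])).sum(),
        "homogeneity": (P / (1 + np.abs(i - j))).sum(),
    }

rng = np.random.default_rng(6)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))  # smooth gradient texture
noisy = rng.uniform(size=(32, 32))                # rough, "vesicular-like" texture
print("smooth:", haralick(glcm(smooth)))
print("noisy: ", haralick(glcm(noisy)))
```

    The smooth texture yields low contrast and high homogeneity, while the rough texture does the opposite, which is exactly the kind of separation the supervised classifier exploits.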

  5. 7 CFR 28.116 - Amounts of fees for classification; exemption.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... not applicable to review of classification if made on the same sample as the original class or... 7 Agriculture 2 2011-01-01 2011-01-01 false Amounts of fees for classification; exemption. 28.116... Standards Act Fees and Costs § 28.116 Amounts of fees for classification; exemption. (a) For the...

  6. 7 CFR 28.116 - Amounts of fees for classification; exemption.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... not applicable to review of classification if made on the same sample as the original class or... 7 Agriculture 2 2010-01-01 2010-01-01 false Amounts of fees for classification; exemption. 28.116... Standards Act Fees and Costs § 28.116 Amounts of fees for classification; exemption. (a) For the...

  7. The Power of Neuroimaging Biomarkers for Screening Frontotemporal Dementia

    PubMed Central

    McMillan, Corey T.; Avants, Brian B.; Cook, Philip; Ungar, Lyle; Trojanowski, John Q.; Grossman, Murray

    2014-01-01

    Frontotemporal dementia (FTD) is a clinically and pathologically heterogeneous neurodegenerative disease that can result from either frontotemporal lobar degeneration (FTLD) or Alzheimer’s disease (AD) pathology. It is critical to establish statistically powerful biomarkers that can achieve substantial cost-savings and increase feasibility of clinical trials. We assessed three broad categories of neuroimaging methods to screen underlying FTLD and AD pathology in a clinical FTD series: global measures (e.g., ventricular volume), anatomical volumes of interest (VOIs) (e.g., hippocampus) using a standard atlas, and data-driven VOIs using Eigenanatomy. We evaluated clinical FTD patients (N=93) with cerebrospinal fluid, gray matter (GM) MRI, and diffusion tensor imaging (DTI) to assess whether they had underlying FTLD or AD pathology. Linear regression was performed to identify the optimal VOIs for each method in a training dataset and then we evaluated classification sensitivity and specificity in an independent test cohort. Power was evaluated by calculating minimum sample sizes (mSS) required in the test classification analyses for each model. The data-driven VOI analysis using a multimodal combination of GM MRI and DTI achieved the greatest classification accuracy (89% SENSITIVE; 89% SPECIFIC) and required a lower minimum sample size (N=26) relative to anatomical VOI and global measures. We conclude that a data-driven VOI approach employing Eigenanatomy provides more accurate classification, benefits from increased statistical power in unseen datasets, and therefore provides a robust method for screening underlying pathology in FTD patients for entry into clinical trials. PMID:24687814

  8. Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images

    NASA Astrophysics Data System (ADS)

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is a promising new technique for early detection of demineralization which can classify healthy and pathological dental tissues. However, due to non-ideal illumination of the tooth surface, the hyperspectral images can exhibit specular reflections, in particular around the edges and ridges of the teeth. These reflections significantly degrade the performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections but, owing to its complexity and other imaging limitations, is not always practical. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental tissues, which significantly improves classification accuracy in the presence of specular reflections. The method was evaluated on five extracted human teeth with a corresponding gold standard for six different healthy and pathological hard dental tissues: enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues, and classification was performed by multiple discriminant analysis. Based on the obtained results, we believe the proposed method is an effective alternative to complex cross-polarized imaging setups.

  9. Neural net applied to anthropological material: a methodical study on the human nasal skeleton.

    PubMed

    Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich

    2005-07-01

    A new information-processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to identify different types of nasal skeletons. A neural net with 15 × 15 nodes was trained with 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification as a two-dimensional map in which different types of noses are locally separated. Rare and frequent types can be distinguished after one pass of the complete collection through the net. Descriptive statistics, hierarchical cluster analysis, and discriminant analysis were applied to the same data set; these parallel applications allowed comparison of the new approach with the more traditional ones. In general the classification by the neural net corresponds with the cluster and discriminant analyses. However, it goes beyond these classifications because it can differentiate the types along multi-dimensional dependencies. Furthermore, places in the map remain blank for intermediate forms, which may be expected theoretically but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The resulting classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.

  10. Associations among hydrologic classifications and fish traits to support environmental flow standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A.; Bevelhimer, Mark S.; Frimpong, Emmanuel A.

    2014-01-01

    Classification systems are valuable to ecological management in that they organize information into consolidated units, thereby providing efficient means to achieve conservation objectives. Of the many ways classifications benefit management, hypothesis generation has been discussed as the most important. However, in order to provide templates for developing and testing ecologically relevant hypotheses, classifications created using environmental variables must be linked to ecological patterns. Herein, we develop associations between a recent US hydrologic classification and fish traits in order to form a template for generating flow ecology hypotheses and supporting environmental flow standard development. Tradeoffs in adaptive strategies for fish were observed across a spectrum from stable, perennial flow to unstable, intermittent flow. In accordance with theory, periodic strategists were associated with stable, predictable flow, whereas opportunistic strategists were more affiliated with intermittent, variable flows. We developed linkages between the uniqueness of hydrologic character and ecological distinction among classes, which may translate into predictions between losses in hydrologic uniqueness and ecological community response. Comparisons of classification strength between hydrologic classifications and other frameworks suggested that spatially contiguous classifications with higher regionalization will tend to explain more variation in ecological patterns. Despite explaining less ecological variation than other frameworks, we contend that hydrologic classifications are still useful because they provide a conceptual linkage between hydrologic variation and ecological communities to support flow ecology relationships. Mechanistic associations among fish traits and hydrologic classes support the presumption that environmental flow standards should be developed uniquely for stream classes and ecological communities.

  11. SVM Classifier - a comprehensive java interface for support vector machine classification of microarray data.

    PubMed

    Pirooznia, Mehdi; Deng, Youping

    2006-12-12

    Graphical user interface (GUI) software promotes novelty by allowing users to extend the functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of Support Vector Machine. We implemented the java interface using standard swing libraries. We used a sample data from a breast cancer study for testing classification accuracy. We achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with RBF kernel of SVM. We have developed a java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.
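
    The role of the radial basis (RBF) kernel reported above can be illustrated without the LIBSVM machinery. The sketch below is not the SVM Classifier's code: it trains a kernel perceptron (a different, much simpler kernel method, used here only because it fits in a few lines) with an RBF kernel on a toy XOR problem that no linear classifier can separate.

```python
import math

def rbf(x, z, gamma=1.0):
    # Radial basis kernel: exp(-gamma * ||x - z||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train_kernel_perceptron(X, y, gamma=1.0, epochs=20):
    # Dual-form perceptron: alpha[i] counts the mistakes made on sample i.
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * rbf(xj, xi, gamma)
                    for a, yj, xj in zip(alpha, y, X))
            if yi * f <= 0:          # misclassified -> strengthen this sample
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x, gamma=1.0):
    f = sum(a * yj * rbf(xj, x, gamma) for a, yj, xj in zip(alpha, y, X))
    return 1 if f >= 0 else -1

# XOR-style toy data: linearly inseparable, but separable with an RBF kernel.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]
alpha = train_kernel_perceptron(X, y)
print([predict(alpha, X, y, x) for x in X])  # -> [-1, 1, 1, -1]
```

    As in the study's finding, the nonlinearity comes entirely from the kernel; swapping `rbf` for a plain dot product would leave the XOR data unlearnable.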

  12. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines.

    PubMed

    Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-07-30

    Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique and a wide range of time and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88 respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92 respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
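
    The decision-tree idea behind the DSVM can be sketched as a cascade of binary decisions. In the paper each node is an SVM and the split order is learned by hierarchical clustering; the toy tree below is a made-up illustration with fixed feature thresholds standing in for those classifiers.

```python
# Hypothetical dendrogram-style cascade in the spirit of the DSVM: each
# node applies one binary decision until a leaf stage label is reached.
# The real split order and node classifiers are learned from data.
def cascade_classify(features, node):
    while isinstance(node, dict):
        node = node["left"] if node["test"](features) else node["right"]
    return node

# Illustrative (made-up) tree: high muscle tone (EMG) -> wake; otherwise
# high eye-movement activity (EOG) -> REM; otherwise NREM.
toy_tree = {
    "test": lambda f: f["emg"] > 0.5,
    "left": "wake",
    "right": {
        "test": lambda f: f["eog"] > 0.5,
        "left": "REM",
        "right": "NREM",
    },
}

print(cascade_classify({"emg": 0.9, "eog": 0.1}, toy_tree))  # -> wake
print(cascade_classify({"emg": 0.2, "eog": 0.8}, toy_tree))  # -> REM
```

    Compared with one-against-all schemes, a cascade lets each node separate only the classes that remain after earlier splits, which is the property the dendrogram construction exploits.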

  13. Satellite inventory of Minnesota forest resources

    NASA Technical Reports Server (NTRS)

    Bauer, Marvin E.; Burk, Thomas E.; Ek, Alan R.; Coppin, Pol R.; Lime, Stephen D.; Walsh, Terese A.; Walters, David K.; Befort, William; Heinzen, David F.

    1993-01-01

    The methods and results of using Landsat Thematic Mapper (TM) data to classify and estimate the acreage of forest covertypes in northeastern Minnesota are described. Portions of six TM scenes covering five counties with a total area of 14,679 square miles were classified into six forest and five nonforest classes. The approach involved the integration of cluster sampling, image processing, and estimation. Using cluster sampling, 343 plots, each 88 acres in size, were photo interpreted and field mapped as a source of reference data for classifier training and calibration of the TM data classifications. Classification accuracies of up to 75 percent were achieved; most misclassification was between similar or related classes. An inverse method of calibration, based on the error rates obtained from the classifications of the cluster plots, was used to adjust the classification class proportions for classification errors. The resulting area estimates for total forest land in the five-county area were within 3 percent of the estimate made independently by the USDA Forest Service. Area estimates for conifer and hardwood forest types were within 0.8 and 6.0 percent respectively, of the Forest Service estimates. A trial of a second method of estimating the same classes as the Forest Service resulted in standard errors of 0.002 to 0.015. A study of the use of multidate TM data for change detection showed that forest canopy depletion, canopy increment, and no change could be identified with greater than 90 percent accuracy. The project results have been the basis for the Minnesota Department of Natural Resources and the Forest Service to define and begin to implement an annual system of forest inventory which utilizes Landsat TM data to detect changes in forest cover.
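
    The inverse calibration step can be illustrated with a two-class example. If M[i][j] is the probability that a plot of true class j is classified as i (estimated from the reference plots), then the observed map proportions satisfy p_map = M · p_true, so the calibrated proportions are p_true = M⁻¹ · p_map. The numbers below are made up, not the Minnesota figures.

```python
# Inverse calibration of map class proportions (illustrative numbers).
# M[i][j] = P(classified as i | true class j), from reference plots.
M = [[0.9, 0.2],      # 90% of forest correct; 20% of nonforest called forest
     [0.1, 0.8]]
p_map = [0.55, 0.45]  # class proportions read off the classified map

# 2x2 matrix inverse, then p_true = inverse(M) @ p_map.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
M_inv = [[ M[1][1] / det, -M[0][1] / det],
         [-M[1][0] / det,  M[0][0] / det]]
p_true = [M_inv[r][0] * p_map[0] + M_inv[r][1] * p_map[1] for r in range(2)]
print(p_true)  # -> approximately [0.5, 0.5]: the classification bias is removed
```

    The raw map overstates forest (0.55) because misclassification is asymmetric; calibration recovers the unbiased proportions while still summing to one.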

  14. [Principles and Methods for Formulating the National Standard "Regulations of Acupuncture-needle Manipulating Techniques"].

    PubMed

    Gang, Wei-juan; Wang, Xin; Wang, Fang; Dong, Guo-feng; Wu, Xiao-dong

    2015-08-01

    The national standard "Regulations of Acupuncture-needle Manipulating Techniques" is one of the national criteria of acupuncturology, of which a total of 22 items have already been established. In the process of its formulation, a series of common and specific problems were encountered. In the present paper, the authors discuss these problems from three aspects: principles for formulation, methods for formulating the criteria, and considerations about some remaining problems. The formulating principles include the selection and regulation of principles for technique classification and of technique-related key factors. The main methods for formulating the criteria are 1) taking the literature as the theoretical foundation, 2) taking clinical practice as the supporting evidence, and 3) refining the proposed suggestions and conclusions through peer review.

  15. Detection of eardrum abnormalities using ensemble deep learning approaches

    NASA Astrophysics Data System (ADS)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.

    2018-02-01

    In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network that takes advantage of auto-encoders, using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were 84.4% (±12.1%) and 82.6% (±11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification. The proposed ensemble method allows robust classification because it has high accuracy (84.4%) with the lowest standard deviation (±10.3%).
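
    Because only about a third of the two networks' errors overlap, combining them can help. The abstract does not state the exact fusion rule, so the sketch below shows a generic soft-voting ensemble (averaging the two networks' "abnormal" probabilities); the probability values are made up.

```python
# Generic soft-voting ensemble sketch; the study's exact fusion rule is
# not given in the abstract, and the probabilities below are made up.
def ensemble_predict(p_abnormal_net1, p_abnormal_net2, threshold=0.5):
    p = (p_abnormal_net1 + p_abnormal_net2) / 2.0   # average the two outputs
    return "abnormal" if p >= threshold else "normal"

# Network 1 is unsure (0.45); Network 2's confidence decides the call.
print(ensemble_predict(0.45, 0.90))  # -> abnormal
print(ensemble_predict(0.45, 0.10))  # -> normal
```

    Averaging reduces the variance of the decision when the two models' errors are weakly correlated, which is consistent with the lower standard deviation the ensemble reports.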

  16. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  17. An evidence-based diagnostic classification system for low back pain

    PubMed Central

    Vining, Robert; Potocki, Eric; Seidman, Michael; Morgenthal, A. Paige

    2013-01-01

    Introduction: While clinicians generally accept that musculoskeletal low back pain (LBP) can arise from specific tissues, it remains difficult to confirm specific sources. Methods: Based on evidence supported by diagnostic utility studies, doctors of chiropractic functioning as members of a research clinic created a diagnostic classification system, with a corresponding exam and checklist, based on strength of evidence and in-office efficiency. Results: The diagnostic classification system contains one screening category, two pain categories (nociceptive and neuropathic), one functional evaluation category, and one category for unknown or poorly defined diagnoses. The nociceptive and neuropathic pain categories are each divided into four subcategories. Conclusion: This article describes and discusses the strength of evidence surrounding diagnostic categories for an in-office clinical exam and checklist tool for LBP diagnosis. The use of a standardized tool for diagnosing low back pain in clinical and research settings is encouraged. PMID:23997245

  18. [Biogeography: geography or biology?].

    PubMed

    Kafanov, A I

    2009-01-01

    General biogeography is an interdisciplinary science, which combines geographic and biological aspects constituting two distinct research fields: biological geography and geographic biology. These fields differ in the nature of their objects of study, employ different methods and represent Earth sciences and biological sciences, respectively. It is suggested therefore that the classification codes for research fields and the state professional education standard should be revised.

  19. Weighted Markov chains for forecasting and analysis of the incidence of infectious diseases in Jiangsu Province, China

    PubMed Central

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-01-01

    This paper first applies the sequential cluster method to establish a classification standard for infectious disease incidence states, given the many uncertainties in the course of incidence. The paper then presents a weighted Markov chain, which is used to predict the future incidence state. Because infectious disease incidence is a dependent stochastic variable, the method takes the standardized autocorrelation coefficients as weights. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. The method is successfully validated on existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632
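
    The weighted-Markov-chain idea can be sketched in a few lines. This is not the paper's exact formulation: it shows the common scheme in which the normalised absolute autocorrelation coefficients at lags 1..m weight the lag-k transition probabilities, applied to a made-up two-state incidence series (0 = low, 1 = high).

```python
# Made-up incidence-state series for illustration (0 = low, 1 = high).
series = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]

def transition_matrix(xs, n_states, lag):
    # Empirical lag-k transition probabilities, row-normalised.
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(xs, xs[lag:]):
        counts[a][b] += 1
    rows = []
    for row in counts:
        s = sum(row)
        rows.append([c / s for c in row] if s else [1.0 / n_states] * n_states)
    return rows

def autocorr(xs, lag):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(len(xs) - lag))
    return cov / var

def predict_next(xs, n_states=2, max_lag=3):
    # Weight each lag by its normalised |autocorrelation|, then blend the
    # transition rows selected by the states observed 1..max_lag steps back.
    w = [abs(autocorr(xs, k)) for k in range(1, max_lag + 1)]
    w = [x / sum(w) for x in w]
    probs = [0.0] * n_states
    for k, wk in enumerate(w, start=1):
        row = transition_matrix(xs, n_states, k)[xs[-k]]
        probs = [p + wk * r for p, r in zip(probs, row)]
    return probs

probs = predict_next(series)
print(probs)  # a proper distribution; here the high-incidence state dominates
```

    The weighting lets recent states with stronger serial correlation contribute more to the forecast than a plain first-order chain would allow.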

  20. Weighted Markov chains for forecasting and analysis of the incidence of infectious diseases in Jiangsu Province, China.

    PubMed

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-05-01

    This paper first applies the sequential cluster method to establish a classification standard for infectious disease incidence states, given the many uncertainties in the course of incidence. The paper then presents a weighted Markov chain, which is used to predict the future incidence state. Because infectious disease incidence is a dependent stochastic variable, the method takes the standardized autocorrelation coefficients as weights. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. The method is successfully validated on existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.

  1. Exploiting salient semantic analysis for information retrieval

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui

    2016-11-01

    Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are largely unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. Experimental results on several standard Text REtrieval Conference (TREC) collections show that the proposed models consistently outperform existing Wikipedia-based retrieval methods.

  2. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol

    PubMed Central

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O’Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-01-01

    Introduction Provision of palliative care is inequitable, with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Methods and analysis Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. Ethics and dissemination The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. 
The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and public. Trial registration number ISRCTN90752212. PMID:29550781

  3. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images

    PubMed Central

    Korbar, Bruno; Olofson, Andrea M.; Miraflor, Allen P.; Nicka, Catherine M.; Suriawinata, Matthew A.; Torresani, Lorenzo; Suriawinata, Arief A.; Hassanpour, Saeed

    2017-01-01

    Context: Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. Aims: We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Setting and Design: Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Subjects and Methods: Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. Statistical Analysis: We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Results: Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%–95.9%). Conclusions: Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations. PMID:28828201

  4. Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree

    PubMed Central

    Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid

    2016-01-01

    Introduction: Differential diagnosis of Erythmato-Squamous Diseases (ESD) is a major challenge in the field of dermatology, with the diseases placed into six different classes. Data mining is the process of detecting hidden patterns; in the case of ESD, it can help us predict the disease, and different algorithms have been developed for this purpose. Objective: We aimed to use a Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. Methods: We followed the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology, using the dermatology data set from the UCI machine learning repository. The Clementine 12.0 software (IBM) was used for modelling. To evaluate the model, we calculated its accuracy, sensitivity and specificity. Results: The proposed model predicted ESD with an accuracy of 94.84% (standard deviation: 24.42). Conclusions: The results indicate that this classifier can be useful, but combining machine learning methods would likely be even more useful for predicting ESD. PMID:28077889
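
    The CART principle used above, recursive binary splitting chosen by Gini impurity, can be written from scratch in a few lines. The sketch below is purely illustrative: the features, labels and data are made up, not the UCI dermatology set the study used.

```python
# Minimal CART-style classifier: recursive binary splits chosen to
# minimise weighted Gini impurity. Toy data only.

def gini(labels):
    # Gini impurity: 1 - sum of squared class proportions.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    best = None                 # (weighted impurity, feature, threshold)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build(X, y, depth=0, max_depth=3):
    if len(set(y)) == 1 or depth == max_depth or best_split(X, y) is None:
        return max(set(y), key=y.count)       # leaf: majority class
    _, f, t = best_split(X, y)
    L = [(r, yi) for r, yi in zip(X, y) if r[f] <= t]
    R = [(r, yi) for r, yi in zip(X, y) if r[f] > t]
    return (f, t,
            build([r for r, _ in L], [yi for _, yi in L], depth + 1, max_depth),
            build([r for r, _ in R], [yi for _, yi in R], depth + 1, max_depth))

def classify(node, row):
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if row[f] <= t else right
    return node

X = [[0, 1], [0, 2], [3, 1], [3, 2]]          # two made-up attributes
y = ["class_a", "class_a", "class_b", "class_b"]
tree = build(X, y)
print([classify(tree, r) for r in X])  # -> matches y on this toy data
```

    On this toy data the first split on feature 0 already separates the classes perfectly, so the tree is a single decision node with two leaves.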

  5. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is difficult to address with land-cover classification techniques because of the complexity of land-use scenes. Scene classification is considered a promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery all derive from the computer vision community and mainly deal with terrestrial image recognition. Unlike terrestrial images, VHSR images are taken looking down by airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. In the experiments, the spectral information worked better than the structural information, while the combination of the two was better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. 
    The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them; the coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, experimental results with three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
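
    The histogram intersection kernel used in the final SVM is simple to state: for two coding vectors it sums the elementwise minima, so scenes whose histograms put mass in the same bins score higher. The coding vectors below are made up, but the concatenation of separately pooled spectral and structural parts mirrors the SSBFC pipeline described above.

```python
def hik(u, v):
    # Histogram intersection kernel: sum of elementwise minima.
    return sum(min(a, b) for a, b in zip(u, v))

# Made-up coding vectors: spectral and structural parts are coded and
# pooled separately, then concatenated into one final vector per scene.
spectral_a, structural_a = [0.6, 0.4], [0.1, 0.7, 0.2]
spectral_b, structural_b = [0.5, 0.5], [0.2, 0.6, 0.2]
scene_a = spectral_a + structural_a
scene_b = spectral_b + structural_b
print(hik(scene_a, scene_b))  # -> approximately 1.8
```

    Because the kernel is additive over bins, concatenating the two feature types simply adds their similarities, which is why coding them separately avoids one type swamping the other.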

  6. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
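
    The abstract does not spell out how the sub-band results are combined, so the sketch below should be read as an assumption-laden illustration: in the full FBCCA description, sub-band n's CCA correlation ρₙ is weighted as w(n) = n⁻ᵃ + b and the squared, weighted correlations are summed per candidate frequency. The CCA step itself is omitted and the correlation values are made up.

```python
# FBCCA target-identification sketch. rho[f] holds made-up sub-band CCA
# correlations between the band-pass-filtered EEG and the reference
# signals for candidate stimulus frequency f; sub-band n is weighted as
# w(n) = n**(-a) + b before the squared correlations are summed.
def fbcca_score(rhos, a=1.25, b=0.25):
    return sum((n ** -a + b) * r ** 2 for n, r in enumerate(rhos, start=1))

rho = {
    8.0: [0.30, 0.22, 0.15],
    8.2: [0.55, 0.41, 0.28],   # strongest at fundamental and harmonics
    8.4: [0.26, 0.18, 0.12],
}
target = max(rho, key=lambda f: fbcca_score(rho[f]))
print(target)  # -> 8.2
```

    The decreasing weights reflect that lower sub-bands (dominated by the fundamental) carry more SSVEP signal than higher harmonic bands, yet the harmonics still contribute, which is what lifts FBCCA above standard CCA.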

  7. Cognitively Elite, Cognitively Normal, and Cognitively Impaired Aging: Neurocognitive Status and Stability Moderate Memory Performance

    PubMed Central

    Dixon, Roger A.; de Frias, Cindy M.

    2014-01-01

    Objective Although recent theories of brain and cognitive aging distinguish among normal, exceptional, and impaired groups, further empirical evidence is required. We adapted and applied standard procedures for classifying groups of cognitively impaired (CI) and cognitively normal (CN) older adults to a third classification, cognitively healthy, exceptional, or elite (CE) aging. We then examined concurrent and two-wave longitudinal performance on composite variables of episodic, semantic, and working memory. Method We began with a two-wave source sample from the Victoria Longitudinal Study (VLS) (source n=570; baseline age=53–90 years). The goals were to: (a) apply standard and objective classification procedures to discriminate three cognitive status groups, (b) conduct baseline comparisons of memory performance, (c) develop two-wave status stability and change subgroups, and (d) compare stability subgroup differences in memory performance and change. Results As expected, the CE group performed best on all three memory composites. Similarly, expected status stability effects were observed: (a) the stable CE and CN groups performed memory tasks better than their unstable counterparts and (b) the stable (and chronic) CI group performed worse than its unstable (variable) counterpart. These stability group differences were maintained over two waves. Conclusion New data validate the expectations that (a) objective clinical classification procedures for cognitive impairment can be adapted for detecting cognitively advantaged older adults and (b) performance in three memory systems is predictably related to the tripartite classification. PMID:24742143

  8. Clinical, aetiological, anatomical and pathological classification (CEAP): gold standard and limits.

    PubMed

    Rabe, E; Pannier, F

    2012-03-01

    The first CEAP (clinical, aetiological, anatomical and pathological elements) consensus document was published after a consensus conference of the American Venous Forum, held at the sixth annual meeting of the AVF in February 1994 in Maui, Hawaii. In the following years the CEAP classification was published in many international journals and books, which has led to widespread international use of the CEAP classification since 1995. The aim of this paper is to review the benefits and limits of CEAP from the available literature. In a current Medline analysis with the keywords 'CEAP' and 'venous insufficiency', 266 publications using the CEAP classification in venous diseases are available. The CEAP classification has been accepted in the venous community and used in scientific publications, but in most cases only the clinical classification was used. Limitations of the first version, including a lack of clear definitions of clinical signs, led to a revised version. The CEAP classification is the gold standard of classification of chronic venous disorders today. Nevertheless, for proper use some facts have to be taken into account: the CEAP classification is not a severity classification, C2 summarizes all kinds of varicose veins, in C3 it may be difficult to separate venous and other reasons for oedema, and corona phlebectatica is not included in the classification. Further revisions of the CEAP classification may help to overcome the still-existing deficits.

  9. Examiner Training and Reliability in Two Randomized Clinical Trials of Adult Dental Caries

    PubMed Central

    Banting, David W.; Amaechi, Bennett T.; Bader, James D.; Blanchard, Peter; Gilbert, Gregg H.; Gullion, Christina M.; Holland, Jan Carlton; Makhija, Sonia K.; Papas, Athena; Ritter, André V.; Singh, Mabi L.; Vollmer, William M.

    2013-01-01

    Objectives This report describes the training of dental examiners participating in two dental caries clinical trials and reports the inter- and intra-examiner reliability scores from the initial standardization sessions. Methods Study examiners were trained to use a modified ICDAS-II system to detect the visual signs of non-cavitated and cavitated dental caries in adult subjects. Dental caries was classified as no caries (S), non-cavitated caries (D1), enamel caries (D2) and dentine caries (D3). Three standardization sessions involving 60 subjects and 3604 tooth surface calls were used to calculate several measures of examiner reliability. Results The prevalence of dental caries observed in the standardization sessions ranged from 1.4% to 13.5% of the coronal tooth surfaces examined. Overall agreement between pairs of examiners ranged from 0.88 to 0.99. An intra-class coefficient threshold of 0.60 was surpassed for all but one examiner. Inter-examiner unweighted kappa values were low (0.23–0.35) but weighted kappas and the ratio of observed to maximum kappas were more encouraging (0.42–0.83). The highest kappa values occurred for the S/D1 vs. D2/D3 two-level classification of dental caries, for which seven of the eight examiners achieved observed to maximum kappa values over 0.90. Intra-examiner reliability was notably higher than inter-examiner reliability for all measures and dental caries classification systems employed. Conclusion The methods and results for the initial examiner training and standardization sessions for two large clinical trials are reported. Recommendations for others planning examiner training and standardization sessions are offered. PMID:22320292
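    The gap between the low unweighted kappas and the higher weighted kappas reported above follows directly from how the two statistics are defined: linear weights give partial credit to near-miss disagreements on an ordinal scale such as S < D1 < D2 < D3. A minimal sketch (the category labels are taken from the abstract; the function itself is generic):

```python
from collections import Counter

def cohens_kappa(rater1, rater2, categories, weighted=False):
    """Cohen's kappa for two raters over ordered categories.
    With weighted=True, linear weights w = 1 - |i - j| / (k - 1)
    credit near-miss disagreements, which is why weighted kappas
    can exceed unweighted ones on ordinal scales."""
    idx = {c: i for i, c in enumerate(categories)}
    n, maxd = len(rater1), max(len(categories) - 1, 1)
    c1, c2 = Counter(rater1), Counter(rater2)

    def w(a, b):
        return 1.0 - abs(idx[a] - idx[b]) / maxd if weighted else float(a == b)

    # observed agreement vs agreement expected by chance from the margins
    observed = sum(w(a, b) for a, b in zip(rater1, rater2)) / n
    expected = sum(w(a, b) * c1[a] * c2[b]
                   for a in categories for b in categories) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

    For two raters who always disagree by one category, the unweighted kappa can be near zero while the weighted kappa stays clearly positive.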

  10. Reliability of the Walker Cranial Nonmetric Method and Implications for Sex Estimation.

    PubMed

    Lewis, Cheyenne J; Garvin, Heather M

    2016-05-01

    The cranial trait scoring method presented in Buikstra and Ubelaker (Standards for data collection from human skeletal remains. Fayetteville, AR: Arkansas Archeological Survey Research Series No. 44, 1994) and Walker (Am J Phys Anthropol, 136, 39–50, 2008) is the most common nonmetric cranial sex estimation method utilized by physical and forensic anthropologists. As such, the reliability and accuracy of the method are vital to ensure its validity in forensic applications. In this study, inter- and intra-observer error rates for the Walker scoring method were calculated using a sample of U.S. White and Black individuals (n = 135). Cohen's weighted kappas, intraclass correlation coefficients, and percentage agreements indicate good agreement between trials and observers for all traits except the mental eminence. Slight disagreement in scoring, however, was found to impact sex classifications, leading to lower accuracy rates than those published by Walker. Furthermore, experience does appear to impact trait scoring and sex classification. The use of revised population-specific equations that avoid the mental eminence is highly recommended to minimize the potential for misclassifications. © 2016 American Academy of Forensic Sciences.

  11. SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.

    PubMed

    Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue

    2018-05-02

    Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The resulting series of complex numbers is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely, clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These results make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
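    The pipeline described above (k-mers → complex numbers → stationary wavelet transform → numeric feature vector) can be sketched with a one-level undecimated Haar transform. The nucleotide-to-complex mapping and the summary statistics below are illustrative assumptions, not the encoding used by SSAW itself:

```python
import math

# Assumed unit-circle nucleotide encoding; SSAW's exact complex mapping differs.
BASE = {'A': 1 + 0j, 'C': 0 + 1j, 'G': -1 + 0j, 'T': 0 - 1j}

def kmer_signal(seq, k=3):
    """One complex number per k-mer (here: position-weighted sum of base codes)."""
    return [sum(BASE[c] * (j + 1) for j, c in enumerate(seq[i:i + k]))
            for i in range(len(seq) - k + 1)]

def haar_swt_level1(x):
    """One level of the stationary (undecimated) Haar wavelet transform with
    circular boundaries: no downsampling, so both bands keep the input length."""
    n, s = len(x), 1 / math.sqrt(2)
    approx = [(x[i] + x[(i + 1) % n]) * s for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) * s for i in range(n)]
    return approx, detail

def feature_vector(seq, k=3):
    """Numeric features: mean and variance of coefficient magnitudes per band."""
    feats = []
    for band in haar_swt_level1(kmer_signal(seq, k)):
        mags = [abs(z) for z in band]
        mean = sum(mags) / len(mags)
        feats += [mean, sum((v - mean) ** 2 for v in mags) / len(mags)]
    return feats

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
```

    Once every sequence is a fixed-length numeric vector, any standard clustering or classification algorithm can consume it, which is the point of the transformation.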

  12. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.

  13. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Synthetic Aperture Radar (SAR) remote sensing plays an important role in gathering information on damage to social infrastructure and is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method that compares post-seismic to pre-seismic data has become common. However, multi-temporal SAR processing is not always achievable, so developing a method for building damage detection that relies on post-seismic data alone is of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysing classification method for building damage recognition.

  14. Machine Learning Methods for Production Cases Analysis

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing of the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
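    The two ingredients named in the abstract, a k-Nearest-Neighbors classifier and the standard evaluation metrics, are simple enough to sketch directly. This is a generic illustration; the feature descriptors, labels, and parameter values are hypothetical, not those of the described production system:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Minimal k-Nearest-Neighbors: majority label among the k training
    points closest to the query (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda xy: sum((a - b) ** 2
                                               for a, b in zip(xy[0], query)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def precision_recall_accuracy(y_true, y_pred, positive):
    """The standard metrics, from raw counts: precision = TP/(TP+FP),
    recall = TP/(TP+FN), accuracy = correct/total."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, correct / len(y_true)
```

    For hazard identification, recall on the hazard class is usually the metric to watch, since a missed hazard is costlier than a false alarm.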

  15. 7 CFR 51.1837 - Classification of defects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... STANDARDS) United States Standards for Grades of Florida Tangerines Definitions § 51.1837 Classification of...) at stem end, or the equivalent of this amount, by volume, when occurring in other portions of the fruit Affecting all segments more than 1/4 inch (6.4 mm) at stem end, or the equivalent of this amount...

  16. 7 CFR 51.784 - Classification of defects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... STANDARDS) United States Standards for Grades of Florida Grapefruit Definitions § 51.784 Classification of.... Dryness or mushy condition Affecting all segments more than 1/4 inch (6.4 mm) at stem end, or the... more than 1/2 inch (12.7 mm) at stem end, or the equivalent of this amount, by volume, when occurring...

  17. 7 CFR 51.1837 - Classification of defects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... STANDARDS) United States Standards for Grades of Florida Tangerines Definitions § 51.1837 Classification of...) at stem end, or the equivalent of this amount, by volume, when occurring in other portions of the fruit Affecting all segments more than 1/4 inch (6.4 mm) at stem end, or the equivalent of this amount...

  18. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... of Small Business Status for Small Business Programs 19.303 Determining North American Industry...

  19. 7 CFR 51.784 - Classification of defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS) United States Standards for Grades of Florida Grapefruit Definitions § 51.784 Classification of... discoloration permitted in the grade Very deep or very rough aggregating more than a circle 1/2 inch (12.7 mm) in diameter; deep or rough aggregating more than a circle 1 inch (25.4 mm) in diameter; slightly...

  20. 7 CFR 51.1837 - Classification of defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS) United States Standards for Grades of Florida Tangerines Definitions § 51.1837 Classification of....1828.] Deep or rough aggregating more than a circle 1/4 inch (6.4 mm) in diameter; slightly rough with... slight depth aggregating more than a circle 11/8 inches (28.6 mm) in diameter Deep or rough aggregating...

  1. 7 CFR 28.909 - Costs.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... services provided under this section when billing is made to voluntary agents. Classification ..., TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Sampling § 28.909 Costs... the service. After classification the samples shall become the property of the Government. The...

  2. Comparison of Segmental Versus Longitudinal Intravascular Ultrasound Analysis for Pediatric Cardiac Allograft Vasculopathy.

    PubMed

    Kuhn, M A; Burch, M; Chinnock, R E; Fenton, M J

    2017-10-01

    Intravascular ultrasound (IVUS) has been routinely used in some centers to investigate cardiac allograft vasculopathy in pediatric heart transplant recipients. We present an alternative method using more sophisticated imaging software. This study presents a comparison of this method with an established standard method. All patients who had IVUS performed in 2014 were retrospectively evaluated. The standard technique consisted of analysis of 10 operator-selected segments along the vessel. Each study was re-evaluated using a longitudinal technique, taken at every third cardiac cycle, along the entire vessel. Semiautomatic edge detection software was used to detect vessel imaging planes. Measurements included outer and inner diameter, total and luminal area, maximal intimal thickness (MIT), and intimal index. Each IVUS was graded for severity using the Stanford classification. All results were given as mean ± standard deviation (SD). Groups were compared using Student t test. A P value <.05 was considered significant. There were 59 IVUS studies performed on 58 patients. There was no statistically significant difference between outer diameter, inner diameter, or total area. In the longitudinal group, there was a significantly smaller luminal area, higher MIT, and higher intimal index. Using the longitudinal technique, there was an increase in Stanford classification in 20 patients. The longitudinal technique appeared more sensitive in assessing the degree of cardiac allograft vasculopathy and may play a role in the increase in the degree of thickening seen. It may offer an alternative way of grading severity of cardiac allograft vasculopathy in pediatric heart transplant recipients. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Considerations of Unmanned Aircraft Classification for Civil Airworthiness Standards

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Hayhurst, Kelly J.; Morris, A. Terry; Verstynen, Harry A.

    2013-01-01

    The use of unmanned aircraft in the National Airspace System (NAS) has been characterized as the next great step forward in the evolution of civil aviation. Although use of unmanned aircraft systems (UAS) in military and public service operations is proliferating, civil use of UAS remains limited in the United States today. This report focuses on one particular regulatory challenge: classifying UAS to assign airworthiness standards. Classification is useful for ensuring that meaningful differences in design are accommodated by certification to different standards, and that aircraft with similar risk profiles are held to similar standards. This paper provides observations related to how the current regulations for classifying manned aircraft, based on dimensions of aircraft class and operational aircraft categories, could apply to UAS. This report finds that existing aircraft classes are well aligned with the types of UAS that currently exist; however, the operational categories are more difficult to align to proposed UAS use in the NAS. Specifically, the factors used to group manned aircraft into similar risk profiles do not necessarily capture all relevant UAS risks. UAS classification is investigated through gathering approaches to classification from a broad spectrum of organizations, and then identifying and evaluating the classification factors from these approaches. This initial investigation concludes that factors in addition to those currently used today to group manned aircraft for the purpose of assigning airworthiness standards will be needed to adequately capture risks associated with UAS and their operations.

  4. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
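    The basic setup the abstract builds on can be sketched for L2-regularized logistic regression: the gradient is computed on all data, but the Hessian X'DX is estimated from a random subset of rows. This is a minimal sketch of the plain subsampled-Hessian iteration only; it omits the paper's two-dimensional subproblem refinement, and the sampling fraction and regularizer are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def subsampled_newton_step(w, X, y, rng, sample_frac=0.3, lam=1e-3):
    """One iteration for L2-regularized logistic regression.
    Full-data gradient, but the Hessian is built from a random
    subsample of rows (the core subsampled-Hessian idea)."""
    n, d = X.shape
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w
    idx = rng.choice(n, max(d, int(sample_frac * n)), replace=False)
    ps = sigmoid(X[idx] @ w)
    D = ps * (1 - ps)                       # per-row curvature weights
    H = (X[idx] * D[:, None]).T @ X[idx] / len(idx) + lam * np.eye(d)
    return w - np.linalg.solve(H, grad)
```

    The cost trade-off is visible in the code: the O(n d^2) Hessian build shrinks with the sample size, while the gradient and the O(d^3) solve are unchanged, which is why cheaper but less accurate directions can end up slower overall.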

  5. Expected energy-based restricted Boltzmann machine for classification.

    PubMed

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient-descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
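    The difference between the two outputs can be shown concretely for a single-layer RBM over the joint visible vector v = [x, y]: with hidden pre-activations z_j = c_j + W_j · v, the negative free energy sums softplus(z_j), while the negative expected energy replaces softplus(z) with z·σ(z). A minimal sketch with assumed variable names (untrained weights; classification scans over candidate class vectors):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    return np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z)))  # numerically stable

def rbm_outputs(x, y_onehot, W, b, c):
    """Score a (x, y) pair with an RBM over v = [x, y].
    Returns (negative free energy, negative expected energy):
      -F(v)  = b.v + sum_j softplus(z_j)        (FE-RBM output)
      E[-E]  = b.v + sum_j z_j * sigmoid(z_j)   (EE-RBM output)
    where z_j = c_j + W_j . v."""
    v = np.concatenate([x, y_onehot])
    z = c + W @ v
    return b @ v + softplus(z).sum(), b @ v + (z * sigmoid(z)).sum()

def classify(x, W, b, c, n_classes, use_expected=True):
    """Evaluate the output for every candidate class vector, pick the best.
    (Tuple index 1 is the EE-RBM score, index 0 the FE-RBM score.)"""
    scores = [rbm_outputs(x, np.eye(n_classes)[k], W, b, c)[use_expected]
              for k in range(n_classes)]
    return int(np.argmax(scores))
```

    Since z·σ(z) ≈ softplus(z) for large positive z but passes through zero more steeply, the two scores rank strongly activated class vectors similarly while differing in their gradients, which is where the claimed learning advantage comes from.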

  6. 3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints

    NASA Astrophysics Data System (ADS)

    Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent

    2017-12-01

    The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to increasing creation and use of 3D models, thanks to 3D sensors that are readily available and easy to utilize. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark data sets. In this article, a novel technique for 3D shape representation and object classification is proposed based on analyses of the spatial, geometric distributions of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints, keypoints which are stable on the object surface and invariant to pose changes. Each class/instance of an object can be uniquely represented by a PDF. This shape representation is robust yet conceptually simple, easy to implement, and fast to compute. Both Euclidean and topological space on the object's surface are considered to build the PDFs. Topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested with object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and also, a comparative evaluation on a standard benchmark dataset with state-of-the-art methods is performed. Experimental results demonstrate superior classification performance of the new approach on RGB-D dataset and depth data.
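    A PDF built from the spatial disposition of keypoints can be sketched as a normalized histogram of pairwise Euclidean distances, normalized by the maximum distance so the signature is scale-invariant. This is a simplified stand-in for the paper's signature (which also uses geodesic distances on the surface); the bin count and similarity measure are illustrative choices:

```python
import math
from itertools import combinations

def distance_pdf(keypoints, bins=8):
    """Discrete PDF over pairwise Euclidean distances between 3D keypoints,
    scaled by the maximum distance for invariance to uniform scaling.
    (Geodesic distances could be substituted for a topological variant.)"""
    dists = [math.dist(p, q) for p, q in combinations(keypoints, 2)]
    dmax = max(dists)
    hist = [0] * bins
    for d in dists:
        hist[min(int(d / dmax * bins), bins - 1)] += 1
    return [h / len(dists) for h in hist]

def bhattacharyya(p, q):
    """Similarity between two discrete PDFs: 1.0 for identical distributions."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

    Classification then reduces to comparing the query object's PDF against the stored per-class PDFs and picking the most similar one.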

  7. Nursing interventions for rehabilitation in Parkinson's disease: cross mapping of terms

    PubMed Central

    Tosin, Michelle Hyczy de Siqueira; Campos, Débora Moraes; de Andrade, Leonardo Tadeu; de Oliveira, Beatriz Guitton Renaud Baptista; Santana, Rosimere Ferreira

    2016-01-01

    ABSTRACT Objective: to perform a cross-term mapping of nursing language in the patient record with the Nursing Interventions Classification system, in rehabilitation patients with Parkinson's disease. Method: a documentary research study to perform cross mapping. A probabilistic, simple random sample was composed of 67 records of patients with Parkinson's disease who participated in a rehabilitation program between March of 2009 and April of 2013. The research was conducted in three stages, in which the nursing terms were mapped to natural language and crossed with the Nursing Interventions Classification. Results: a total of 1,077 standard interventions were identified; after crossing with the taxonomy and refinement performed by the experts, these resulted in 32 interventions equivalent to the Nursing Interventions Classification (NIC) system. The NICs "Education: The process of the disease", "Contract with the patient", and "Facilitation of Learning" were present in 100% of the records. For these interventions, 40 activities were described, representing 13 activities per intervention. Conclusion: the cross mapping allowed for the identification of terms corresponding to the nursing interventions used every day in rehabilitation nursing, and compared them to the Nursing Interventions Classification. PMID:27508903

  8. Using two classification schemes to develop vegetation indices of biological integrity for wetlands in West Virginia, USA.

    PubMed

    Veselka, Walter; Rentch, James S; Grafton, William N; Kordek, Walter S; Anderson, James T

    2010-11-01

    Bioassessment methods for wetlands, and other bodies of water, have been developed worldwide to measure and quantify changes in "biological integrity." These assessments are based on a classification system, meant to ensure appropriate comparisons between wetland types. Using a local site-specific disturbance gradient, we built vegetation indices of biological integrity (Veg-IBIs) based on two commonly used wetland classification systems in the USA: One based on vegetative structure and the other based on a wetland's position in a landscape and sources of water. The resulting class-specific Veg-IBIs were comprised of 1-5 metrics that varied in their sensitivity to the disturbance gradient (R2=0.14-0.65). Moreover, the sensitivity to the disturbance gradient increased as metrics from each of the two classification schemes were combined (added). Using this information to monitor natural and created wetlands will help natural resource managers track changes in biological integrity of wetlands in response to anthropogenic disturbance and allows the use of vegetative communities to set ecological performance standards for mitigation banks.

  9. Ecotoxicological characterization of hazardous wastes.

    PubMed

    Wilke, B-M; Riepert, F; Koch, Christine; Kühne, T

    2008-06-01

    In Europe hazardous wastes are classified by 14 criteria, including ecotoxicity (H 14). Standardized methods originally developed for chemical and soil testing were adapted for the ecotoxicological characterization of wastes, including leachate and solid-phase tests. A consensus on which tests should be recommended as mandatory is still missing. Up to now, only guidance on how to proceed with the preparation of waste materials has been standardized by CEN, as EN 14735. In this study, tests including higher plants, earthworms, collembolans, microorganisms, duckweed and luminescent bacteria were selected to characterize the ecotoxicological potential of a boiler slag, a dried sewage sludge, a thin sludge and a waste petrol. In general, the instructions given in EN 14735 were suitable for all wastes used. The evaluation of the different test systems by determining the LC/EC(50) or NOEC-values revealed that the collembolan reproduction and the duckweed frond numbers were the most sensitive endpoints. For a final classification and ranking of wastes, the Toxicity Classification System (TCS) using EC/LC(50) values seems to be appropriate.

  10. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images.

    PubMed

    Korbar, Bruno; Olofson, Andrea M; Miraflor, Allen P; Nicka, Catherine M; Suriawinata, Matthew A; Torresani, Lorenzo; Suriawinata, Arief A; Hassanpour, Saeed

    2017-01-01

    Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%). Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.
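    Reporting accuracy with a 95% confidence interval, as in the evaluation above, can be sketched with the normal-approximation (Wald) interval for a binomial proportion. Note this is a generic sketch: the published interval may have been computed with a different method (e.g. an exact binomial or Wilson interval), so the numbers here will not reproduce the paper's exactly:

```python
import math

def accuracy_with_ci(correct, total, z=1.96):
    """Point accuracy plus a normal-approximation (Wald) 95% CI:
    p ± z * sqrt(p(1-p)/n), clipped to [0, 1]."""
    p = correct / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)
```

    The Wald interval is known to behave poorly for proportions near 0 or 1 on small samples, which is one reason published work often prefers Wilson or exact intervals.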

  11. 46 CFR 8.100 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... § 8.100 Definitions. Authorized Classification Society means a recognized classification society that... 46 Shipping 1 2010-10-01 2010-10-01 false Definitions. 8.100 Section 8.100 Shipping COAST GUARD... Coast Guard. Class Rules means the standards developed and published by a classification society...

  12. 7 CFR 400.309 - Requests for reconsideration.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... classification becomes effective. The request will be considered to have been made when received, in writing, by..., DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Non-Standard Underwriting Classification System... be assigned a nonstandard classification under this subpart will be notified of and allowed not less...

  13. 7 CFR 28.909 - Costs.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Sampling § 28.909 Costs... the service. After classification the samples shall become the property of the Government. The... this subpart. (b) The cost of High Volume Instrument (HVI) cotton classification service to producers...

  14. 7 CFR 400.309 - Requests for reconsideration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... classification becomes effective. The request will be considered to have been made when received, in writing, by..., DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Non-Standard Underwriting Classification System... be assigned a nonstandard classification under this subpart will be notified of and allowed not less...

  15. Endoscopic ultrasound as an adjunctive evaluation in patients with esophageal motor disorders subtyped by high-resolution manometry

    PubMed Central

    Krishnan, Kumar; Lin, Chen-Yuan; Keswani, Rajesh; Pandolfino, John E; Kahrilas, Peter J; Komanduri, Srinadh

    2015-01-01

    Background and aims: Esophageal motor disorders are a heterogeneous group of conditions, identified by esophageal manometry, that lead to esophageal dysfunction. The aim of this study was to assess the clinical utility of endoscopic ultrasound (EUS) in the further evaluation of patients with esophageal motor disorders categorized using the updated Chicago Classification. Methods: We performed a retrospective, single-center study of 62 patients with esophageal motor disorders categorized according to the Chicago Classification. All patients underwent standard radial endosonography to assess for extraesophageal findings or alternative explanations for esophageal outflow obstruction. Secondary outcomes included esophageal wall thickness among the different patient subsets within the Chicago Classification. Key Results: EUS identified clinically relevant findings in 9/62 (15%) patients that altered patient management and explained the etiology of esophageal outflow obstruction. We further identified substantial variability in esophageal wall thickness in a proportion of patients, including some with a significantly thickened non-muscular layer. Conclusions: EUS findings are clinically relevant in a significant number of patients with motor disorders and can alter clinical management. Variability in the thickness of the muscularis propria and non-muscular layers identified by EUS may also explain the observed variability in response to standard therapies for achalasia. PMID:25041229

  16. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    PubMed

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

    Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients may have symptoms of multiple diseases at the same time, and it is important to develop tools that help identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, combinations of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross-validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy comparable to that of common methods such as Support Vector Machines. We implemented DNNs to handle both problem-transformation and algorithm-adaptation multi-label methods and compared the two to see which is preferable. Deep Learning architectures have the potential of inferring more information about the patterns in physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contribution of each feature to a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging problem that warrants further study.
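    The micro-averaged precision, recall, and F-score used to evaluate multi-label predictions like those above can be sketched as follows, with each record's labels held in a set (the disease names here are illustrative, not the paper's encoding):

```python
def micro_prf(true_sets, pred_sets):
    """Micro-averaged precision, recall and F-score for multi-label
    predictions, where each sample's labels are given as a set."""
    tp = fp = fn = 0
    for true, pred in zip(true_sets, pred_sets):
        tp += len(true & pred)   # labels predicted and actually present
        fp += len(pred - true)   # labels predicted but absent
        fn += len(true - pred)   # labels present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# a patient may carry several chronic-disease labels at once
y_true = [{"diabetes", "hypertension"}, {"fatty_liver"}, set()]
y_pred = [{"diabetes"}, {"fatty_liver", "hypertension"}, set()]
p, r, f = micro_prf(y_true, y_pred)
```

Micro-averaging pools counts over all labels, so frequent labels dominate; macro-averaging (per-label means) is the usual alternative when rare diseases matter equally.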

  17. Linking pesticides and human health: a geographic information system (GIS) and Landsat remote sensing method to estimate agricultural pesticide exposure.

    PubMed

    VoPham, Trang; Wilson, John P; Ruddell, Darren; Rashed, Tarek; Brooks, Maria M; Yuan, Jian-Min; Talbott, Evelyn O; Chang, Chung-Chou H; Weissfeld, Joel L

    2015-08-01

    Accurate pesticide exposure estimation is integral to epidemiologic studies elucidating the role of pesticides in human health. Humans can be exposed to pesticides via residential proximity to agricultural pesticide applications (drift). We present an improved geographic information system (GIS) and remote sensing method, the Landsat method, to estimate agricultural pesticide exposure through matching pesticide applications to crops classified from temporally concurrent Landsat satellite remote sensing images in California. The image classification method utilizes Normalized Difference Vegetation Index (NDVI) values in a combined maximum likelihood classification and per-field (using segments) approach. Pesticide exposure is estimated according to pesticide-treated crop fields intersecting 500 m buffers around geocoded locations (e.g., residences) in a GIS. Study results demonstrate that the Landsat method can improve GIS-based pesticide exposure estimation by matching more pesticide applications to crops (especially temporary crops) classified using temporally concurrent Landsat images compared to the standard method that relies on infrequently updated land use survey (LUS) crop data. The Landsat method can be used in epidemiologic studies to reconstruct past individual-level exposure to specific pesticides according to where individuals are located.
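    The NDVI values underlying the crop classification above are a standard per-pixel band ratio; a minimal sketch (the reflectance numbers are illustrative only, not from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red),
    in [-1, 1]. Healthy vegetation reflects strongly in the near-infrared
    and absorbs red light, so crops score near 1 and bare soil near 0."""
    if nir + red == 0:
        return 0.0  # guard against division by zero on no-data pixels
    return (nir - red) / (nir + red)

# illustrative Landsat surface-reflectance values
crop_pixel = ndvi(nir=0.45, red=0.08)  # vigorous vegetation, high NDVI
soil_pixel = ndvi(nir=0.25, red=0.20)  # bare soil, low NDVI
```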

  18. Leucocyte classification for leukaemia detection using image processing techniques.

    PubMed

    Putzu, Lorenzo; Caocci, Giovanni; Di Ruberto, Cecilia

    2014-11-01

    The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. In contrast to other approaches that identify the nuclei first, because they are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool that avoids the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
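    The Gaussian radial basis kernel that performed best above measures similarity between feature vectors as K(x, y) = exp(-gamma * ||x - y||^2); a minimal sketch (the gamma value and feature vectors are illustrative, not the study's tuned parameters):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel K(x, y) = exp(-gamma * ||x - y||^2).
    gamma is illustrative here; in practice it is tuned, e.g. by
    cross-validation, alongside the SVM's regularization parameter."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

# identical feature vectors give K = 1; similarity decays with distance
same = rbf_kernel([1.0, 2.0], [1.0, 2.0])
near = rbf_kernel([1.0, 2.0], [1.1, 2.1])
far  = rbf_kernel([1.0, 2.0], [4.0, 6.0])
```

Because the kernel depends only on distance, an RBF SVM can separate classes (e.g. shape/colour/texture feature sets of lymphoblasts vs. normal leucocytes) that are not linearly separable in the original feature space.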

  19. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely sensed images using classification trees (CT), the images to which CT models were applied at other times or locations necessarily had to originate from the same satellite sensor as the images used in model development, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.

  20. Coal-cleaning plant refuse characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavalet, J.R.; Torak, E.R.

    1985-06-01

    This report describes a study performed for the Electric Power Research Institute's Coal Cleaning Test Facility in Homer City, Pennsylvania. The purpose of the study was to design standard methods for chemically and physically classifying refuse generated by physical coal cleaning and to construct a matrix that will accurately predict how a particular refuse will react to particular disposal methods, based solely on raw-coal characteristics and the process used to clean the coal. The value of such a classification system (which has not existed to this point) is the ability to design efficient and economical systems for disposing of specific coal-cleaning refuse. The report describes the project's literature search and a four-tier classification system. It also provides designs for test piles, sampling procedures, and guidelines for a series of experiments to test the classification system and create an accurate, reliable predictive matrix. 38 refs., 39 figs., 35 tabs.
