Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven
2006-03-01
Microbial source tracking (MST) methods need to be rapid, inexpensive, and accurate. Unfortunately, many MST methods produce a wealth of information that is difficult to interpret for the regulators who use it to make decisions. This paper describes the use of classification tree analysis to interpret the results of an MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present those results in a format readily interpretable by water quality managers. E. coli was isolated from raw sewage and from animal sources (cow, dog, gull, and horse), and FAME profiles were collected for each isolate. Correct classification rates determined with leave-one-out cross-validation were low overall (61%). A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared against the raw sewage isolates. Bootstrap aggregation, or adaptive resampling and combining, of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiating between individual fecal sources, but classification tree analysis enabled us to distinguish raw sewage from animal E. coli isolates, which had not previously been possible with other multivariate methods such as principal component analysis and cluster analysis.
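As a concrete illustration of the approach this abstract describes, the sketch below estimates leave-one-out correct classification rates for a single classification tree and for a bagged (bootstrap-aggregated) ensemble. The data are synthetic stand-ins for FAME profiles; all sizes, labels, and parameters are illustrative assumptions, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))  # 60 synthetic "isolates" x 10 FAME features
y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)  # 0 = animal, 1 = sewage

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
bagged = BaggingClassifier(tree, n_estimators=50, random_state=0)

# With LOO, each fold scores 0 or 1, so the mean is the correct classification rate.
rate_tree = cross_val_score(tree, X, y, cv=LeaveOneOut()).mean()
rate_bag = cross_val_score(bagged, X, y, cv=LeaveOneOut()).mean()
print(f"tree: {rate_tree:.2f}  bagged: {rate_bag:.2f}")
```

Bagging typically stabilizes high-variance learners like trees, which is why the paper reports a substantial gain from bootstrap aggregation.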
Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J
2015-06-30
Kinship analyses are important pillars of ecological and conservation genetic studies, with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism (SNP), or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA, so that up to >80% of first cousins could be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80% and 95% correct classification differed between relatedness categories, mating systems, and the number of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and SNP data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies. Copyright © 2015 Kopps et al.
Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J
2017-09-01
The Letournel classification of acetabular fracture shows poor reproducibility among inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic bone-fragment segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) the feasibility of using the open-source itksnap 3.0 software package without incurring extra cost for users. The study hypothesis was that semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 used reconstructions obtained by semi-automatic segmentation with the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-squared test. Assessment was repeated 2 weeks later to determine intra-observer reproducibility. Correct classification rates were significantly higher in the segmentation group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple fractures (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27 ± 3 min [range, 21–35 min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88) and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training.
Level of evidence: III, prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
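The between-group comparison reported in this abstract can be reproduced from the stated counts (114/138 correct versus 71/138). A minimal sketch using scipy's chi-squared test of independence, which by default applies Yates' continuity correction to 2×2 tables:

```python
from scipy.stats import chi2_contingency

#                 correct   incorrect
table = [[114, 138 - 114],   # segmentation group
         [71, 138 - 71]]     # conventional group
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

The resulting p-value is far below 0.0001, matching the significance level reported above.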
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
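The small-sample bias described above can be demonstrated with a toy simulation. The sketch below uses a nearest-centroid rule as a simple discriminant classifier and compares the apparent (resubstitution) correct classification rate at small and large group sizes under heavy group overlap; the group sizes, dimensionality, and separation are illustrative choices, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

def resubstitution_rate(n_per_group, n_vars, separation, trials=200):
    """Mean apparent (resubstitution) correct classification rate of a
    nearest-centroid rule over many simulated two-group datasets."""
    rates = []
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, size=(n_per_group, n_vars))
        b = rng.normal(separation, 1.0, size=(n_per_group, n_vars))
        X = np.vstack([a, b])
        y = np.repeat([0, 1], n_per_group)
        centroids = np.stack([a.mean(axis=0), b.mean(axis=0)])
        # Squared distance of every sample to each group centroid.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        rates.append(float((d.argmin(axis=1) == y).mean()))
    return float(np.mean(rates))

# High overlap (small separation): the apparent rate is strongly
# optimistic when group sample sizes are small.
small_n = resubstitution_rate(n_per_group=5, n_vars=8, separation=0.2)
large_n = resubstitution_rate(n_per_group=100, n_vars=8, separation=0.2)
print(f"n=5: {small_n:.2f}  n=100: {large_n:.2f}")
```

With only 5 samples per group, each point pulls its own centroid toward itself, inflating the apparent rate well above what large samples (and the true overlap) support.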
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J; Peng, Junhua
2017-04-03
The feasibility of visible and near-infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectral data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN) methods. Principal component analysis (PCA) gave only a rough classification with overlapping samples, while the linear LSSVR (Lin_LSSVR), RBF_LSSVR and RBF_NN models gave almost the same calibration and validation results. Because Lin_LSSVR is faster than RBF_LSSVR and RBF_NN, we selected the Lin_LSSVR model as a representative. In our study, the Lin_LSSVR model showed higher accuracy than the LDA and PLS models. Total correct classification rates of 87.79% and 96.51% were observed for the LDA and PLS models in the testing set, respectively, while Lin_LSSVR achieved a total correct classification rate of 99.42%. In the testing set, the Lin_LSSVR model showed correct classification rates of 100%, 100% and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively, assigning all samples to the right groups except one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
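A sketch of the kind of comparison this abstract reports: a linear discriminant model versus an RBF-kernel model on spectra-like data. Everything here is invented for illustration (synthetic "spectra", three hypothetical classes), and sklearn's SVM classifier stands in for LSSVR, which sklearn does not provide.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_per_class, n_wavelengths = 30, 50
base = np.sin(np.linspace(0, 4 * np.pi, n_wavelengths))  # shared spectral shape
X_parts, y_parts = [], []
for label, offset in enumerate([0.0, 0.3, 0.6]):         # three "species"
    X_parts.append(base + offset
                   + 0.2 * rng.normal(size=(n_per_class, n_wavelengths)))
    y_parts.append(np.full(n_per_class, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
rates = {}
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("RBF-SVM", SVC(kernel="rbf", gamma="scale"))]:
    rates[name] = float((model.fit(X_tr, y_tr).predict(X_te) == y_te).mean())
print(rates)
```

Per-class correct classification rates, as reported in the abstract, would come from a confusion matrix on the held-out set rather than the overall mean.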
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction, producing feature vectors, and ANMBP is used for classification. Feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. Brain images are classified as normal, Alzheimer's, glioma, or carcinoma. Based on simulation results, 10 energy values per feature suffice to classify brain cancer correctly; the correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP for classification of brain cancer.
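The wavelet-energy features described above can be sketched with a hand-rolled 1-D Haar transform (a stand-in for whichever wavelet the paper actually uses; the ANMBP network itself is omitted):

```python
import numpy as np

def haar_energies(signal, levels=3):
    """Per-level detail-coefficient energies of a 1-D Haar wavelet
    decomposition; these energies serve as compact classification features."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(x) % 2:                      # pad to an even length
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(float((detail ** 2).sum()))
        x = approx                          # recurse on the approximation
    return energies

feat = haar_energies(np.sin(np.linspace(0, 8 * np.pi, 256)))
print(feat)  # one energy value per decomposition level
```

In practice the same idea applied to image rows/columns (or a 2-D transform) yields the energy-per-feature vectors the paper feeds to the classifier.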
Ensemble of classifiers for confidence-rated classification of NDE signal
NASA Astrophysics Data System (ADS)
Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish
2016-02-01
An ensemble of classifiers aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensemble classification generate self-rated confidence scores that estimate the reliability of each prediction and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers have been widely used in computational intelligence, existing work largely overlooks the effect of the various sources of unreliability on the confidence of classification. In NDE, classification results are affected by the inherent ambiguity of classification, non-discriminative features, inadequate training samples, and measurement noise. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in classification performance for defect and non-defect indications.
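The confidence-weighted majority vote mentioned above can be illustrated in a few lines. This is a generic sketch of the idea, not the paper's NDE pipeline; the classifier outputs, confidences, and weights are made up.

```python
import numpy as np

def confidence_weighted_vote(predictions, confidences, weights):
    """Binary ensemble vote. Each row of `predictions` holds one weak
    classifier's 0/1 labels; its vote is scaled by its self-rated
    confidence and its boosting weight."""
    preds = np.asarray(predictions, dtype=float)
    w = np.asarray(weights, dtype=float) * np.asarray(confidences, dtype=float)
    score = (w[:, None] * (2.0 * preds - 1.0)).sum(axis=0)  # signed vote sum
    labels = (score > 0).astype(int)
    ensemble_conf = np.abs(score) / w.sum()  # 0 = split vote, 1 = unanimous
    return labels, ensemble_conf

preds = [[1, 0, 1],    # three weak classifiers (rows) x three samples (cols)
         [1, 1, 0],
         [0, 1, 1]]
labels, conf = confidence_weighted_vote(preds,
                                        confidences=[0.9, 0.6, 0.5],
                                        weights=[1.0, 1.0, 1.0])
print(labels, conf)
```

The ensemble confidence here reflects only vote agreement; the paper's point is that real-world sources of unreliability (noise, ambiguous features, sparse training data) should also feed into that score.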
Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification
quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates followed by COR-DNA and then RF-DNA in most test cases, and especially in low Eb/N0 ranges, where ZigBee is designed to operate.
Waldman, John R.; Fabrizio, Mary C.
1994-01-01
Stock contribution studies of mixed-stock fisheries rely on the application of classification algorithms to samples of unknown origin. Although the performance of these algorithms can be assessed, there are no guidelines regarding decisions about including minor stocks, pooling stocks into regional groups, or sampling discrete substocks to adequately characterize a stock. We examined these questions for striped bass Morone saxatilis of the U.S. Atlantic coast by applying linear discriminant functions to meristic and morphometric data from fish collected from spawning areas. Some of our samples were from the Hudson and Roanoke rivers and four tributaries of the Chesapeake Bay. We also collected fish of mixed-stock origin from the Atlantic Ocean near Montauk, New York. Inclusion of the minor stock from the Roanoke River in the classification algorithm decreased the correct-classification rate, whereas grouping the Roanoke River and Chesapeake Bay stocks into a regional ("southern") group increased the overall resolution. This increased resolution was offset by our inability to obtain separate contribution estimates for the groups that were pooled. Although multivariate analysis of variance indicated significant differences among Chesapeake Bay substocks, increasing the number of substocks in the discriminant analysis decreased the overall correct-classification rate. Although the inclusion of one, two, three, or four substocks in the classification algorithm did not greatly affect the overall correct-classification rates, the specific combination of substocks significantly affected the relative contribution estimates derived from the mixed-stock sample. Future studies of this kind must balance the costs and benefits of including minor stocks and would profit from examination of the variation in discriminant characters among all Chesapeake Bay substocks.
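The pooling effect described above (two overlapping stocks merged into one regional group raising the overall rate) can be reproduced on synthetic data. The stock locations and overlap below are invented for illustration, not the striped bass measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
hudson = rng.normal([0.0, 0.0], 1.0, size=(60, 2))       # well separated
chesapeake = rng.normal([3.0, 0.0], 1.0, size=(60, 2))
roanoke = rng.normal([3.5, 0.5], 1.0, size=(60, 2))      # overlaps Chesapeake

X = np.vstack([hudson, chesapeake, roanoke])
y_three = np.repeat([0, 1, 2], 60)   # three separate stocks
y_pooled = np.repeat([0, 1, 1], 60)  # Roanoke pooled into a "southern" group

lda = LinearDiscriminantAnalysis()
rate_three = cross_val_score(lda, X, y_three, cv=5).mean()
rate_pooled = cross_val_score(lda, X, y_pooled, cv=5).mean()
print(f"three stocks: {rate_three:.2f}  pooled: {rate_pooled:.2f}")
```

The trade-off the abstract notes follows directly: the pooled model classifies more accurately but can no longer estimate the two southern stocks' contributions separately.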
Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang
2013-04-01
The accuracy of intertrochanteric fracture classification is important; indeed, patient outcomes depend on correct classification. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images, and then to evaluate differences in the length of surgery based on the two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and the probabilities of misclassification (overestimates and underestimates) were determined. The impact of misclassification on length of surgery was also evaluated. In total, 370 patients and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of them, 214 and 156 patients were treated with intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8%, and the rates for A1, A2 and A3 fractures were 80.0%, 85.7% and 82.4%, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1%, P = 0.173). The overall rates of overestimates and underestimates were significantly different (5 vs 11.25%, P = 0.041). Subtracting the rate of overestimates from that of underestimates correlated positively with prolonged surgery, with a significant difference for intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially for performing intramedullary fixation.
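Correct-discrimination, overestimate, and underestimate rates like those above all come from an ordinal confusion matrix. A sketch with made-up counts (the study does not publish its full matrix):

```python
import numpy as np

# Rows: reference CT/3D-CT class (A1, A2, A3); columns: X-ray class.
# Counts are hypothetical, chosen only to illustrate the bookkeeping.
cm = np.array([[40, 6, 2],
               [5, 60, 5],
               [1, 4, 30]])
total = cm.sum()
correct = np.trace(cm) / total
overestimates = np.triu(cm, k=1).sum() / total    # X-ray grade above reference
underestimates = np.tril(cm, k=-1).sum() / total  # X-ray grade below reference
print(f"correct {correct:.1%}, over {overestimates:.1%}, under {underestimates:.1%}")
```

Because the classes are ordered (A1 < A2 < A3), every misclassification is either an overestimate (above the diagonal) or an underestimate (below it), so the three rates sum to one.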
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
Nesteruk, Tomasz; Nesteruk, Marta; Styczyńska, Maria; Barcikowska-Kotowicz, Maria; Walecki, Jerzy
2016-01-01
The aim of the study was to evaluate the diagnostic value of two measurement techniques in patients with cognitive impairment: automated volumetry of the hippocampus, entorhinal cortex, parahippocampal gyrus, posterior cingulate gyrus, temporal lobe cortex and corpus callosum; and fractional anisotropy (FA) index measurement of the corpus callosum using diffusion tensor imaging. A total of 96 patients underwent magnetic resonance imaging of the brain: 33 healthy controls (HC), 33 patients with diagnosed mild cognitive impairment (MCI), and 30 patients with early-stage Alzheimer's disease (AD). The severity of dementia was evaluated with a neuropsychological test battery. The volumetric measurements were performed automatically using FreeSurfer imaging software; the FA index measurements were performed manually using a region-of-interest (ROI) tool. The volumetric measurement of the temporal lobe cortex had the highest correct classification rate (68.7%), whereas the lowest was achieved with FA index measurement of the corpus callosum (51%). The highest sensitivity and specificity in discriminating between patients with MCI and early AD were achieved with volumetric measurement of the corpus callosum (73% and 71%, respectively; correct classification rate 72%). The highest sensitivity and specificity in discriminating between HC and patients with early AD were achieved with volumetric measurement of the entorhinal cortex (94% and 100%, respectively; correct classification rate 97%). The highest sensitivity and specificity in discriminating between HC and patients with MCI were achieved with volumetric measurement of the temporal lobe cortex (90% and 93%, respectively; correct classification rate 92%). The diagnostic value varied depending on the measurement technique.
The volumetric measurement of the atrophy proved to be the best imaging biomarker, which allowed the distinction between the groups of patients. The volumetric assessment of the corpus callosum proved to be a useful tool in discriminating between the patients with MCI vs. early AD.
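The three rates quoted throughout this abstract are simple functions of a 2×2 confusion matrix. A minimal sketch, with illustrative counts for the HC (n = 33) versus early-AD (n = 30) comparison; the study reports only the resulting rates, not the underlying matrix:

```python
def diagnostic_rates(tp, fn, fp, tn):
    """Sensitivity, specificity, and overall correct classification rate
    from a 2x2 confusion matrix (positives = patients with disease)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    correct = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, correct

# Hypothetical counts: 28 of 30 AD patients detected, no false positives.
sens, spec, ccr = diagnostic_rates(tp=28, fn=2, fp=0, tn=33)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, correct {ccr:.0%}")
```

With these counts the rates land near the entorhinal-cortex figures reported above (~94%, 100%, 97%), showing how sensitivity, specificity, and the correct classification rate hang together.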
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated using an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 to October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥ 10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). The percentage of missing data ranged from 2.2% (heart rate) to 45% (respiratory rate). The proportion of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges for correct classification were 4%, 10%, and 12% for the three models. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms.
Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
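The complete-case versus imputation comparison at the heart of this study can be sketched on synthetic data. Everything below (the data, the ~20% missingness, the fixed two-variable prediction rule, mean imputation in place of multiple imputation) is an illustrative assumption; the PROMMTT models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan     # ~20% missing per cell

complete = ~np.isnan(X_miss).any(axis=1)        # complete-case subset
X_imp = np.where(np.isnan(X_miss),
                 np.nanmean(X_miss, axis=0),    # column-mean imputation
                 X_miss)

def rule(M):
    """Fixed illustrative rule: predict 1 when the first two predictors sum > 0."""
    return (M[:, 0] + M[:, 1] > 0).astype(int)

rate_cc = float((rule(X_miss[complete]) == y[complete]).mean())
rate_imp = float((rule(X_imp) == y).mean())
print(f"complete cases: {complete.mean():.0%}  "
      f"rate (complete-case): {rate_cc:.2f}  rate (imputed): {rate_imp:.2f}")
```

Note how quickly per-variable missingness erodes the complete-case fraction (here three variables at ~20% each leave roughly half the rows usable), which is exactly why the study's complete-case proportions fell as low as 41%.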
An analysis of USSPACECOM's space surveillance network sensor tasking methodology
NASA Astrophysics Data System (ADS)
Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.
1992-12-01
This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results show that this model does demonstrate the differences in steady state position error based on varying observation rate and correction interval.
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
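The abstract's central caution, that accuracy measured on the training library overstates accuracy on a second library, can be sketched with the same model family the authors used (quadratic discriminant analysis). The two synthetic "libraries" below, and the distribution shift between them, are invented for illustration; they are not the actual bat call measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(4)

def make_library(shift):
    """Two 'species' of calls in a 2-D acoustic feature space; `shift`
    mimics a recording-condition difference between call libraries."""
    a = rng.normal([0.0 + shift, 0.0], 1.0, size=(100, 2))
    b = rng.normal([2.0 + shift, 2.0], 1.0, size=(100, 2))
    return np.vstack([a, b]), np.repeat([0, 1], 100)

X_train, y_train = make_library(shift=0.0)
X_other, y_other = make_library(shift=0.8)   # second, shifted library

qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
rate_train = float((qda.predict(X_train) == y_train).mean())
rate_other = float((qda.predict(X_other) == y_other).mean())
print(f"training library: {rate_train:.2f}  second library: {rate_other:.2f}")
```

Validating against an independently collected library, as the authors did, is what exposes this gap; within-library cross-validation alone cannot.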
The effect of call libraries and acoustic filters on the identification of bat echolocation
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-01-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys. PMID:25535563
SVM based colon polyps classifier in a wireless active stereo endoscope.
Ayoub, J; Granado, B; Mhanna, Y; Romain, O
2010-01-01
This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study is related to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized technically to improve its classification task by differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging and show a correct classification rate of approximately 97%. The work contains detailed statistics about the detection rate and the computing complexity. Inspired by the intensity histogram, the work shows a new approach that extracts a set of features based on the depth histogram and combines stereo measurement with SVM classifiers to correctly classify benign and malignant polyps.
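The pipeline in this abstract (histogram features fed to an SVM) can be sketched with a minimal hinge-loss linear SVM trained by sub-gradient descent. The "depth histograms", labels, and hyperparameters below are invented for illustration and are not from the study.

```python
def train_linear_svm(X, y, lr=0.1, lam=1e-4, epochs=200):
    """Minimal linear SVM (hinge loss + L2 penalty) via sub-gradient descent.
    X: list of feature vectors; y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score < 1:   # inside margin or misclassified
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                # correct with margin: weight decay only
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else -1

# Toy "depth histograms" (bin counts): flat profiles vs. one dominant bin.
flat   = [[1, 1, 1, 1], [2, 1, 2, 1]]
peaked = [[8, 1, 1, 1], [7, 2, 1, 1]]
X = flat + peaked
y = [-1, -1, 1, 1]   # -1 / +1 stand in for the two polyp classes
w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

A production classifier would use a kernel SVM on richer descriptors; this sketch only shows the decision-function machinery.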
Comparison of seven protocols to identify fecal contamination sources using Escherichia coli
Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.
2004-01-01
Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William; Kerfoot, Ian
2001-10-01
The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
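The 2-of-3 fusion rule described above can be sketched as follows. The clustering here is a simplified greedy single-link pass, and the contact coordinates and radius are invented for illustration.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def fuse_contacts(detections, radius):
    """Cluster contacts from multiple CAD/CAC algorithms by Euclidean
    distance, then keep clusters reported by at least 2 distinct algorithms
    (M-of-N binary fusion). detections: list of (algo_id, (x, y))."""
    clusters = []  # each cluster: list of (algo_id, point)
    for algo, pt in detections:
        for cluster in clusters:
            if any(dist(pt, q) <= radius for _, q in cluster):
                cluster.append((algo, pt))
                break
        else:
            clusters.append([(algo, pt)])
    # Declare a valid target where >= 2 distinct algorithms agree.
    return [c for c in clusters if len({a for a, _ in c}) >= 2]

# Illustrative contacts: a true target near (10, 10) seen by algorithms A
# and B, plus a lone false alarm from algorithm C.
detections = [("A", (10.0, 10.2)), ("B", (10.3, 9.9)), ("C", (50.0, 7.0))]
targets = fuse_contacts(detections, radius=1.0)
```

The lone contact from C is dropped, which is how this scheme trades single-algorithm false alarms for agreement across algorithms.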
Automated speech analysis applied to laryngeal disease categorization.
Gelzinis, A; Verikas, A; Bacauskiene, M
2008-07-01
The long-term goal of the work is a decision support system for diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening for laryngeal diseases. The effectiveness of 11 different feature sets in classification of voice recordings of the sustained phonation of the vowel sound /a/ into a healthy and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, an SVM, and a committee built using various aggregation options are used for the classification. The study was made using a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved when using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features, as well as linear prediction cosine transform coefficients, were amongst the feature sets providing the best performance. In the case of two-class classification, using recordings from 79 subjects representing the pathological class and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again the pitch and amplitude perturbation measures provided the best performance.
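One of the simplest aggregation options for a classifier committee like the one above is a plain majority vote, sketched here; the member predictions are hypothetical and the paper also explores other aggregation schemes.

```python
from collections import Counter

def committee_vote(member_predictions):
    """Majority vote over committee members' class predictions; ties are
    broken in favour of the earliest-listed answer (Counter preserves
    insertion order for equal counts in Python 3.7+)."""
    return Counter(member_predictions).most_common(1)[0][0]

# Hypothetical five-member committee classifying one recording into the
# healthy / diffuse / nodular classes used in the paper.
members = ["nodular", "healthy", "nodular", "diffuse", "nodular"]
decision = committee_vote(members)
```

Weighted voting or aggregation of class probabilities are common refinements when member accuracies differ.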
Huang, Y; Andueza, D; de Oliveira, L; Zawadzki, F; Prache, S
2015-11-01
Since consumers are showing increased interest in the origin and method of production of their food, it is important to be able to authenticate the dietary history of animals with rapid and robust methods applicable to ruminant products. Promising breakthroughs have been made in the use of spectroscopic methods on fat to discriminate between pasture-fed and concentrate-fed lambs. However, questions remained about their discriminatory ability in more complex feeding conditions, such as concentrate-finishing after pasture-feeding. We compared the ability of visible reflectance spectroscopy (Vis RS, wavelength range: 400 to 700 nm) with that of visible-near-infrared reflectance spectroscopy (Vis-NIR RS, wavelength range: 400 to 2500 nm) to differentiate between carcasses of lambs reared under three feeding regimes, using partial least squares discriminant analysis (PLS-DA) as the classification method. The sample set comprised perirenal fat of Romane male lambs fattened at pasture (P, n = 69), stall-fattened indoors on commercial concentrate and straw (S, n = 55), and finished indoors with concentrate and straw for 28 days after pasture-feeding (PS, n = 65). The overall correct classification rate was better for Vis-NIR RS than for Vis RS (99.0% v. 95.1%, P < 0.05). Vis-NIR RS allowed a correct classification rate of 98.6%, 100.0% and 98.5% for P, S and PS lambs, respectively, whereas Vis RS allowed a correct classification rate of 98.6%, 94.5% and 92.3% for P, S and PS lambs, respectively. This study suggests the likely implication of molecules absorbing light in the non-visible part of the Vis-NIR spectrum (possibly fatty acids), together with carotenoid and haem pigments, in the discrimination of the three feeding regimes.
Classification bias in commercial business lists for retail food stores in the U.S.
Han, Euna; Powell, Lisa M; Zenk, Shannon N; Rimkus, Leah; Ohri-Vachaspati, Punam; Chaloupka, Frank J
2012-04-18
Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results from secondary datasets, and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.
SSVEP-BCI implementation for 37-40 Hz frequency range.
Müller, Sandra Mara Torres; Diez, Pablo F; Bastos-Filho, Teodiano Freire; Sarcinelli-Filho, Mário; Mut, Vicente; Laciar, Eric
2011-01-01
This work presents a Brain-Computer Interface (BCI) based on Steady State Visual Evoked Potentials (SSVEP), using higher stimulus frequencies (>30 Hz). Using a statistical test and a decision tree, the real-time EEG recordings of six volunteers are analyzed, with the classification result updated each second. The BCI developed does not need any kind of settings or adjustments, which makes it more general. Offline results are presented, corresponding to a correct classification rate of up to 99% and an Information Transfer Rate (ITR) of up to 114.2 bits/min.
Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland
2018-05-30
Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) the 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI model. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.
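The bootstrap optimism correction mentioned above can be sketched as follows: apparent accuracy on the full sample is reduced by the average optimism estimated from bootstrap resamples. A toy 1-D nearest-centroid classifier stands in for the paper's linear SVM, and all data are invented.

```python
import random

def accuracy(model, data):
    """Share of (x, y) pairs the model labels correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def optimism_corrected(fit, data, n_boot=200, seed=0):
    """Bootstrap optimism correction, sketched: for each resample, optimism
    is (accuracy on the resample) minus (accuracy on the original sample);
    the mean optimism is subtracted from the apparent accuracy."""
    rng = random.Random(seed)
    apparent = accuracy(fit(data), data)
    optimism, used = 0.0, 0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        if len({y for _, y in boot}) < 2:
            continue  # skip degenerate resamples missing a class
        m = fit(boot)
        optimism += accuracy(m, boot) - accuracy(m, data)
        used += 1
    return apparent - optimism / used

def fit_centroid(data):
    """Toy 1-D nearest-centroid classifier standing in for the linear SVM."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    cp, cn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: 1 if abs(x - cp) < abs(x - cn) else 0

data = ([(v, 0) for v in (1.0, 1.2, 0.8, 1.1)]
        + [(v, 1) for v in (3.0, 3.2, 2.8, 3.1)])
corrected = optimism_corrected(fit_centroid, data)
```

On this cleanly separated toy data every resampled model classifies perfectly, so no optimism is subtracted; with noisy, high-dimensional data the correction pulls the estimate down toward out-of-sample performance.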
Kalegowda, Yogesh; Harmer, Sarah L
2013-01-08
Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples, and 91% for Eh-modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh-modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
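The PCA stage of the hybrid classifier, keeping just enough components to explain 95% of the variance, can be sketched with an SVD-based projection; the "spectra" are synthetic and the downstream ANN is omitted.

```python
import numpy as np

def pca_scores_95(X, var_target=0.95):
    """Project samples onto the fewest principal components whose
    cumulative explained variance reaches var_target; returns the PC
    scores and the number of components kept."""
    Xc = X - X.mean(axis=0)                    # centre each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)                  # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(var), var_target) + 1)
    return Xc @ Vt[:k].T, k                    # scores, components kept

rng = np.random.default_rng(0)
# Illustrative "spectra": 20 samples, 6 channels, driven by 2 latent factors
# plus a small amount of noise, so very few PCs carry almost all variance.
latent = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 6))
X = latent + 0.01 * rng.normal(size=(20, 6))
scores, k = pca_scores_95(X)
```

In the paper's setup, `scores` (not the raw spectra) would then be the input layer of the ANN, shrinking the network and suppressing noise channels.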
Abbatangelo, Marco; Núñez-Carmona, Estefanía; Sberveglieri, Veronica; Zappa, Dario; Comini, Elisabetta; Sberveglieri, Giorgio
2018-05-18
Parmigiano Reggiano cheese is one of the most appreciated and consumed foods worldwide, especially in Italy, for its high content of nutrients and its taste. However, these characteristics make this product subject to counterfeiting in different forms. In this study, a novel method based on an electronic nose has been developed to investigate the potential of this tool to distinguish rind percentages in grated Parmigiano Reggiano packages, which should be lower than 18%. Different samples, in terms of percentage, seasoning and rind working process, were considered to address the problem from all angles. In parallel, the GC-MS technique was used to identify the compounds that characterize Parmigiano and to relate them to sensor responses. Data analysis consisted of two stages: multivariate analysis (PLS) and classification performed hierarchically with PLS-DA and ANNs. Results were promising in terms of correct classification of the samples. The correct classification rate (%) was higher for ANNs than PLS-DA, with correct identification approaching 100 percent.
Classification and prediction of pilot weather encounters: A discriminant function analysis.
O'Hare, David; Hunter, David R; Martinussen, Monica; Wiggins, Mark
2011-05-01
Flight into adverse weather continues to be a significant hazard for General Aviation (GA) pilots. Weather-related crashes have a significantly higher fatality rate than other GA crashes. Previous research has identified lack of situational awareness, risk perception, and risk tolerance as possible explanations for why pilots would continue into adverse weather. However, very little is known about the nature of these encounters or the differences between pilots who avoid adverse weather and those who do not. Visitors to a web site described an experience with adverse weather and completed a range of measures of personal characteristics. The resulting data from 364 pilots were carefully screened and subjected to a discriminant function analysis. Two significant functions were found. The first, accounting for 69% of the variance, reflected measures of risk awareness and pilot judgment, while the second differentiated pilots in terms of their experience levels. The variables measured in this study enabled us to correctly discriminate between the three groups of pilots considerably better (53% correct classifications) than would have been possible by chance (33% correct classifications). The implications of these findings for targeting safety interventions are discussed.
Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.
2013-01-01
Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to be able to make solid statements about the benefit of distortion correction, we use several different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly because an eventual benefit of distortion correction highly depends on the feature extraction method used for the classification. PMID:23981585
Correlation-based pattern recognition for implantable defibrillators.
Wilkins, J.
1996-01-01
An estimated 300,000 Americans die each year from cardiac arrhythmias. Historically, drug therapy or surgery were the only treatment options available for patients suffering from arrhythmias. Recently, implantable arrhythmia management devices have been developed. These devices allow abnormal cardiac rhythms to be sensed and corrected in vivo. Proper arrhythmia classification is critical to selecting the appropriate therapeutic intervention. The classification problem is made more challenging by the power and computation constraints imposed by the short battery life of implantable devices. Current devices utilize heart-rate-based classification algorithms. Although easy to implement, rate-based approaches have unacceptably high error rates in distinguishing supraventricular tachycardia (SVT) from ventricular tachycardia (VT). Conventional morphology assessment techniques used in ECG analysis often require too much computation to be practical for implantable devices. In this paper, a computationally efficient arrhythmia classification architecture using correlation-based morphology assessment is presented. The architecture classifies individual heartbeats by assessing similarity between an incoming cardiac signal vector and a series of prestored class templates. A series of these beat classifications is used to make an overall rhythm assessment. The system makes use of several new results in the field of pattern recognition. The resulting system achieved excellent accuracy in discriminating SVT and VT. PMID:8947674
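Correlation-based template matching of the kind described here can be sketched as follows; the beat waveforms, template shapes, and threshold are invented for illustration and are far simpler than real ECG morphology.

```python
import math

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two equal-length signals,
    in [-1, 1]; insensitive to baseline offset and amplitude scaling."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def classify_beat(beat, templates, threshold=0.8):
    """Assign a beat to the class of the best-matching prestored template,
    or 'unknown' if no template correlates above threshold."""
    label, best = "unknown", threshold
    for name, tpl in templates.items():
        r = normalized_correlation(beat, tpl)
        if r > best:
            label, best = name, r
    return label

templates = {
    "normal": [0, 1, 5, 1, 0, -1, 0],   # narrow QRS-like shape (illustrative)
    "VT":     [0, 2, 3, 4, 3, 2, 0],    # wide, slurred shape (illustrative)
}
beat = [0, 1, 4.5, 1.2, 0, -0.8, 0]
label = classify_beat(beat, templates)
```

Only multiply-accumulate operations per template are needed, which is what makes this style of morphology assessment attractive under implant power budgets.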
The Hispanic mortality advantage and ethnic misclassification on US death certificates.
Arias, Elizabeth; Eschbach, Karl; Schauman, William S; Backlund, Eric L; Sorlie, Paul D
2010-04-01
We tested the data artifact hypothesis regarding the Hispanic mortality advantage by investigating whether and to what degree this advantage is explained by Hispanic origin misclassification on US death certificates. We used the National Longitudinal Mortality Study, which links Current Population Survey records to death certificates for 1979 through 1998, to estimate the sensitivity, specificity, and net ascertainment of Hispanic ethnicity on death certificates compared with survey classifications. Using national vital statistics mortality data, we estimated Hispanic age-specific and age-adjusted death rates, which were uncorrected and corrected for death certificate misclassification, and produced death rate ratios comparing the Hispanic with the non-Hispanic White population. Hispanic origin reporting on death certificates in the United States is reasonably good. The net ascertainment of Hispanic origin is just 5% higher on survey records than on death certificates. Corrected age-adjusted death rates for Hispanics are lower than those for the non-Hispanic White population by close to 20%. The Hispanic mortality paradox is not explained by an incongruence between ethnic classification in vital registration and population data systems.
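The arithmetic of a net-ascertainment correction like the one above can be sketched as follows. The function and all numbers are illustrative assumptions, not the study's actual estimates or methodology, which involve age-specific sensitivity and specificity.

```python
def corrected_death_rate(observed_deaths, population, classification_ratio):
    """Adjust a group-specific death rate for misclassification on death
    certificates. classification_ratio is survey-identified members per
    certificate-identified member; net under-ascertainment on certificates
    pushes the ratio above 1 and scales observed deaths up accordingly."""
    corrected_deaths = observed_deaths * classification_ratio
    return corrected_deaths / population * 100_000  # deaths per 100,000

# Hypothetical numbers: certificates undercount the group by ~5% relative
# to survey classification (the net ascertainment gap cited in the abstract).
rate = corrected_death_rate(observed_deaths=1_900, population=500_000,
                            classification_ratio=1.05)
```

Even after this upward correction of the numerator, the abstract reports that the group's rates stay well below the comparison population, which is the substance of the "mortality paradox" finding.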
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suited for tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification rate of only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
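Fleiss' kappa, used above to quantify inter-expert agreement, is straightforward to compute from a subjects-by-categories count matrix; the rating counts below are invented for illustration.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-rater agreement. ratings[i][j] = number of
    raters assigning subject i to category j, with the same total number
    of raters n for every subject."""
    N = len(ratings)          # subjects
    n = sum(ratings[0])       # raters per subject
    k = len(ratings[0])       # categories
    # Category proportions p_j over all assignments.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Mean per-subject observed agreement.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    P_e = sum(pj * pj for pj in p)   # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three hypothetical experts rating four sperm heads into three categories.
ratings = [
    [3, 0, 0],  # unanimous "normal"
    [0, 3, 0],  # unanimous "tapered"
    [2, 1, 0],  # split decision
    [0, 1, 2],  # split decision
]
kappa = fleiss_kappa(ratings)
```

Values near 1 indicate near-unanimous experts; the moderate value here reflects the two split decisions, mirroring the kind of inter-expert variability the study reports.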
NASA Astrophysics Data System (ADS)
Hu, Guanyu; Fang, Zhou; Liu, Bilin; Chen, Xinjun; Staples, Kevin; Chen, Yong
2018-04-01
The cephalopod beak is a vital hard structure with a stable configuration and has been widely used for the identification of cephalopod species. This study was conducted to determine the best standardization method for identifying different species by measuring 12 morphological variables of the beaks of Illex argentinus, Ommastrephes bartramii, and Dosidicus gigas collected by Chinese jigging vessels. To remove the effects of size, these morphometric variables were standardized using three methods. The average ratios of the upper beak morphological variables to upper crest length of O. bartramii and D. gigas were found to be greater than those of I. argentinus. However, for lower beaks, only the averages of LRL (lower rostrum length)/LCL (lower crest length), LRW (lower rostrum width)/LCL, and LLWL (lower lateral wall length)/LCL of O. bartramii and D. gigas were greater than those of I. argentinus. The ratios of beak morphological variables to crest length were all found to differ significantly among the three species (P < 0.001). Among the three standardization methods, the correct classification rate of stepwise discriminant analysis (SDA) was the highest when using the ratios of beak morphological variables to crest length. Compared with hood length, the correct classification rate was slightly higher when using beak variables standardized by crest length using an allometric model. The correct classification rate of the lower beak was also found to be greater than that of the upper beak. This study indicates that the ratios of beak morphological variables to crest length could be used for interspecies and intraspecies identification. Meanwhile, the lower beak variables were found to be more effective than upper beak variables in classifying beaks found in the stomachs of predators.
Using the regulation of accuracy to study performance when the correct answer is not known.
Luna, Karlos; Martín-Luengo, Beatriz
2017-08-01
We examined memory performance in multiple-choice questions when correct answers were not always present. How do participants answer when they are aware that the correct alternative may not be present? To answer this question we allowed participants to decide on the number of alternatives in their final answer (the plurality option), and whether they wanted to report or withhold their answer (the report option). We also studied the memory benefits when both the plurality and the report options were available. In two experiments participants watched a crime and then answered questions with five alternatives. Half of the questions were presented with the correct alternative and half were not. Participants selected one alternative and rated confidence, then selected three alternatives and again rated confidence, and finally indicated whether they preferred the answer with one or with three alternatives (plurality option). Lastly, they decided whether to report or withhold the answer (report option). Results showed that participants' confidence in their selections was higher, that they chose more single answers, and that they preferred to report more often when the correct alternative was presented. We also attempted to classify questions a posteriori as presented with or without the correct alternative, based on participants' selections. Classification was better than chance, and encouraging, but the forensic application of the classification technique is still limited, since a large percentage of responses were incorrectly classified. Our results also showed that the memory benefits of both plurality and report options overlap. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Hyperspectral image segmentation using a cooperative nonparametric approach
NASA Astrophysics Data System (ADS)
Taher, Akar; Chehdi, Kacem; Cariou, Claude
2013-10-01
In this paper a new unsupervised, nonparametric, cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and the intermediate classification results are evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation to segment each band of the image: the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. To manage similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. The approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. It was then evaluated on two real applications, using a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain), respectively. The correct classification rate (CCR) for the first application is over 97%, and the average correct classification rate (ACCR) for the second is over 99%.
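The Fuzzy C-means component of such a cooperative scheme can be sketched in a few lines. The following is a minimal, illustrative FCM on synthetic one-dimensional "pixel intensity" data, not the authors' implementation; the data, cluster count, and fuzzifier m=2 are assumptions made for the example:

```python
import math

def fcm_1d(data, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on 1-D data: alternate the standard
    membership update u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
    with the centroid update (a u^m-weighted mean)."""
    lo, hi = min(data), max(data)
    # deterministic initialization: centers spread across the data range
    centers = [lo + (k + 1) * (hi - lo) / (c + 1) for k in range(c)]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]  # avoid div-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        centers = [sum(u[n][k] ** m * data[n] for n in range(len(data))) /
                   sum(u[n][k] ** m for n in range(len(data)))
                   for k in range(c)]
    return centers, u

# two well-separated intensity clusters in a single spectral band
pixels = [0.10, 0.15, 0.20, 0.25, 0.80, 0.85, 0.90, 0.95]
centers, memberships = fcm_1d(pixels, c=2)
```

Each pixel receives a fuzzy membership in every cluster (each row of memberships sums to one); hard labels, when needed for fusion with the LBG result, can be taken as the argmax.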
Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.
2007-01-01
We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
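A logistic mortality model of this general shape can be fitted with plain gradient descent. The sketch below uses invented tree-ring indices (average growth, growth trend, abrupt-decline count) purely for illustration; neither the data nor the coefficients are from the study:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic model P(death) = 1/(1+exp(-(b0 + b.x)))
    by stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p  # gradient term for each observation
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# invented tree-ring indices: [average growth, growth trend, abrupt-decline count]
X = [[1.2, 0.05, 0], [1.0, 0.02, 1], [1.1, 0.04, 0], [0.9, 0.01, 1],
     [0.4, -0.06, 3], [0.5, -0.04, 2], [0.3, -0.08, 4], [0.6, -0.03, 2]]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = tree died
w = fit_logistic(X, y)
preds = [1 if predict(w, xi) > 0.5 else 0 for xi in X]
```

Classifying at a 0.5 probability threshold reproduces the dead/live split on this toy data; the study's point is that adding growth trend and decline counts to average growth improves exactly this kind of classification.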
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil]
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth for the same area, and classified again. Comparison shows that the classification improves with atmospherically corrected images.
Probability interpretations of intraclass reliabilities.
Ellis, Jules L
2013-11-20
Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classified significantly above (or below) the mean and the probability that an organization is classified correctly given that it is classified significantly above (or below) the mean. One can view these probabilities as the informativeness and the correctness of the classification, respectively. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.
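The correctness/informativeness trade-off can be illustrated with a small simulation (an assumed setup, not the article's closed-form derivation): true organization effects and measurement error split a unit variance according to the reliability, and an organization is "classed significantly above the mean" when its observed score exceeds a critical number of standard errors of measurement.

```python
import random

def flag_rates(reliability, z_crit, n_orgs=200_000, seed=1):
    """Simulate organization scores: observed = true + error, with
    Var(true) = reliability and Var(error) = 1 - reliability."""
    rng = random.Random(seed)
    se = (1.0 - reliability) ** 0.5  # standard error of measurement
    flagged = correct = 0
    for _ in range(n_orgs):
        true = rng.gauss(0.0, reliability ** 0.5)
        obs = true + rng.gauss(0.0, se)
        if obs > z_crit * se:         # classed significantly above the mean
            flagged += 1
            if true > 0:
                correct += 1          # flagged AND genuinely above the mean
    informativeness = flagged / n_orgs    # share of orgs classed above
    correctness = correct / flagged       # P(truly above | classed above)
    return informativeness, correctness

info_easy, corr_easy = flag_rates(0.8, z_crit=1.0)
info_strict, corr_strict = flag_rates(0.8, z_crit=2.58)
```

Raising the critical value buys correctness at the cost of informativeness, which is the inverse relationship the article describes.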
Metabolomics for organic food authentication: Results from a long-term field study in carrots.
Cubero-Leon, Elena; De Rudder, Olivier; Maquet, Alain
2018-01-15
Increasing demand for organic products and their premium prices make them an attractive target for fraudulent malpractices. In this study, a large-scale comparative metabolomics approach was applied to investigate the effect of the agronomic production system on the metabolite composition of carrots and to build statistical models for prediction purposes. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) was applied successfully to predict the origin of the agricultural system of the harvested carrots on the basis of features determined by liquid chromatography-mass spectrometry. When the training set used to build the OPLS-DA models contained samples representative of each harvest year, the models were able to classify unknown samples correctly (100% correct classification). If a harvest year was left out of the training sets and used for predictions, the correct classification rates achieved ranged from 76% to 100%. The results therefore highlight the potential of metabolomic fingerprinting for organic food authentication purposes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Progress toward the determination of correct classification rates in fire debris analysis.
Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E
2013-07-01
Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun
2011-01-01
Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects due to topography. The purpose of this study was to compare six different topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as a classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve ETM+ image classification overall accuracy. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. On the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was satisfactorily able to correct for the topographic effects in severely shadowed areas.
48 CFR 52.222-8 - Payrolls and Basic Records.
Code of Federal Regulations, 2011 CFR
2011-10-01
... social security number of each such worker, his or her correct classification, hourly rates of wages paid... information required to be maintained under paragraph (a) of this clause, except that full social security... employee's social security number). The required weekly payroll information may be submitted in any form...
48 CFR 52.222-8 - Payrolls and Basic Records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... social security number of each such worker, his or her correct classification, hourly rates of wages paid... information required to be maintained under paragraph (a) of this clause, except that full social security... employee's social security number). The required weekly payroll information may be submitted in any form...
Ryan, D; Shephard, S; Kelly, F L
2016-09-01
This study investigates temporal stability in the scale microchemistry of brown trout Salmo trutta in feeder streams of a large heterogeneous lake catchment and rates of change after migration into the lake. Laser-ablation inductively coupled plasma mass spectrometry was used to quantify the elemental concentrations of Na, Mg, Mn, Cu, Zn, Ba and Sr in archived (1997-2002) scales of juvenile S. trutta collected from six major feeder streams of Lough Mask, County Mayo, Ireland. Water element:Ca ratios within these streams were determined for the fish sampling period and for a later period (2013-2015). Salmo trutta scale Sr and Ba concentrations were significantly (P < 0·05) correlated with stream water sample Sr:Ca and Ba:Ca ratios, respectively, from both periods, indicating multi-annual stability in scale and water elemental signatures. Discriminant analysis of scale chemistries correctly classified 91% of sampled juvenile S. trutta to their stream of origin using a cross-validated classification model. This model was used to test whether assumed post-depositional change in scale element concentrations reduced correct natal stream classification of S. trutta in successive years after migration into Lough Mask. Fish residing in the lake for 1-3 years could be reliably classified to their most likely natal stream, but the probability of correct classification diminished strongly with longer lake residence. Use of scale chemistry to identify natal streams of lake S. trutta should focus on recent migrants, but may not require contemporary water chemistry data. © 2016 The Fisheries Society of the British Isles.
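The cross-validated classification step can be sketched with leave-one-out cross-validation. The example below substitutes a nearest-centroid classifier for the discriminant analysis used in the study, and the (Sr:Ca, Ba:Ca) signatures are invented for illustration:

```python
def loo_correct_rate(samples):
    """Leave-one-out cross-validation with a nearest-centroid classifier
    (a simple stand-in for discriminant analysis)."""
    correct = 0
    for i, (x, true_label) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]  # hold one fish out
        by_class = {}
        for feats, lab in train:
            by_class.setdefault(lab, []).append(feats)
        # centroid of each stream's signatures in the training fold
        centroids = {lab: [sum(col) / len(col) for col in zip(*feats_list)]
                     for lab, feats_list in by_class.items()}
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(x, centroids[lab])))
        correct += (pred == true_label)
    return correct / len(samples)

# invented (Sr:Ca, Ba:Ca) scale signatures for three natal streams
samples = [((2.1, 0.30), "A"), ((2.0, 0.35), "A"), ((2.2, 0.28), "A"),
           ((1.1, 0.90), "B"), ((1.0, 0.95), "B"), ((1.2, 0.85), "B"),
           ((3.0, 0.10), "C"), ((3.1, 0.12), "C"), ((2.9, 0.15), "C")]
rate = loo_correct_rate(samples)
```

Here the three invented streams are cleanly separable, so every held-out fish is assigned to its stream of origin; real scale chemistries overlap, which is why the study reports 91% rather than 100%.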
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLCAD09000.L14300000.ES0000; CACA- 051457] Correction for Notice of Realty Action; Recreation and Public Purposes Act Classification; California AGENCY: Bureau of Land Management, Interior. ACTION: Correction SUMMARY: This notice corrects a Notice of Realty...
A likelihood ratio model for the determination of the geographical origin of olive oil.
Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz
2015-01-01
Food fraud or food adulteration may be of forensic interest, for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, the geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages over typical chemometric methods because the LR model takes into account information about the sample's rarity in a relevant population. Such properties are of particular interest to forensic scientists, and it has therefore been the aim of this study to examine the issue of olive oil classification with the use of different LR models and to assess their pertinence under selected data pre-processing methods (logarithm-based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) were considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as measures of the performance of each model. The application of LR models to determining the geographical origin of olive oil proved satisfactory across many variants of data pre-processing: the rates of correct classification were close to 100% and a considerable reduction of information loss was observed. The work also presents a comparative study of the performance of linear discriminant analysis on the considered classification problems. An approach to choosing the value of the smoothing parameter for the kernel density estimation based LR models is also highlighted. Copyright © 2014 Elsevier B.V. All rights reserved.
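A kernel density estimation based LR model reduces, in one dimension, to a ratio of two KDEs evaluated at the questioned measurement. The sketch below uses invented oleic-acid percentages and a fixed smoothing parameter h; the paper discusses how that parameter should actually be chosen:

```python
import math

def kde_pdf(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / norm

# invented oleic-acid percentages for reference oils of two candidate origins
south = [71.0, 72.5, 70.8, 73.1, 71.9, 72.2]
north = [76.5, 77.2, 78.0, 76.9, 77.5, 78.3]
h = 0.8  # smoothing parameter; its choice matters, as the paper notes

x_questioned = 72.0
lr = kde_pdf(x_questioned, south, h) / kde_pdf(x_questioned, north, h)
```

An LR far above 1 supports the "South" proposition for the questioned sample; in practice the LR model would also be validated, e.g. with the empirical cross entropy measure used in the paper.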
Statistical sensor fusion of ECG data using automotive-grade sensors
NASA Astrophysics Data System (ADS)
Koenig, A.; Rehg, T.; Rasshofer, R.
2015-11-01
Driver states such as fatigue, stress, aggression, distraction, or even medical emergencies continue to lead to severe driving mistakes and promote accidents. A pathway towards improving driver state assessment can be found in psycho-physiological measures that directly quantify the driver's state from physiological recordings. Although heart rate is a well-established physiological variable that reflects cognitive stress, obtaining heart rate contactlessly and reliably is a challenging task in an automotive environment. Our aim was to investigate how sensor fusion of two automotive-grade sensors would influence the accuracy of automatic classification of cognitive stress levels. We induced cognitive stress in subjects and estimated stress levels from their heart rate signals, acquired from automotive-ready ECG sensors. Using signal quality indices and Kalman filters, we were able to decrease the root mean squared error (RMSE) of heart rate recordings by 10 beats per minute. We then trained a neural network to classify the cognitive workload state of subjects from heart rate and compared classification performance for ground truth, the individual sensors, and the fused heart rate signal. Fusing the signals yielded 5% higher correct classification than the individual sensors, only 4% below the maximum classification accuracy attainable from ground truth. These results are a first step towards real-world applications of psycho-physiological measurements in vehicle settings. Future implementations of driver state modeling will be able to draw from a larger pool of data sources, such as additional physiological values or vehicle-related data, which can be expected to drive classification to significantly higher values.
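The benefit of fusing two noisy heart rate sensors can be illustrated with inverse-variance weighting, the static special case of the Kalman update. The signal and noise levels below are synthetic assumptions; the paper's actual pipeline additionally uses signal quality indices:

```python
import math
import random

random.seed(7)
true_hr = [70.0 + 10.0 * math.sin(t / 20.0) for t in range(600)]  # slowly varying HR

# two noisy sensors with different (known) noise variances, in bpm^2
var_a, var_b = 25.0, 49.0
sensor_a = [x + random.gauss(0, math.sqrt(var_a)) for x in true_hr]
sensor_b = [x + random.gauss(0, math.sqrt(var_b)) for x in true_hr]

# inverse-variance weighting: the optimal static fusion of two unbiased estimates
w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
fused = [w_a * a + (1.0 - w_a) * b for a, b in zip(sensor_a, sensor_b)]

def rmse(est):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, true_hr)) / len(true_hr))

rmse_a, rmse_b, rmse_f = rmse(sensor_a), rmse(sensor_b), rmse(fused)
```

The fused estimate has lower RMSE than either sensor alone, which is what makes the downstream workload classifier more accurate.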
Austin, Samuel H.; Nelms, David L.
2017-01-01
Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
Meltzer, H Y; Matsubara, S; Lee, J C
1989-10-01
The pKi values of 13 reference typical and 7 reference atypical antipsychotic drugs (APDs) for rat striatal dopamine D-1 and D-2 receptor binding sites and cortical serotonin (5-HT2) receptor binding sites were determined. The atypical antipsychotics had significantly lower pKi values for the D-2 but not 5-HT2 binding sites. There was a trend for a lower pKi value for the D-1 binding site for the atypical APD. The 5-HT2 and D-1 pKi values were correlated for the typical APD whereas the 5-HT2 and D-2 pKi values were correlated for the atypical APD. A stepwise discriminant function analysis to determine the independent contribution of each pKi value for a given binding site to the classification as a typical or atypical APD entered the D-2 pKi value first, followed by the 5-HT2 pKi value. The D-1 pKi value was not entered. A discriminant function analysis correctly classified 19 of 20 of these compounds plus 14 of 17 additional test compounds as typical or atypical APD for an overall correct classification rate of 89.2%. The major contributors to the discriminant function were the D-2 and 5-HT2 pKi values. A cluster analysis based only on the 5-HT2/D2 ratio grouped 15 of 17 atypical + one typical APD in one cluster and 19 of 20 typical + two atypical APDs in a second cluster, for an overall correct classification rate of 91.9%. When the stepwise discriminant function was repeated for all 37 compounds, only the D-2 and 5-HT2 pKi values were entered into the discriminant function.(ABSTRACT TRUNCATED AT 250 WORDS)
Classification of weld defect based on information fusion technology for radiographic testing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying
2016-03-01
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
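Dempster's rule of combination, the core of Dempster-Shafer evidence fusion, can be sketched directly. The defect classes and the feature-level mass assignments below are invented for illustration; they are not the paper's mass function:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements
    using Dempster's rule, normalizing out the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# hypothetical evidence from two defect features over the classes
# crack (C), pore (P), inclusion (I); theta is total ignorance
C, P = frozenset("C"), frozenset("P")
theta = frozenset("CPI")
m_feature1 = {C: 0.6, frozenset("CP"): 0.3, theta: 0.1}
m_feature2 = {C: 0.5, P: 0.3, theta: 0.2}
m = dempster_combine(m_feature1, m_feature2)
```

After combining the two feature-level mass functions, most of the mass concentrates on the singleton "crack" hypothesis, showing how fusing weak individual features sharpens the classification.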
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (an outer factor) and the number of its parallel channels (an inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets were used for the experiments: three classes of wine derived from different cultivars, and five classes of green tea derived from five different provinces of China. In the former case the results showed that the average correct classification rate increased as more principal components were included in the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6-8 channels of the model, with principal component feature vectors capturing at least 90% cumulative variance, are adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
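Selecting principal components by cumulative explained variance is straightforward to sketch. The eigenvalue spectrum below is hypothetical, standing in for the covariance eigenvalues of an e-nose sensor array:

```python
def components_for_variance(eigenvalues, target=0.90):
    """Smallest number of leading principal components whose cumulative
    explained-variance ratio reaches `target`."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cum += ev / total
        if cum >= target:
            return k
    return len(eigenvalues)

# hypothetical eigenvalue spectrum of a sensor-array covariance matrix
eigs = [5.2, 2.1, 1.0, 0.5, 0.3, 0.2, 0.1, 0.1]
k90 = components_for_variance(eigs, 0.90)
```

With this spectrum, four components already carry 90% of the variance, so the remaining dimensions can be dropped before feeding the olfactory model.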
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-10-01
To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-01-01
Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675
Benchmark data on the separability among crops in the southern San Joaquin Valley of California
NASA Technical Reports Server (NTRS)
Morse, A.; Card, D. H.
1984-01-01
Landsat MSS data were input to a discriminant analysis of 21 crops on each of eight dates in 1979 using a total of 4,142 fields in southern Fresno County, California. The 21 crops, which together account for over 70 percent of the agricultural acreage in the southern San Joaquin Valley, were analyzed to quantify the spectral separability, defined as omission error, between all pairs of crops. On each date the fields were segregated into six groups based on the mean value of the MSS7/MSS5 ratio, which is correlated with green biomass. Discriminant analysis was run on each group on each date. The resulting contingency tables offer information that can be profitably used in conjunction with crop calendars to pick the best dates for a classification. The tables show expected percent correct classification and error rates for all the crops. The patterns in the contingency tables show that the percent correct classification for crops generally increases with the amount of greenness in the fields being classified. However, there are exceptions to this general rule, notably grain.
VizieR Online Data Catalog: Diffuse ionized gas database DIGEDA (Flores-Fajardo+ 2009)
NASA Astrophysics Data System (ADS)
Flores-Fajardo, N.; Morisset, C.; Binette, L.
2009-09-01
DIGEDA is a comprehensive database comprising 1061 DIG and HII region spectroscopic observations of 29 different galaxies (25 spiral galaxies and 4 irregulars) from 18 bibliographic references. This survey contains galaxies with significant spread in star formation rates, Halpha luminosities, distances, disk inclinations, slit positions and slit orientations. The 1061 observations obtained from these references were extracted by digitization of published figures or tables. The data were subsequently normalized and incorporated in DIGEDA. This resulted in a table of 26 columns containing 1061 data lines or records (DIGEDA.dat file). We have not performed any correction for reddening by dust or for the presence of underlying absorption lines, although we did use the reddening-corrected ratios when these were made available by the authors. Missing entries are represented by (-1) in the corresponding data field. In DIGEDA the observed areas are classified into three possible emission region types: HII regions, transition zones or DIG. When this classification was not reported by the authors (no matter the criterion), we introduce our own classification taking into account the value of |z| as described in the paper. (4 data files).
Johnson, LeeAnn K; Brown, Mary B; Carruthers, Ethan A; Ferguson, John A; Dombek, Priscilla E; Sadowsky, Michael J
2004-08-01
A horizontal, fluorophore-enhanced, repetitive extragenic palindromic-PCR (rep-PCR) DNA fingerprinting technique (HFERP) was developed and evaluated as a means to differentiate human from animal sources of Escherichia coli. Box A1R primers and PCR were used to generate 2,466 rep-PCR and 1,531 HFERP DNA fingerprints from E. coli strains isolated from fecal material from known human and 12 animal sources: dogs, cats, horses, deer, geese, ducks, chickens, turkeys, cows, pigs, goats, and sheep. HFERP DNA fingerprinting reduced within-gel grouping of DNA fingerprints and improved alignment of DNA fingerprints between gels, relative to that achieved using rep-PCR DNA fingerprinting. Jackknife analysis of the complete rep-PCR DNA fingerprint library, done using Pearson's product-moment correlation coefficient, indicated that animal and human isolates were assigned to the correct source groups with an 82.2% average rate of correct classification. However, when only unique isolates were examined (isolates from a single animal having a unique DNA fingerprint), Jackknife analysis showed that isolates were assigned to the correct source groups with a 60.5% average rate of correct classification. The percentages of correctly classified isolates were about 15 and 17% greater for rep-PCR and HFERP, respectively, when analyses were done using the curve-based Pearson's product-moment correlation coefficient rather than the band-based Jaccard algorithm. Rarefaction analysis indicated that, despite the relatively large size of the known-source database, genetic diversity in E. coli is very great and most likely accounts for our inability to correctly classify many environmental E. coli isolates. Our data indicate that removal of duplicate genotypes within DNA fingerprint libraries, increased database size, proper methods of statistical analysis, and correct alignment of band data within and between gels improve the accuracy of microbial source tracking methods.
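The difference between band-based (Jaccard) and curve-based (Pearson) comparison of DNA fingerprints can be seen on two invented lane profiles that share the same bands but differ in band intensity:

```python
import math

def jaccard_bands(a, b, thresh=0.1):
    """Band-based similarity: binarize peak intensities at a threshold,
    then compute |A intersect B| / |A union B| over band positions."""
    A = {i for i, v in enumerate(a) if v > thresh}
    B = {i for i, v in enumerate(b) if v > thresh}
    return len(A & B) / len(A | B)

def pearson_curves(a, b):
    """Curve-based similarity: product-moment correlation of full profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# two hypothetical lane profiles: identical band positions, different intensities
lane1 = [0.9, 0.0, 0.5, 0.0, 0.8, 0.05, 0.3, 0.0]
lane2 = [0.3, 0.0, 0.9, 0.05, 0.4, 0.0, 0.8, 0.0]
j = jaccard_bands(lane1, lane2)
r = pearson_curves(lane1, lane2)
```

The band-based measure calls the lanes identical, while the curve-based correlation also weighs intensity and rates them only moderately similar, which is one reason the two algorithms yield different classification rates.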
Weight-elimination neural networks applied to coronary surgery mortality prediction.
Ennett, Colleen M; Frize, Monique
2003-06-01
The objective was to assess the effectiveness of the weight-elimination cost function in improving classification performance of artificial neural networks (ANNs) and to observe how changing the a priori distribution of the training set affects network performance. Backpropagation feedforward ANNs with and without weight-elimination estimated mortality for coronary artery surgery patients. The ANNs were trained and tested on cases with 32 input variables describing the patient's medical history; the output variable was in-hospital mortality (mortality rates: training 3.7%, test 3.8%). Artificial training sets with mortality rates of 20%, 50%, and 80% were created to observe the impact of training with a higher-than-normal prevalence. When the results were averaged, weight-elimination networks achieved higher sensitivity rates than those without weight-elimination. Networks trained on higher-than-normal prevalence achieved higher sensitivity rates at the cost of lower specificity and correct classification. The weight-elimination cost function can improve the classification performance when the network is trained with a higher-than-normal prevalence. A network trained with a moderately high artificial mortality rate (artificial mortality rate of 20%) can improve the sensitivity of the model without significantly affecting other aspects of the model's performance. The ANN mortality model achieved comparable performance as additive and statistical models for coronary surgery mortality estimation in the literature.
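The weight-elimination cost function adds a penalty of the form lam * sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2) to the network error; the scale w0 and coefficient lam below are arbitrary illustrative values, not those of the study:

```python
def weight_elimination_penalty(weights, w0=1.0, lam=0.05):
    """Weight-elimination regularizer added to the network error term.
    Near zero it behaves like quadratic weight decay; for |w| >> w0 the
    per-weight cost saturates at lam, so large useful weights are not
    penalized without bound while near-zero weights are pushed out."""
    return lam * sum((w / w0) ** 2 / (1.0 + (w / w0) ** 2) for w in weights)

small = weight_elimination_penalty([0.1] * 10)  # ten tiny weights
large = weight_elimination_penalty([10.0])      # one large weight
```

Because each weight's cost saturates, the network can keep a few strong connections cheaply while the many small ones are driven toward zero and effectively eliminated.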
Milburn, Trelani F; Lonigan, Christopher J; Allan, Darcey M; Phillips, Beth M
2017-04-01
To investigate approaches for identifying young children who may be at risk for later reading-related learning disabilities, this study compared the use of four contemporary methods of indexing learning disability (LD) with older children (i.e., IQ-achievement discrepancy, low achievement, low growth, and dual-discrepancy) to determine risk status with a large sample of 1,011 preschoolers. These children were classified as at risk or not using each method across three early-literacy skills (i.e., language, phonological awareness, print knowledge) and at three levels of severity (i.e., 5th, 10th, 25th percentiles). Chance-corrected affected-status agreement (CCASA) indicated poor agreement among methods with rates of agreement generally decreasing with greater levels of severity for both single- and two-measure classification, and agreement rates were lower for two-measure classification than for single-measure classification. These low rates of agreement between conventional methods of identifying children at risk for LD represent a significant impediment for identification and intervention for young children considered at risk.
Milburn, Trelani F.; Lonigan, Christopher J.; Allan, Darcey M.; Phillips, Beth M.
2017-01-01
To investigate approaches for identifying young children who may be at risk for later reading-related learning disabilities, this study compared the use of four contemporary methods of indexing learning disability (LD) with older children (i.e., IQ-achievement discrepancy, low achievement, low growth, and dual-discrepancy) to determine risk status with a large sample of 1,011 preschoolers. These children were classified as at risk or not using each method across three early-literacy skills (i.e., language, phonological awareness, print knowledge) and at three levels of severity (i.e., 5th, 10th, 25th percentiles). Chance-corrected affected-status agreement (CCASA) indicated poor agreement among methods with rates of agreement generally decreasing with greater levels of severity for both single- and two-measure classification, and agreement rates were lower for two-measure classification than for single-measure classification. These low rates of agreement between conventional methods of identifying children at risk for LD represent a significant impediment for identification and intervention for young children considered at risk. PMID:28670102
ERIC Educational Resources Information Center
Furey, William M.; Marcotte, Amanda M.; Hintze, John M.; Shackett, Caroline M.
2016-01-01
The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect…
Austin, Peter C; Lee, Douglas S
2011-01-01
Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to those obtained with conventional classification trees. Minor to modest improvements to sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity, but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than gains in performance observed in the data mining literature. PMID:22254181
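The boosting loop the abstract describes (upweighting misclassified subjects each iteration, then combining trees by a weighted majority vote) can be sketched with depth-one trees, i.e. decision stumps. This is a generic discrete AdaBoost illustration on toy data, not the authors' implementation.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best decision stump (feature, threshold, polarity) under
    sample weights w. Labels y are +1/-1."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def adaboost(X, y, rounds=10):
    """Boosted stumps: reweight mistakes each round, then combine
    the stumps by a weighted majority vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, j, t, pol = fit_stump(X, y, w)
        err = max(err, 1e-12)               # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)      # upweight misclassified subjects
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.where(score >= 0, 1, -1)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
ens = adaboost(X, y, rounds=5)
```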
Ruoff, Kaspar; Karoui, Romdhane; Dufour, Eric; Luginbühl, Werner; Bosset, Jacques-Olivier; Bogdanov, Stefan; Amado, Renato
2005-03-09
The potential of front-face fluorescence spectroscopy for the authentication of unifloral and polyfloral honey types (n = 57 samples) previously classified using traditional methods such as chemical, pollen, and sensory analysis was evaluated. Emission spectra were recorded between 280 and 480 nm (excit: 250 nm), 305 and 500 nm (excit: 290 nm), and 380 and 600 nm (excit: 373 nm) directly on honey samples. In addition, excitation spectra (290-440 nm) were recorded with the emission measured at 450 nm. A total of four different spectral data sets were considered for data analysis. After normalization of the spectra, chemometric evaluation of the spectral data was carried out using principal component analysis (PCA) and linear discriminant analysis (LDA). The rate of correct classification ranged from 36% to 100% by using single spectral data sets (250, 290, 373, 450 nm) and from 73% to 100% by combining these four data sets. For alpine polyfloral honey and the unifloral varieties investigated (acacia, alpine rose, honeydew, chestnut, and rape), correct classification ranged from 96% to 100%. This preliminary study indicates that front-face fluorescence spectroscopy is a promising technique for the authentication of the botanical origin of honey. It is nondestructive, rapid, easy to use, and inexpensive. The use of additional excitation wavelengths between 320 and 440 nm could increase the correct classification of the less characteristic fluorescent varieties.
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that uses not the daily gauge data of the pixel to be corrected but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-06-16
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that uses not the daily gauge data of the pixel to be corrected but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
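The Quantile Mapping correction named above can be sketched as empirical CDF matching: each satellite value is located on the satellite sample's CDF and replaced by the gauge value at the same quantile. The percentile grid and the synthetic gamma-distributed rainfall below are illustrative assumptions, not the article's data.

```python
import numpy as np

def quantile_map(spp, spp_ref, gauge_ref):
    """Correct satellite precipitation values by quantile mapping.

    Each raw value is located on the empirical CDF of the reference
    satellite sample and replaced by the gauge value occupying the
    same quantile (linear interpolation between empirical quantiles).
    """
    q = np.linspace(0, 1, 101)
    spp_q = np.quantile(spp_ref, q)      # satellite quantiles
    gauge_q = np.quantile(gauge_ref, q)  # gauge quantiles
    cdf = np.interp(spp, spp_q, q)       # quantile position of each value
    return np.interp(cdf, q, gauge_q)    # map onto the gauge CDF

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, 2000)        # "true" rainfall at the gauges
spp = gauge * 1.5 + 2.0                  # satellite estimate with a monotone bias
corrected = quantile_map(spp, spp, gauge)
```

Because the bias here is monotone, the mapped values recover the gauge distribution essentially exactly; with real data the reference and corrected samples differ, and only the distributions are matched.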
Remembering Left–Right Orientation of Pictures
Bartlett, James C.; Gernsbacher, Morton Ann; Till, Robert E.
2015-01-01
In a study of recognition memory for pictures, we observed an asymmetry in classifying test items as “same” versus “different” in left–right orientation: Identical copies of previously viewed items were classified more accurately than left–right reversals of those items. Response bias could not explain this asymmetry, and, moreover, correct “same” and “different” classifications were independently manipulable: Whereas repetition of input pictures (one vs. two presentations) affected primarily correct “same” classifications, retention interval (3 hr vs. 1 week) affected primarily correct “different” classifications. In addition, repetition but not retention interval affected judgments that previously seen pictures (both identical and reversed) were “old”. These and additional findings supported a dual-process hypothesis that links “same” classifications to high familiarity, and “different” classifications to conscious sampling of images of previously viewed pictures. PMID:2949051
Stoeger, Angela S.; Zeppelzauer, Matthias; Baotic, Anton
2015-01-01
Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70 % correct classification to four age groups (infants, calves, juveniles, adults) and 95 % correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable classification results were obtained by fully automated classification of rumbles by high-dimensional features that represent the entire spectral envelope, such as MFCC (75 % correct classification) and GFCC (74 % correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially automatically estimate the demography of an acoustically monitored elephant group or population. PMID:25821348
Tan, Jin; Li, Rong; Jiang, Zi-Tao
2015-10-01
We report an application of data fusion for chemometric classification of 135 canned samples of Chinese lager beers by manufacturer based on the combination of fluorescence, UV and visible spectroscopies. Right-angle synchronous fluorescence spectra (SFS) at three wavelength differences Δλ=30, 60 and 80 nm and visible spectra in the range 380-700 nm of undiluted beers were recorded. UV spectra in the range 240-400 nm of diluted beers were measured. A classification model was built using principal component analysis (PCA) and linear discriminant analysis (LDA). LDA with cross-validation showed that the data fusion could achieve 78.5-86.7% correct classification (sensitivity), while those rates using individual spectroscopies ranged from 42.2% to 70.4%. The results demonstrated that the fluorescence, UV and visible spectroscopies complemented each other, yielding a synergistic effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
Superiority of artificial neural networks for a genetic classification procedure.
Sant'Anna, I C; Tomaz, R S; Silva, G N; Nascimento, M; Bhering, L L; Cruz, C D
2015-08-19
The correct classification of individuals is extremely important for the preservation of genetic variability and for maximization of yield in breeding programs using phenotypic traits and genetic markers. The Fisher and Anderson discriminant functions are commonly used multivariate statistical techniques for these situations, which allow for the allocation of an initially unknown individual to predefined groups. However, for higher levels of similarity, such as those found in backcrossed populations, these methods have proven to be inefficient. Recently, much research has been devoted to developing a new paradigm of computing known as artificial neural networks (ANNs), which can be used to solve many statistical problems, including classification problems. The aim of this study was to evaluate the feasibility of ANNs as an evaluation technique of genetic diversity by comparing their performance with that of traditional methods. The discriminant functions were equally ineffective in discriminating the populations, with error rates of 23-82%, thereby preventing the correct discrimination of individuals between populations. The ANN was effective in classifying populations with low and high differentiation, such as those derived from a genetic design established from backcrosses, even in cases of low differentiation of the data sets. The ANN appears to be a promising technique to solve classification problems, since the number of individuals classified incorrectly by the ANN was always lower than that of the discriminant functions. We envisage the potential relevant application of this improved procedure in the genomic classification of markers to distinguish between breeds and accessions.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3∼5 pattern classes considering the trade-off between time consumption and classification rate. PMID:22736979
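Choosing the number of principal components to retain at least 90% cumulative variance, as the abstract recommends, can be sketched as follows; the synthetic data and the SVD route to the component variances are illustrative assumptions.

```python
import numpy as np

def n_components_for(X, target=0.90):
    """Smallest number of principal components whose cumulative
    variance share reaches the target (90% in the abstract)."""
    Xc = X - X.mean(axis=0)
    # Singular values give component variances: var_i = s_i^2 / (n - 1)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s**2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, target) + 1)

# Synthetic sensor array: two strong directions, three weak ones
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) * np.array([10.0, 8.0, 1.0, 0.5, 0.1])
k = n_components_for(X, 0.90)  # two components carry ~99% of variance
```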
Application of Wavelet Transform for PDZ Domain Classification
Daqrouq, Khaled; Alhmouz, Rami; Balamesh, Ahmed; Memic, Adnan
2015-01-01
PDZ domains have been identified as part of an array of signaling proteins that are often unrelated, except for the well-conserved structural PDZ domain they contain. These domains have been linked to many disease processes including common Avian influenza, as well as very rare conditions such as Fraser and Usher syndromes. Historically, based on the interactions and the nature of bonds they form, PDZ domains have most often been classified into one of three classes (class I, class II, and others, class III), a classification directly dependent on their binding partner. In this study, we report on three unique feature extraction approaches based on the bigram and trigram occurrence and existence rearrangements within the domain's primary amino acid sequences in assisting PDZ domain classification. Wavelet packet transform (WPT) and Shannon entropy denoted by wavelet entropy (WE) feature extraction methods were proposed. Using 115 unique human and mouse PDZ domains, the existence rearrangement approach yielded a high recognition rate (78.34%), which outperformed our occurrence rearrangements based method. With a validation technique, the recognition rate was 81.41%. The method reported for PDZ domain classification from primary sequences proved to be an encouraging approach for obtaining consistent classification results. We anticipate that by increasing the database size, we can further improve feature extraction and correct classification. PMID:25860375
Gan, Heng-Hui; Soukoulis, Christos; Fisk, Ian
2014-03-01
In the present work, we have evaluated for the first time the feasibility of APCI-MS volatile compound fingerprinting in conjunction with chemometrics (PLS-DA) as a new strategy for rapid and non-destructive food classification. For this purpose 202 clarified monovarietal juices extracted from apples differing in their botanical and geographical origin were used for evaluation of the performance of APCI-MS as a classification tool. For an independent test set, PLS-DA analyses of pre-treated spectral data gave 100% and 94.2% correct classification rates for classification by cultivar and geographical origin, respectively. Moreover, PLS-DA analysis of APCI-MS in conjunction with GC-MS data revealed that masses within the spectral APCI-MS data set were related to parent ions or fragments of alkyl esters, carbonyl compounds (hexanal, trans-2-hexenal) and alcohols (1-hexanol, 1-butanol, cis-3-hexenol) and had significant discriminating power both in terms of cultivar and geographical origin. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images plays a significant role in agricultural management and estimation. Because features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of the GF-1 high-resolution satellite of China. In the experiment, through training and testing the CNN with the MATLAB deep learning toolbox, the crop classification reached a correct classification rate of 99.66% after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the applications of CNN provide a reference value for the field of remote sensing in PA.
Ovarian Cancer Incidence Corrected for Oophorectomy
Baldwin, Lauren A.; Chen, Quan; Tucker, Thomas C.; White, Connie G.; Ore, Robert N.; Huang, Bin
2017-01-01
Current reported incidence rates for ovarian cancer may significantly underestimate the true rate because of the inclusion of women in the calculations who are not at risk for ovarian cancer due to prior benign salpingo-oophorectomy (SO). We have considered prior SO to more realistically estimate risk for ovarian cancer. Kentucky Health Claims Data, International Classification of Diseases, Ninth Revision (ICD-9) codes, Current Procedural Terminology (CPT) codes, and Kentucky Behavioral Risk Factor Surveillance System (BRFSS) Data were used to identify women who have undergone SO in Kentucky, and these women were removed from the at-risk pool in order to re-assess incidence rates to more accurately represent ovarian cancer risk. The protective effect of SO on the population was determined on an annual basis for ages 5–80+ using data from the years 2009–2013. The corrected age-adjusted rates of ovarian cancer that considered SO ranged from 33% to 67% higher than age-adjusted rates from the standard population. Correction of incidence rates for ovarian cancer by accounting for women with prior SO gives a better understanding of risk for this disease faced by women. The rates of ovarian cancer were substantially higher when SO was taken into consideration than estimates from the standard population. PMID:28368298
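The denominator correction described above (removing women with prior SO from the at-risk pool) amounts to a simple rate recomputation. The numbers below are hypothetical, chosen only to show how shrinking the denominator raises the rate; they are not the Kentucky data.

```python
def corrected_incidence(cases, population, prior_so, per=100_000):
    """Incidence per 100,000 with and without removing women who had
    prior salpingo-oophorectomy (SO) from the at-risk denominator."""
    standard = cases / population * per
    corrected = cases / (population - prior_so) * per
    return standard, corrected

# Hypothetical example: removing 25% of the denominator raises the
# rate by one third, within the 33-67% range the abstract reports.
std, corr = corrected_incidence(cases=120, population=1_000_000,
                                prior_so=250_000)
```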
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig
2016-06-01
Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics-level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank)-to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Kosmala, Margaret; Lintott, Chris; Packer, Craig
2016-01-01
Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large‐scale camera‐trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics—level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal (fraction blank)—to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert‐verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post‐hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large‐scale monitoring of African wildlife. PMID:27111678
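The plurality aggregation and certainty metrics described in the two records above can be sketched as follows. The plurality vote, fraction support, and fraction blank follow directly from the text; the exact evenness formula is an assumption here (Pielou's evenness: Shannon entropy of the answer counts divided by its maximum), since the papers define their own metric.

```python
import math
from collections import Counter

def aggregate(classifications):
    """Plurality answer for one image plus three certainty metrics:
    evenness, fraction support, and fraction blank."""
    counts = Counter(classifications)
    n = len(classifications)
    answer, top = counts.most_common(1)[0]
    # Evenness (assumed Pielou's J): entropy / log(#distinct answers);
    # 0 when all volunteers agree, approaching 1 for an even split.
    if len(counts) > 1:
        h = -sum((c / n) * math.log(c / n) for c in counts.values())
        evenness = h / math.log(len(counts))
    else:
        evenness = 0.0
    fraction_support = top / n
    fraction_blank = counts.get("nothing here", 0) / n
    return answer, evenness, fraction_support, fraction_blank

votes = ["wildebeest"] * 24 + ["buffalo"] * 2 + ["nothing here"] * 1
answer, evenness, support, blank = aggregate(votes)
```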
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-01-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms and having open-ended classification boundaries such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), are found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance; algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.
Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-09-12
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms and having open-ended classification boundaries such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), are found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance; algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms.
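A minimal reading of the MMM idea in the two records above: store the minimum, maximum, and mean sensor responses of each training class; accept a sample only if it falls inside some class's min-max box, assigning it to the nearest class mean; otherwise reject it as an extraneous odor. The paper's exact decision rule may differ; this is an illustrative sketch with made-up odor classes.

```python
import numpy as np

def fit_mmm(X_by_class):
    """Store per-feature min, max, and mean for each training class."""
    return {c: (X.min(axis=0), X.max(axis=0), X.mean(axis=0))
            for c, X in X_by_class.items()}

def classify_mmm(model, x):
    """Accept x into the nearest-mean class whose [min, max] box
    contains it; otherwise reject it (false alarm reduction)."""
    candidates = [(np.linalg.norm(x - mean), c)
                  for c, (lo, hi, mean) in model.items()
                  if np.all(x >= lo) and np.all(x <= hi)]
    return min(candidates)[1] if candidates else "reject"

# Hypothetical two-sensor training responses for two odor classes
train = {
    "banana": np.array([[1.0, 2.0], [1.2, 2.2], [0.9, 1.9]]),
    "coffee": np.array([[5.0, 6.0], [5.2, 6.1], [4.9, 5.8]]),
}
model = fit_mmm(train)
```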
Adaptive sleep-wake discrimination for wearable devices.
Karlen, Walter; Floreano, Dario
2011-04-01
Sleep/wake classification systems that rely on physiological signals suffer from intersubject differences that make accurate classification with a single, subject-independent model difficult. To overcome the limitations of intersubject variability, we suggest a novel online adaptation technique that updates the sleep/wake classifier in real time. The objective of the present study was to evaluate the performance of a newly developed adaptive classification algorithm that was embedded on a wearable sleep/wake classification system called SleePic. The algorithm processed ECG and respiratory effort signals for the classification task and applied behavioral measurements (obtained from accelerometer and press-button data) for the automatic adaptation task. When trained as a subject-independent classifier algorithm, the SleePic device was only able to correctly classify 74.94 ± 6.76% of the human-rated sleep/wake data. By using the suggested automatic adaptation method, the mean classification accuracy could be significantly improved to 92.98 ± 3.19%. A subject-independent classifier based on activity data only showed a comparable accuracy of 90.44 ± 3.57%. We demonstrated that subject-independent models used for online sleep-wake classification can successfully be adapted to previously unseen subjects without the intervention of human experts or off-line calibration.
Contribution of non-negative matrix factorization to the classification of remote sensing images
NASA Astrophysics Data System (ADS)
Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.
2008-10-01
Remote sensing has become an unavoidable tool for better managing our environment, generally by realizing maps of land cover using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) using HRV SPOT images from the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of initial (i.e. non separated) images. These results show the contribution of NMF as an attractive pre-processing for classification of multispectral remote sensing imagery.
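The NMF step described above can be sketched with the standard Lee-Seung multiplicative updates, which keep both factors non-negative; the sparse-coding term the abstract combines with NMF is omitted here, and the data and dimensions are synthetic.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Non-negative matrix factorization V ≈ W @ H by Lee-Seung
    multiplicative updates. For pixel unmixing, rows of V are pixel
    spectra, H holds the k endmember spectra, and W holds the
    per-pixel abundances (the "separated" images)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Synthetic scene: 40 pixels, 6 bands, 2 pure endmembers
rng = np.random.default_rng(1)
H_true = rng.random((2, 6))
W_true = rng.random((40, 2))
V = W_true @ H_true
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```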
Das, Arpita; Bhattacharya, Mahua
2011-01-01
In the present work, the authors have developed a treatment planning system implementing genetic-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in the breast in digital mammograms. A complicated structure invites the problems of over-learning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used to search for effective input feature vectors, combined with an adaptive neuro-fuzzy model for final classification of the different boundaries of tumor masses. The study involves 200 digitized mammograms from the MIAS and other databases and has shown an 86% correct classification rate.
Acoustic target detection and classification using neural networks
NASA Technical Reports Server (NTRS)
Robertson, James A.; Conlon, Mark
1993-01-01
A neural network approach to the classification of acoustic emissions of ground vehicles and helicopters is demonstrated. Data collected during the Joint Acoustic Propagation Experiment, conducted in July 1991 at White Sands Missile Range, New Mexico, were used to train a classifier to distinguish between the spectra of a UH-1, M60, M1 and M114. An output node was also included that would recognize background (i.e. no-target) data. Analysis revealed specific hidden nodes responding to the features input into the classifier. Initial results using the neural network were encouraging, with high correct identification rates accompanied by high levels of confidence.
Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.
Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin
2017-04-01
As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only ways to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose the disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease than the other selected methods.
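A wrapper approach of the kind described can be sketched as a greedy stepwise (forward) search. Here a simple nearest-centroid classifier stands in for the SVM evaluator, and the synthetic data are hypothetical; this is a sketch of the search strategy, not the study's pipeline.

```python
import numpy as np

def centroid_accuracy(X, y, feats):
    """Training accuracy of a nearest-centroid classifier restricted
    to the feature subset `feats` (two classes, labels 0/1)."""
    Xs = X[:, feats]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

def greedy_forward(X, y):
    """Wrapper-style greedy stepwise search: repeatedly add the feature
    that most improves the evaluator, stopping when nothing helps."""
    remaining, chosen, best = list(range(X.shape[1])), [], 0.0
    while remaining:
        score, f = max((centroid_accuracy(X, y, chosen + [f]), f)
                       for f in remaining)
        if score <= best:
            break
        best, chosen = score, chosen + [f]
        remaining.remove(f)
    return chosen, best

rng = np.random.default_rng(0)
y = (rng.random(200) > 0.5).astype(int)
X = rng.normal(size=(200, 6))
X[:, 2] += 3.0 * y          # only feature 2 is informative
feats, acc = greedy_forward(X, y)
```

On this toy dataset the search picks the informative feature first, which is the behavior the wrapper/filter comparison in the abstract relies on.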
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; Worker Classification Survey; Correction ACTION: Notice; correction... titled, ``Worker Classification Survey,'' to the Office of Management and Budget for review and approval...
Classification of the Correct Quranic Letters Pronunciation of Male and Female Reciters
NASA Astrophysics Data System (ADS)
Khairuddin, Safiah; Ahmad, Salmiah; Embong, Abdul Halim; Nur Wahidah Nik Hashim, Nik; Altamas, Tareq M. K.; Nuratikah Syd Badaruddin, Syarifah; Shahbudin Hassan, Surul
2017-11-01
Recitation of the Holy Quran with the correct Tajweed is essential for every Muslim. Islam has encouraged Quranic education since early age as the recitation of the Quran correctly will represent the correct meaning of the words of Allah. It is important to recite the Quranic verses according to its characteristics (sifaat) and from its point of articulations (makhraj). This paper presents the identification and classification analysis of Quranic letters pronunciation for both male and female reciters, to obtain the unique representation of each letter by male as compared to female expert reciters. Linear Discriminant Analysis (LDA) was used as the classifier to classify the data with Formants and Power Spectral Density (PSD) as the acoustic features. The result shows that linear classifier of PSD with band 1 and band 2 power spectral combinations gives a high percentage of classification accuracy for most of the Quranic letters. It is also shown that the pronunciation by male reciters gives better result in the classification of the Quranic letters.
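A minimal two-class Fisher LDA, the kind of linear classifier used here, might look like the following sketch; the "formant-like" feature values are invented for illustration and are not the study's acoustic data.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w maximizes between-class over
    within-class scatter; threshold c sits midway between the
    projected class means."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    c = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, c

# hypothetical 2-D formant features (Hz) for two letter classes
rng = np.random.default_rng(0)
X0 = rng.normal([700, 1200], 60, size=(80, 2))
X1 = rng.normal([900, 1500], 60, size=(80, 2))
w, c = fisher_lda(X0, X1)
acc = ((X0 @ w < c).mean() + (X1 @ w > c).mean()) / 2
```

Projecting onto w and thresholding at c is all that is needed at classification time, which is why LDA is a common baseline for small acoustic feature sets.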
NASA Astrophysics Data System (ADS)
O'Carroll, Jack P. J.; Kennedy, Robert; Ren, Lei; Nash, Stephen; Hartnett, Michael; Brown, Colin
2017-10-01
The INFOMAR (Integrated Mapping For the Sustainable Development of Ireland's Marine Resource) initiative has acoustically mapped and classified a significant proportion of Ireland's Exclusive Economic Zone (EEZ), and is likely to be an important tool in Ireland's efforts to meet the criteria of the MSFD. In this study, open source and relic data were used in combination with new grab survey data to model EUNIS level 4 biotope distributions in Galway Bay, Ireland. The correct prediction rates of two artificial neural networks (ANNs) were compared to assess the effectiveness of acoustic sediment classifications versus sediments that were visually classified by an expert in the field as predictor variables. To test for autocorrelation between predictor variables the RELATE routine with Spearman rank correlation method was used. Optimal models were derived by iteratively removing predictor variables and comparing the correct prediction rates of each model. The models with the highest correct prediction rates were chosen as optimal. The optimal models each used a combination of salinity (binary; 0 = polyhaline and 1 = euhaline), proximity to reef (binary; 0 = within 50 m and 1 = outside 50 m), depth (continuous; metres) and a sediment descriptor (acoustic or observed) as predictor variables. As the status of benthic habitats is required to be assessed under the MSFD the Ecological Status (ES) of the subtidal sediments of Galway Bay was also assessed using the Infaunal Quality Index. The ANN that used observed sediment classes as predictor variables could correctly predict the distribution of biotopes 67% of the time, compared to 63% for the ANN using acoustic sediment classes. Acoustic sediment ANN predictions were affected by local sediment heterogeneity, and the lack of a mixed sediment class. The all-round poor performance of ANNs is likely to be a result of the temporally variable and sparsely distributed data within the study area.
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555
Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
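The SMOTE oversampling step can be sketched as follows: each synthetic sample is interpolated between a minority-class sample and one of its nearest minority-class neighbours. The L*a*b* values below are hypothetical, and this is the textbook interpolation rule rather than the study's exact implementation.

```python
import numpy as np

def smote(X_minority, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: each synthetic sample lies on the segment
    between a minority sample and one of its k nearest minority
    neighbours, at a random position."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
        j = rng.choice(nbrs)
        t = rng.random()
        out.append(X_minority[i] + t * (X_minority[j] - X_minority[i]))
    return np.array(out)

# hypothetical under-represented tongue-colour class in L*a*b* space
rng = np.random.default_rng(1)
lab = rng.normal([55.0, 18.0, 12.0], 2.0, size=(20, 3))
synthetic = smote(lab, 40)
```

Because each synthetic point is a convex combination of two real samples, the augmented class stays inside the original colour cluster, which is what makes SMOTE safer than naive duplication for balancing classes.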
Large-scale optimization-based classification models in medicine and biology.
Lee, Eva K
2007-06-01
We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%.
This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
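The reserved-judgment idea can be illustrated with a minimal abstaining classifier: assign the argmax class only when its posterior clears a confidence threshold, and otherwise defer the decision. The threshold value is an assumption for illustration, not the paper's optimized constraint.

```python
import numpy as np

def classify_with_reserve(posteriors, threshold=0.75):
    """Return the argmax class index, or -1 (reserved judgment) when
    the largest posterior falls below `threshold`."""
    k = int(np.argmax(posteriors))
    return k if posteriors[k] >= threshold else -1

p_confident = np.array([0.05, 0.90, 0.05])   # clear winner: classify
p_ambiguous = np.array([0.40, 0.35, 0.25])   # no clear winner: defer
```

Points that land in the reserved region would then be passed to a later classification stage, mirroring the multi-stage capability described in the abstract.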
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides) which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
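Under a simple binomial model, the probability that a cut-off rule classifies a community correctly can be computed directly. The sensitivity, prevalence, and cut-off below are assumed values for illustration, not the paper's calibrated parameters or its De Vlas-based sensitivity model.

```python
from math import comb

def p_at_most(n, p, c):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def correct_low_classification(n_slides, prevalence, sensitivity, cutoff):
    """Probability that a truly low-prevalence community is classified
    'low', i.e. at most `cutoff` positive slides out of `n_slides`,
    when each slide tests positive with prob prevalence * sensitivity."""
    return p_at_most(n_slides, prevalence * sensitivity, cutoff)

# hypothetical numbers: 25 slides, 5% true prevalence, 70% sensitivity
p_correct = correct_low_classification(25, 0.05, 0.70, cutoff=2)
```

Sweeping the cut-off and prevalence through such a calculation is how one would compare the expected probability of correct classification of competing decision rules.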
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns in automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and the energy distribution of its boundary at multiple scales was explored for shape extraction. MSED offers not only the main energy components, which represent primary shape information at lower scales, but also subordinate components, which represent local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet transform and classification by a BP neural network. In the resampling step, 256 boundary pixels are resampled from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method for selecting a start point through maximal expectation was given, which overcomes the inconvenience of traditional methods and yields rotation invariance. For clearly normal and seriously abnormal citrus, the classification rate exceeds 91.2%; the global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
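The multi-scale energy idea can be sketched with a Haar decomposition of a resampled boundary signature, reporting detail energy per scale. This is a simplification of the wavelet step described above; the two signatures are synthetic, standing in for a round fruit and a lumpy one.

```python
import numpy as np

def haar_energies(signal, levels=4):
    """Energy of Haar detail coefficients at each scale of `signal`
    (length must be divisible by 2**levels)."""
    a = np.asarray(signal, float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation
        energies.append(float(np.sum(d**2)))
    return energies

# boundary signatures resampled to 256 points
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
round_sig = 1.0 + 0.02 * np.sin(2 * t)                       # smooth
lumpy_sig = 1.0 + 0.02 * np.sin(2 * t) + 0.05 * np.sin(16 * t)  # bumpy
e_round = haar_energies(round_sig)
e_lumpy = haar_energies(lumpy_sig)
```

The per-scale energy vector separates the two shapes: local bumps show up as extra energy in the detail scales, which is exactly the kind of feature a downstream classifier can use.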
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
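A point-by-point Gaussian maximum-likelihood classifier of the kind compared here reduces to choosing the class with the highest Gaussian log-likelihood. The two-band class statistics below are invented for illustration and are not Image-100 training statistics.

```python
import numpy as np

def gaussian_ml_classify(x, stats):
    """Assign pixel x to the class with the highest Gaussian
    log-likelihood; `stats` maps class -> (mean, covariance)."""
    best, best_ll = None, -np.inf
    for cls, (mu, cov) in stats.items():
        diff = x - mu
        # log-likelihood up to a constant: Mahalanobis term + log|cov|
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff)
                     + np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best, best_ll = cls, ll
    return best

stats = {
    "wheat": (np.array([80.0, 40.0]), np.array([[25.0, 0.0], [0.0, 25.0]])),
    "soil":  (np.array([50.0, 20.0]), np.array([[25.0, 0.0], [0.0, 25.0]])),
}
```

Creating homogeneous spectral subclasses, as the abstract recommends, amounts to giving each crop several (mean, covariance) entries in `stats` instead of one.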
Queering the Catalog: Queer Theory and the Politics of Correction
ERIC Educational Resources Information Center
Drabinski, Emily
2013-01-01
Critiques of hegemonic library classification structures and controlled vocabularies have a rich history in information studies. This project has pointed out the trouble with classification and cataloging decisions that are framed as objective and neutral but are always ideological and worked to correct bias in library structures. Viewing…
Gender classification from video under challenging operating conditions
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Dong, Guozhu
2014-06-01
The literature is abundant with papers on gender classification research. However, the majority of such research is based on the assumption that there is enough resolution that the subject's face can be resolved; hence the majority of the research is actually in the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions—different seasonal conditions, different clothing, etc.—and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database—the SWAG One dataset—using data mining techniques, traditional classifiers (e.g. Naïve Bayes, Support Vector Machine), and traditional (Canny edge detection) and non-traditional (height/width ratios) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty is the exploitation of frame differentials.
Age group classification and gender detection based on forced expiratory spirometry.
Cosgun, Sema; Ozbek, I Yucel
2015-08-01
This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a Support Vector Machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated by using the trained GMM (or SVM) model. Experiments have been evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M
2005-01-01
The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct source category by both methods. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability.
The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.
NASA Astrophysics Data System (ADS)
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: the Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects (15 normal subjects, 15 AMD patients, and 15 DME patients); and a clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing (168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively). For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
Zu, Qin; Zhao, Chun-Jiang; Deng, Wei; Wang, Xiu
2013-05-01
The automatic identification of weeds forms the basis for precision spraying of infested crops. The canopy spectral reflectance within the 350-2500 nm band of two strains of cabbage and five kinds of weeds (barnyard grass, setaria, crabgrass, goosegrass and pigweed) was acquired with an ASD spectrometer. According to the spectral curve characteristics, the data in different bands were compressed at different levels to improve operating efficiency. The spectra were first denoised with the multiple scattering correction (MSC) method and the Savitzky-Golay (SG) convolution smoothing method under different orders and parameter settings; the model was then built using principal component analysis (PCA) to extract principal components; finally, all kinds of plants were classified using the soft independent modeling of class analogy (SIMCA) method and the classification results were compared. The test results indicate that after pretreatment of the spectral data with a combination of MSC and SG (3rd order, 5th degree polynomial, 21 smoothing points), and with the top 10 principal components extracted by PCA as classification model input variables, a 100% correct classification rate was achieved; the method is able to identify cabbage and several kinds of common weeds quickly and nondestructively.
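The MSC pretreatment can be sketched as regressing each spectrum on the mean spectrum and removing the fitted offset and gain. This is a minimal version of multiplicative/multiple scatter correction under the usual linear model; the spectra below are synthetic.

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: fit each spectrum as
    x = a + b * ref (ref = mean spectrum), return (x - a) / b."""
    X = np.asarray(spectra, float)
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, 1)   # slope b, intercept a
        out[i] = (x - a) / b
    return out

# hypothetical reflectance spectra sharing one shape but distorted by
# per-sample gain and offset (scatter artifacts)
base = np.sin(np.linspace(0, 3, 100)) + 2.0
spectra = np.array([g * base + o
                    for g, o in [(1.2, 0.3), (0.8, -0.2), (1.0, 0.1)]])
corrected = msc(spectra)
```

After correction, spectra that differ only by scatter-induced gain and offset collapse onto a common curve, so PCA and SIMCA see chemical variation rather than scatter.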
Using phase for radar scatterer classification
NASA Astrophysics Data System (ADS)
Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.
2017-04-01
Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
Characteristics of Forests in Western Sayani Mountains, Siberia from SAR Data
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Sun, Guoqing; Kharuk, V. I.; Kovacs, Katalin
1998-01-01
This paper investigated the possibility of using spaceborne radar data to map forest types and logging in the mountainous Western Sayani area in Siberia. L and C band HH, HV, and VV polarized images from the Shuttle Imaging Radar-C instrument were used in the study. Techniques to reduce topographic effects in the radar images were investigated. These included radiometric correction using illumination angle inferred from a digital elevation model, and reducing apparent effects of topography through band ratios. Forest classification was performed after terrain correction utilizing typical supervised techniques and principal component analyses. An ancillary data set of local elevations was also used to improve the forest classification. Map accuracy for each technique was estimated for training sites based on Russian forestry maps, satellite imagery and field measurements. The results indicate that it is necessary to correct for topography when attempting to classify forests in mountainous terrain. Radiometric correction based on a DEM (Digital Elevation Model) improved classification results but required reducing the SAR (Synthetic Aperture Radar) resolution to match the DEM. Using ratios of SAR channels that include cross-polarization improved classification and
NASA Astrophysics Data System (ADS)
Capuano, Rosamaria; Santonico, Marco; Pennazza, Giorgio; Ghezzi, Silvia; Martinelli, Eugenio; Roscioni, Claudio; Lucantoni, Gabriele; Galluccio, Giovanni; Paolesse, Roberto; di Natale, Corrado; D'Amico, Arnaldo
2015-11-01
Results collected in more than 20 years of studies suggest a relationship between the volatile organic compounds exhaled in breath and lung cancer. However, the origin of these compounds is still not completely elucidated. Despite the simplistic view that cancerous tissues in lungs directly emit the volatile metabolites into the airways, some papers point out that metabolites are collected by the blood and then exchanged at the air-blood interface in the lung. To shed light on this subject we performed an experiment collecting both exhaled breath and the air inside both lungs with a modified bronchoscopic probe. The samples were measured with a gas chromatography-mass spectrometer (GC-MS) and an electronic nose. We found that the diagnostic capability of the electronic nose does not depend on the presence of cancer in the sampled lung, reaching in both cases a correct classification rate above 90% between cancer and non-cancer samples. On the other hand, multivariate analysis of GC-MS achieved a correct classification rate between the two lungs of only 76%. GC-MS analysis of breath and air sampled from the lungs demonstrates a substantial preservation of the VOCs pattern from inside the lung to the exhaled breath.
Bullet trajectory predicts the need for damage control: an artificial neural network model.
Hirshberg, Asher; Wall, Matthew J; Mattox, Kenneth L
2002-05-01
Effective use of damage control in trauma hinges on an early decision to use it. Bullet trajectory has never been studied as a marker for damage control. We hypothesize that this decision can be predicted by an artificial neural network (ANN) model based on the bullet trajectory and the patient's blood pressure. A multilayer perceptron ANN predictive model was developed from a data set of 312 patients with single abdominal gunshot injuries. Input variables were the bullet path, trajectory patterns, and admission systolic pressure. The output variable was either a damage control laparotomy or intraoperative death. The best performing ANN was implemented on prospectively collected data from 34 patients. The model achieved a correct classification rate of 0.96 and area under the receiver operating characteristic curve of 0.94. External validation showed the model to have a sensitivity of 88% and specificity of 96%. Model implementation on the prospectively collected data had a correct classification rate of 0.91. Sensitivity analysis showed that systolic pressure, bullet path across the midline, and trajectory involving the right upper quadrant were the three most important input variables. Bullet trajectory is an important, hitherto unrecognized, factor that should be incorporated into the decision to use damage control.
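The architecture described above (a multilayer perceptron over bullet-trajectory flags and admission systolic pressure) can be sketched as follows. This is a hypothetical reconstruction on synthetic data: the feature names, thresholds, and decision rule are illustrative stand-ins, not the authors' trained model or data set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 312
systolic = rng.normal(110, 25, n)      # admission systolic pressure (mmHg)
midline = rng.integers(0, 2, n)        # bullet path crosses midline (0/1)
ruq = rng.integers(0, 2, n)            # trajectory involves right upper quadrant (0/1)
# Synthetic outcome: hypotension or a high-risk trajectory pattern.
y = ((systolic < 90) | ((midline == 1) & (ruq == 1))).astype(int)

X = np.column_stack([systolic / 100.0, midline, ruq])  # crude feature scaling
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
clf.fit(X, y)

acc = accuracy_score(y, clf.predict(X))
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"correct classification rate: {acc:.2f}, AUC: {auc:.2f}")
```

In practice the model would be evaluated on held-out data, as the paper does with its prospective 34-patient set, rather than on the training set as in this sketch.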
Classification and disease prediction via mathematical programming
NASA Astrophysics Data System (ADS)
Lee, Eva K.; Wu, Tsung-Lin
2007-11-01
In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview of various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present novel optimization-based classification models that are general-purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described.
Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
HIV classification using the coalescent theory
Bulla, Ingo; Schultz, Anne-Kathrin; Schreiber, Fabian; Zhang, Ming; Leitner, Thomas; Korber, Bette; Morgenstern, Burkhard; Stanke, Mario
2010-01-01
Motivation: Existing coalescent models and phylogenetic tools based on them are not designed for studying the genealogy of sequences like those of HIV, since, in HIV, recombinants with multiple cross-over points between the parental strains frequently arise. Hence, ambiguous cases in the classification of HIV sequences into subtypes and circulating recombinant forms (CRFs) have been treated with ad hoc methods in the absence of tools based on a comprehensive coalescent model accounting for complex recombination patterns. Results: We developed the program ARGUS that scores classifications of sequences into subtypes and recombinant forms. It reconstructs ancestral recombination graphs (ARGs) that reflect the genealogy of the input sequences given a classification hypothesis. An ARG with maximal probability is approximated using a Markov chain Monte Carlo approach. ARGUS was able to distinguish the correct classification with a low error rate from plausible alternative classifications in simulation studies with realistic parameters. We applied our algorithm to decide between two recently debated alternatives in the classification of CRF02 of HIV-1 and find that CRF02 is indeed a recombinant of Subtypes A and G. Availability: ARGUS is implemented in C++ and the source code is available at http://gobics.de/software Contact: ibulla@uni-goettingen.de Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:20400454
5 CFR 511.703 - Retroactive effective date.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CLASSIFICATION UNDER THE GENERAL SCHEDULE Effective Dates of Position Classification Actions or Decisions § 511... if the employee is wrongfully demoted. (b) Downgrading. (1) The effective date of a classification appellate certificate or agency appellate decision can be retroactive only if it corrects a classification...
A Sieving ANN for Emotion-Based Movie Clip Classification
NASA Astrophysics Data System (ADS)
Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon
Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded on the artistic film theories. A unique sieving-structured neural network is proposed to be the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of 97.8% correct classification rate, measured against the collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.
A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.
Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem
2017-12-30
Transportation planning and solutions have an enormous impact on city life. To minimize the transport duration, urban planners should understand and elaborate the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and duration, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm could achieve up to 40% improvement of the classification results. As a result, the implemented mobile application could predict eight classes including stationary, walking, car, bus, tram, train, metro and ferry with a success rate of 95% thanks to the proposed multi-tier architecture and Healing algorithm.
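The paper does not reproduce the Healing algorithm here, but the general idea of segment-based post-processing can be sketched as a sliding majority vote over per-window predictions, so that isolated misclassifications are "healed" by their neighbours. The function below is a hypothetical stand-in, not the authors' implementation:

```python
from collections import Counter

def heal(labels, window=5):
    """Replace each predicted transport mode with the majority vote
    of a sliding window centred on it (window is truncated at the
    sequence boundaries)."""
    half = window // 2
    healed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        healed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return healed

raw = ["bus", "bus", "car", "bus", "bus", "walk", "walk", "walk"]
print(heal(raw))  # the isolated "car" window is corrected to "bus"
```

The window size trades responsiveness against smoothing: a larger window removes more spurious flips but delays the detection of genuine mode changes.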
Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening
2006-01-01
In this work, we proposed a discriminative model of Alzheimer's disease (AD) on the basis of multivariate pattern classification and functional magnetic resonance imaging (fMRI). This model used the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, which have been suggested by two recent studies, as the feature of classification. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space and a linear classifier was generated. Using leave-one-out (LOO) cross validation, our results showed a correct classification rate of 83%. We also compared the proposed model with another one based on the whole brain functional connectivity. Our proposed model outperformed the other one significantly, and this implied that the two intrinsically anti-correlated networks may be a more susceptible part of the whole brain network in the early stage of AD.
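The evaluation protocol used above (leave-one-out cross-validation of a linear discriminant classifier) can be sketched with scikit-learn. The features below are synthetic stand-ins for the fMRI correlation coefficients used in the paper, and the group sizes and separations are made up for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X_ad = rng.normal(0.4, 0.15, (20, 3))   # hypothetical AD-group features
X_hc = rng.normal(0.0, 0.15, (20, 3))   # hypothetical control-group features
X = np.vstack([X_ad, X_hc])
y = np.array([1] * 20 + [0] * 20)

# Each LOO split scores a single held-out subject as 0 or 1,
# so the mean is the correct classification rate.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("LOO correct classification rate:", scores.mean())
```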
NASA Astrophysics Data System (ADS)
Liu, J.; Lan, T.; Qin, H.
2017-10-01
Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.
NASA Astrophysics Data System (ADS)
Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian
2016-09-01
We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.
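The classification stage described above, one Gaussian mixture model per class with a new sample assigned to the class whose model gives the higher likelihood, can be sketched as below. The "MFCC" features here are synthetic stand-ins; real use would extract mel-frequency cepstral coefficients from heart sound audio:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
feats_ph = rng.normal(1.0, 0.5, (200, 4))       # stand-in features, PH class
feats_normal = rng.normal(-1.0, 0.5, (200, 4))  # stand-in features, normal class

# One GMM per class, fit on that class's training features.
gmm_ph = GaussianMixture(n_components=2, random_state=0).fit(feats_ph)
gmm_normal = GaussianMixture(n_components=2, random_state=0).fit(feats_normal)

def classify(x):
    # score_samples returns the per-sample log-likelihood under each model;
    # pick the class with the higher likelihood.
    lp = gmm_ph.score_samples(x[None])[0]
    ln = gmm_normal.score_samples(x[None])[0]
    return "PH" if lp > ln else "normal"

print(classify(np.ones(4)), classify(-np.ones(4)))
```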
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
Hansen, Elisabeth Holm; Hunskaar, Steinar
2011-01-01
Background The use of nurses for telephone-based triage in out-of-hours services is increasing in several countries. No investigations have been carried out in Norway into the quality of decisions made by nurses regarding our priority degree system. There are three levels: acute, urgent and non-urgent. Methods Nurses working in seven casualty clinics in out-of-hours districts in Norway (The Watchtowers) were all invited to participate in a study to assess priority grade on 20 written medical scenarios validated by an expert group. 83 nurses (response rate 76%) participated in the study. A one-out-of-five sample of the nurses assessed the same written cases after 3 months (n=18, response rate 90%) as a test–retest assessment. Results Among the acute, urgent and non-urgent scenarios, 82%, 74% and 81% were correctly classified according to national guidelines. There were significant differences in the proportion of correct classifications among the casualty clinics, but neither employment percentage nor profession or work experience affected the triage decision. The mean intraobserver variability measured by the Cohen kappa was 0.61 (CI 0.52 to 0.70), and there were significant differences in kappa with employment percentage. Casualty clinics and work experience did not affect intrarater agreement. Conclusion Correct classification of acute and non-urgent cases among nurses was quite high. Work experience and employment percentage did not affect triage decision. The intrarater agreement was good and about the same as in previous studies performed in other countries. Kappa increased significantly with increasing employment percentage. PMID:21262792
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
...] RIN 1615-AB76 Commonwealth of the Northern Mariana Islands Transitional Worker Classification... Transitional Worker Classification. In that rule, we had sought to modify the title of a paragraph, but... the final rule Commonwealth of the Northern Mariana Islands Transitional Worker Classification...
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred.
In this study, we will compare TNE and MNN based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which compare favorably with Bayesian techniques based on classical neural networks. [1] Winter, M.E. "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference On Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] N. Keshava, "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter. "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR", in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key. "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding" in Proceedings of the 2007 AMOS Conference, Maui HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz. "Autonomous single-pass endmember approximation using lattice auto-associative memories", Neurocomputing (Elsevier), accepted (June 2008).
Network-based high level data classification.
Silva, Thiago Christiano; Zhao, Liang
2012-06-01
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning and it has facility in identifying patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the class configuration's complexity increases, such as the mixture among different classes, a larger portion of the high level term is required to get correct classification. This feature confirms that the high level classification has a special importance in complex situations of classification. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it supplies an improvement in the overall pattern recognition rate.
Evaluation of thyroid tissue by Raman spectroscopy
NASA Astrophysics Data System (ADS)
Teixeira, C. S. B.; Bitar, R. A.; Santos, A. B. O.; Kulcsar, M. A. V.; Friguglietti, C. U. M.; Martinho, H. S.; da Costa, R. B.; Martin, A. A.
2010-02-01
The thyroid is a small gland in the neck consisting of two lobes connected by an isthmus. Its main function is to produce the hormones thyroxine (T4), triiodothyronine (T3) and calcitonin. Thyroid disorders can disturb the production of these hormones, which will affect numerous processes within the body such as regulating metabolism and increasing utilization of cholesterol, fats, proteins, and carbohydrates. The gland itself can also be injured, for example by neoplasias, which are considered the most important disorders since they damage the gland and are difficult to diagnose. There are several types of thyroid cancer: papillary, follicular, medullary, and anaplastic. The occurrence rate is generally between 4 and 7% and is on the increase (30%), probably due to new technology that is able to find small thyroid cancers that may not have been detected previously. The most common methods used for thyroid diagnosis are anamnesis, ultrasonography, and laboratory exams (fine needle aspiration biopsy, FNAB). However, the sensitivity of those tests is rather poor, with a high rate of false-negative results; therefore, there is an urgent need to develop new diagnostic techniques. Raman spectroscopy has been presented as a valuable tool for cancer diagnosis in many different tissues. In this work, 27 fragments of the thyroid were collected from 18 patients, comprising the following histologic groups: goitre adjacent tissue, goitre nodular tissue, follicular adenoma, follicular carcinoma, and papillary carcinoma. Spectral collection was done with a commercial FT-Raman spectrometer (Bruker RFS100/S) using 1064 nm laser excitation and a Ge detector. Principal Component Analysis, Cluster Analysis, and Linear Discriminant Analysis with cross-validation were applied as spectral classification algorithms. Comparing the goitre adjacent tissue with the goitre nodular region, an index of 58.3% correct classification was obtained.
Between goitre (nodular region and adjacent tissue) and papillary carcinoma, the index of correct classification was 64.9%; for the classification between benign tissues (goitre and follicular adenoma) and malignant tissues (papillary and follicular carcinomas), the index was 72.5%.
Wang, Zhengfang; Chen, Pei; Yu, Liangli; Harrington, Peter de B.
2013-01-01
Basil plants cultivated by organic and conventional farming practices were accurately classified by pattern recognition of gas chromatography/mass spectrometry (GC/MS) data. A novel extraction procedure was devised to extract characteristic compounds from ground basil powders. Two in-house fuzzy classifiers, the fuzzy rule-building expert system (FuRES) and, for the first time, the fuzzy optimal associative memory (FOAM), were used to build classification models. Two crisp classifiers, i.e., soft independent modeling by class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA), were used as control methods. Prior to data processing, baseline correction and retention time alignment were performed. Classifiers were built with the two-way data sets, the total ion chromatogram representation of the data sets, and the total mass spectrum representation of the data sets, separately. Bootstrapped Latin partition (BLP) was used as an unbiased evaluation of the classifiers. By using two-way data sets, average classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100 ± 0%, 94.4 ± 0.4%, 93.3 ± 0.4%, and 100 ± 0%, respectively, for 100 independent evaluations. The established classifiers were used to classify a new validation set collected 2.5 months later with no parametric changes except that the training set and validation set were individually mean-centered. For the new two-way validation set, classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100%, 83%, 97%, and 100%, respectively. Thereby, GC/MS analysis was demonstrated as a viable approach for organic basil authentication. This is the first time a FOAM has been applied to classification, and a novel baseline correction method was also used for the first time. The FuRES and the FOAM are demonstrated to be powerful tools for modeling and classifying GC/MS data of complex samples, and the data pretreatments are demonstrated to be useful for improving the performance of classifiers.
PMID:23398171
A Model Assessment and Classification System for Men and Women in Correctional Institutions.
ERIC Educational Resources Information Center
Hellervik, Lowell W.; And Others
The report describes a manpower assessment and classification system for criminal offenders directed towards making practical training and job classification decisions. The model is not concerned with custody classifications except as they affect occupational/training possibilities. The model combines traditional procedures of vocational…
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 1229.3 - Criteria for a Bank's capital classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Criteria for a Bank's capital classification... CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.3 Criteria for a Bank's capital classification. (a) Adequately capitalized. Except where the Director has exercised authority to reclassify a...
Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans
2017-01-01
A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and in the ratio of the number of active compounds to the total number of compounds, while at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
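The definition above reduces to a one-line computation. The following worked check uses made-up confusion-table counts purely for illustration:

```python
def power_metric(tp, fn, fp, tn):
    """Power metric at a given cutoff: TPR / (TPR + FPR)."""
    tpr = tp / (tp + fn)   # true positive rate (sensitivity)
    fpr = fp / (fp + tn)   # false positive rate
    return tpr / (tpr + fpr)

# 40 actives, 30 retrieved at the cutoff; 960 inactives, 48 retrieved.
# TPR = 0.75, FPR = 0.05, so the power metric is 0.75 / 0.80 = 0.9375.
print(power_metric(tp=30, fn=10, fp=48, tn=912))  # 0.9375
```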
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
Theoretical Interpretation of the Fluorescence Spectra of Toluene and P- Cresol
1994-07-01
[OCR residue from the report documentation page and table of contents; recoverable entries: computed and experimental ground-state frequencies of toluene and p-cresol; correction factors for computed ground-state vibrational frequencies; computed and corrected excited-state frequencies of toluene and p-cresol.]
Chance-corrected classification for use in discriminant analysis: Ecological applications
Titus, K.; Mosher, J.A.; Williams, B.K.
1984-01-01
A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
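Kappa itself is straightforward to compute from a classification table (rows = actual group, columns = predicted group); a minimal sketch with an illustrative two-group table:

```python
def cohens_kappa(table):
    """Chance-corrected agreement (kappa) for a square classification
    table: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance from the
    row and column totals."""
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)
```

For the hypothetical table [[45, 5], [15, 35]], raw accuracy is 80% but chance agreement is 50%, so kappa is 0.6, illustrating how the statistic discounts chance.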
Exploring "psychic transparency" during pregnancy: a mixed-methods approach.
Oriol, Cécile; Tordjman, Sylvie; Dayan, Jacques; Poulain, Patrice; Rosenblum, Ouriel; Falissard, Bruno; Dindoyal, Asha; Naudet, Florian
2016-08-12
Psychic transparency is described as a psychic crisis occurring during pregnancy. The objective was to test whether it is clinically detectable. Seven primiparous and seven nulliparous subjects were recorded during 5 min of spontaneous speech about their dreams. Twenty-five raters from five groups (psychoanalysts, psychiatrists, general practitioners, pregnant women and medical students) listened to the audiotapes. They were asked to rate the probability of each woman being pregnant or not, and their ability to discriminate the primiparous women was tested. The probability of being identified correctly was calculated for each woman, and a qualitative analysis of the speech samples was performed. No group of raters was able to correctly classify pregnant and non-pregnant women; however, the raters' choices were not completely random. The wish to be pregnant or to have a baby could be linked to a primiparous classification, whereas job priorities could be linked to a nulliparous classification. It was not possible to detect psychic transparency in this study; the wish for a child might be easier to identify. In addition, the raters' choices seemed to be connected to social representations of motherhood.
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Saur, Günter
2011-11-01
Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance, especially in situations where AIS (Automatic Identification System) data is not available. Maritime objects therefore have to be detected, and additional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images, which finally assigns detection hypotheses to the class "clutter", "non-ship", "unstructured ship", "ship structure 1" (bulk carrier appearance), or "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the better noise suppression made possible by the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.
[High complication rate after surgical treatment of ankle fractures].
Bjørslev, Naja; Ebskov, Lars; Lind, Marianne; Mersø, Camilla
2014-08-04
The purpose of this study was to determine the quality and re-operation rate of the surgical treatment of ankle fractures at a large university hospital. X-rays and patient records of 137 patients surgically treated for ankle fractures were analyzed for: 1) correct classification according to Lauge-Hansen, 2) whether congruity of the ankle joint was achieved, 3) selection and placement of the hardware, and 4) the surgeon's level of education. In total, 32 of the 137 patients did not receive optimal treatment, and 11 were re-operated. There was no clear correlation between incorrect operation and the surgeon's level of education.
NASA Astrophysics Data System (ADS)
Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng
2017-07-01
Equalization can compensate for channel distortion caused by multipath effects and effectively improve the convergence of the modulation constellation diagram in an optical wireless system. In this paper, a subspace blind equalization algorithm is used to preprocess the M-ary phase shift keying (MPSK) subcarrier modulation signal at the receiver. Mountain clustering is adopted to obtain the cluster centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified with a k-nearest neighbor (KNN) classifier. The experiment was carried out under four different weather conditions. Experimental results show that the convergence of the constellation diagram is improved effectively after applying the subspace blind equalization algorithm, which increases the accuracy of modulation recognition. The correct recognition rate for 16PSK reaches up to 85% under every weather condition considered in the paper; the rate is highest in cloudy conditions and lowest in heavy rain.
New Features for Neuron Classification.
Hernández-Pérez, Leonardo A; Delgado-Castillo, Duniel; Martín-Pérez, Rainer; Orozco-Morales, Rubén; Lorenzo-Ginori, Juan V
2018-04-28
This paper addresses the problem of obtaining new neuron features capable of improving the results of neuron classification. Most studies on neuron classification using morphological features have been based on Euclidean geometry. Here, three one-dimensional (1D) time series are instead derived from the three-dimensional (3D) structure of the neuron, and a spatial time series is then constructed from which the features are calculated. Digitally reconstructed neurons were separated into control and pathological sets related to three categories of alterations, caused by epilepsy, Alzheimer's disease (long and local projections), and ischemia. These neuron sets were then subjected to supervised classification, and the results were compared across three sets of features: morphological features, features obtained from the time series, and a combination of both. The best results were obtained using features from the time series, which outperformed classification using only morphological features, with correct classification rates higher by 5.15%, 3.75%, and 5.33% for the epilepsy and the Alzheimer's disease (long and local projections) sets, respectively. The morphological features were better for the ischemia set, by a difference of 3.05%. Features such as variance, Spearman auto-correlation, partial auto-correlation, mutual information, and local minima and maxima, all related to the time series, exhibited the best performance. We also compared different feature evaluators, among which ReliefF ranked best.
A random forest model based classification scheme for neonatal amplitude-integrated EEG.
Chen, Weiting; Wang, Yu; Cao, Guitao; Chen, Guoqiang; Gu, Qiufang
2014-01-01
Modern medical advances have greatly increased the survival rate of infants, although these infants remain at higher risk of neurological problems later in life. For infants with encephalopathy or seizures, identification of the extent of brain injury is clinically challenging. Continuous amplitude-integrated electroencephalography (aEEG) monitoring offers the possibility to directly monitor the brain functional state of newborns over hours, and has seen increasing application in neonatal intensive care units (NICUs). This paper presents a novel combined feature set for aEEG and applies the random forest (RF) method to classify aEEG tracings. To that end, a series of experiments was conducted on 282 aEEG tracing cases (209 normal and 73 abnormal ones). Basic features, statistical features and segmentation features were extracted from both the tracing as a whole and the segmented recordings, and then combined into a single feature set. All the features were sent to a classifier afterwards. The significance of features, the data segmentation, the optimization of RF parameters, and the problem of imbalanced datasets were examined through experiments. Experiments were also done to evaluate the performance of RF on aEEG signal classification, compared with several other widely used classifiers including SVM-Linear, SVM-RBF, ANN, Decision Tree (DT), Logistic Regression (LR), ML, and LDA. The combined feature set characterizes aEEG signals better than the basic, statistical and segmentation features taken separately. With the combined feature set, the proposed RF-based aEEG classification system achieved a correct rate of 92.52% and a high F1-score of 95.26%. Among the seven classifiers examined in our work, the RF method attained the highest correct rate, sensitivity, specificity, and F1-score, meaning that RF outperforms all of the other classifiers considered here.
The results show that the proposed RF-based aEEG classification system with the combined feature set is efficient and helpful to better detect the brain disorders in newborns.
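The reported figures (correct rate, sensitivity, specificity, F1-score) all derive from the binary confusion counts of a classifier. A generic sketch with hypothetical counts, not the authors' code or data:

```python
def binary_metrics(tp, fn, fp, tn):
    """Correct classification rate, sensitivity, specificity and
    F1-score from binary confusion counts (abnormal tracing = positive).
    Generic definitions of the metrics reported in the study."""
    total = tp + fn + fp + tn
    correct_rate = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall on the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return correct_rate, sensitivity, specificity, f1
```

With an imbalanced dataset like the 209-vs-73 split above, the correct rate alone can be misleading, which is why the study also reports sensitivity, specificity and F1.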
Colliver, Jessica; Wang, Allan; Joss, Brendan; Ebert, Jay; Koh, Eamon; Breidahl, William; Ackland, Timothy
2016-04-01
This study investigated whether patients with an intact tendon repair or a partial-thickness retear early after rotator cuff repair display differences in clinical evaluations, and whether early tendon healing can be predicted using these assessments. We prospectively evaluated 60 patients at 16 weeks after arthroscopic supraspinatus repair. Evaluation included the Oxford Shoulder Score, the 11-item version of the Disabilities of the Arm, Shoulder and Hand, a visual analog scale for pain, the 12-item Short Form Health Survey, isokinetic strength, and magnetic resonance imaging (MRI). Independent t tests investigated clinical differences between patients grouped by the Sugaya MRI rotator cuff classification system (grades 1, 2, or 3). Discriminant analysis determined whether intact repairs (Sugaya grade 1) and partial-thickness retears (Sugaya grades 2 and 3) could be predicted. No differences existed in the clinical or strength measures at the P < .05 level. Although discriminant analysis revealed that the 11-item version of the Disabilities of the Arm, Shoulder and Hand produced a 97% true-positive rate for predicting partial-thickness retears, it also produced a 90% false-positive rate, incorrectly predicting a retear in 90% of patients whose repair was intact. The ability to discriminate between groups was enhanced with up to 5 variables entered; however, only 87% of the partial-retear group and 36% of the intact-repair group were correctly classified. No differences in clinical scores existed between patients stratified by the Sugaya MRI classification system at 16 weeks, and an intact repair or partial-thickness retear could not be accurately predicted. Our results suggest that correct classification of healing in the early postoperative stages should involve imaging. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Proposal of a Classification System for the Assessment and Treatment of Prominent Ear Deformity.
Lee, Youngdae; Kim, Young Seok; Lee, Won Jai; Rha, Dong Kyun; Kim, Jiye
2018-06-01
Prominent ear is the most common external ear deformity. To treat prominent ear deformity comprehensively, adequate comprehension of its pathophysiology is crucial. In this article, we analyze cases of prominent ear and suggest a simple classification system and treatment algorithm according to pathophysiology. We retrospectively reviewed clinical data from 205 Northeast Asian patients who underwent an operation for prominent ear deformity. Follow-up assessments were conducted 3, 6, and 12 months after surgery. Prominent ear deformities were classified by diagnostic checkpoints. Class I (simple prominent ear) includes prominent ears that develop from absence of the antihelix without conchal hypertrophy. Class II (mixed-type prominent ear) is defined as having not only a flat antihelix but also conchal excess. Class III (conchal-type prominent ear) has an enlarged conchal bowl with a well-developed antihelix. Among the three types, class I was most frequent (162 patients, 81.6%); class II was observed in 28 patients (13.6%) and class III in 10 patients (4.8%). We used the scaphomastoid suture method for correction of antihelical effacement, anterior-approach conchal resection for correction of conchal hypertrophy, and Bauer's squid incision for lobule prominence. The complication rate was 9.2%, including early hematoma, hypersensitivity, and suture extrusion. Unfavorable results occurred in 4%, including partial recurrence, overcorrection, and undercorrection. To reduce unfavorable results and avoid recurrence, we propose the use of this classification and treatment algorithm in the preoperative evaluation of prominent ear.
Finn, James E.; Burger, Carl V.; Holland-Bartels, Leslie E.
1997-01-01
We used otolith banding patterns formed during incubation to discriminate among hatchery- and wild-incubated fry of sockeye salmon Oncorhynchus nerka from Tustumena Lake, Alaska. Fourier analysis of otolith luminance profiles was used to describe banding patterns; the amplitudes of individual Fourier harmonics were the discriminant variables. Correct classification of otoliths to either hatchery or wild origin was 83.1% (cross-validation) and 72.7% (test data) with the use of quadratic discriminant function analysis on 10 Fourier amplitudes. Overall classification rates among the six test groups (one hatchery and five wild groups) were 46.5% (cross-validation) and 39.3% (test data) with the use of linear discriminant function analysis on 16 Fourier amplitudes. Although classification rates for wild-incubated fry from any one site never exceeded 67% (cross-validation) or 60% (test data), location-specific information was evident for all groups because the probability of classifying an individual to its true incubation location was significantly greater than chance. Results indicate phenotypic differences in otolith microstructure among incubation sites separated by less than 10 km. Analysis of otolith luminance profiles is a potentially useful technique for discriminating among various populations of hatchery and wild fish.
Attention Recognition in EEG-Based Affective Learning Research Using CFS+KNN Algorithm.
Hu, Bin; Li, Xiaowei; Sun, Shuting; Ratcliffe, Martyn
2018-01-01
The research detailed in this paper focuses on the processing of electroencephalography (EEG) data to identify attention during the learning process. The identification of affect using our procedures is integrated into a simulated distance learning system that provides feedback to the user with respect to attention and concentration. The authors propose a classification procedure that combines correlation-based feature selection (CFS) and a k-nearest-neighbor (KNN) data mining algorithm. To evaluate the CFS+KNN algorithm, it was tested against the CFS+C4.5 algorithm and other classification algorithms. Classification performance was measured 10 times with different 3-fold cross-validation partitions. The data were derived from 10 subjects while they were attempting to learn material in a simulated distance learning environment. A single-valence self-report model was used to rate attention on 3 levels (high, neutral, low). CFS+KNN performed much better than the alternatives, giving the highest correct classification rate (CCR) for the valence dimension divided into three classes.
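As a rough illustration of the KNN half of the pipeline, the sketch below runs a plain k-nearest-neighbor classifier under leave-one-out cross-validation and reports the correct classification rate; the CFS feature-selection step and the real EEG features are omitted, and the toy data are invented:

```python
def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); Euclidean distance vote.
    nearest = sorted(train,
                     key=lambda fl: sum((a - b) ** 2
                                        for a, b in zip(fl[0], query)))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

def loo_ccr(data, k=3):
    """Leave-one-out correct classification rate (CCR) for plain k-NN --
    a minimal stand-in for the paper's CFS+KNN pipeline."""
    hits = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        if knn_predict(rest, x, k) == y:
            hits += 1
    return hits / len(data)
```

On well-separated toy clusters the LOO CCR is 1.0; real EEG attention data are far noisier, which is why the feature-selection stage matters.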
Using reconstructed IVUS images for coronary plaque classification.
Caballero, Karla L; Barajas, Joel; Pujol, Oriol; Rodriguez, Oriol; Radeva, Petia
2007-01-01
Coronary plaque rupture is one of the principal causes of sudden death in western societies. Reliable diagnosis of the different plaque types is of great interest to the medical community for predicting their evolution and applying an effective treatment. To achieve this, tissue classification must be performed. Intravascular ultrasound (IVUS) is a technique for exploring the vessel walls and observing their histological properties. In this paper, a method to reconstruct IVUS images from the raw radio frequency (RF) data coming from the ultrasound catheter is proposed. This framework offers a normalization scheme for comparing different patient studies accurately. The automatic tissue classification is based on texture analysis and the Adaptive Boosting (AdaBoost) learning technique combined with Error Correcting Output Codes (ECOC). In this study, 9 in-vivo cases are reconstructed with 7 different parameter sets. This method improves the image-based classification rate, yielding 91% well-detected tissue using the best parameter set. It also reduces inter-patient variability compared with the analysis of DICOM images obtained from the commercial equipment.
Using near infrared spectroscopy to classify soybean oil according to expiration date.
da Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Gomes, Adriano A; de Almeida, Valber Elias; Veras, Germano
2016-04-01
A rapid and non-destructive methodology is proposed for screening edible vegetable oils according to conservation state (expiration date), employing near infrared (NIR) spectroscopy and chemometric tools. A total of fifty samples of soybean vegetable oil, of different brands and lots, were used in this study; these included thirty expired and twenty non-expired samples. Oil oxidation was measured by peroxide index. NIR spectra were employed in raw form and preprocessed by offset baseline correction and the Savitzky-Golay derivative procedure, followed by PCA exploratory analysis, which showed that NIR spectra would be suitable for the classification task. The classification models were based on SPA-LDA (Linear Discriminant Analysis coupled with the Successive Projections Algorithm) and PLS-DA (Discriminant Analysis by Partial Least Squares). The set of 50 samples was partitioned into a training group (35 samples: 15 non-expired and 20 expired) and a test group (15 samples: 5 non-expired and 10 expired) using three sample-selection approaches, (i) Kennard-Stone, (ii) Duplex, and (iii) random selection, in order to evaluate the robustness of the models. The results obtained for the independent test set (in terms of correct classification rate) were 96% and 98% for SPA-LDA and PLS-DA, respectively, indicating that NIR spectra can be used as an alternative means of evaluating the degree of oxidation of soybean oil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Accuracy of automated classification of major depressive disorder as a function of symptom severity.
Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell
2016-01-01
Growing evidence documents the potential of machine learning for developing brain-based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracy of machine-learned MDD-vs-control diagnostic classifiers. Forty-five medication-free patients with DSM-IV-defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting state and an emotional face-matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or the task-based fMRI data. We used each of these six datasets with linear support vector machine (SVM) binary classifiers to identify individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.
Parasites as biological tags of fish stocks: a meta-analysis of their discriminatory power.
Poulin, Robert; Kamiya, Tsukushi
2015-01-01
The use of parasites as biological tags to discriminate among marine fish stocks has become a widely accepted method in fisheries management. Here, we first link this approach to its unstated ecological foundation, the decay in the similarity of the species composition of assemblages as a function of increasing distance between them, a phenomenon almost universal in nature. We explain how distance decay of similarity can influence the use of parasites as biological tags. Then, we perform a meta-analysis of 61 uses of parasites as tags of marine fish populations in multivariate discriminant analyses, obtained from 29 articles. Our main finding is that across all studies, the observed overall probability of correct classification of fish based on parasite data was about 71%. This corresponds to a two-fold improvement over the rate of correct classification expected by chance alone, and the average effect size (Zr = 0·463) computed from the original values was also indicative of a medium-to-large effect. However, none of the moderator variables included in the meta-analysis had a significant effect on the proportion of correct classification; these moderators included the total number of fish sampled, the number of parasite species used in the discriminant analysis, the number of localities from which fish were sampled, the minimum and maximum distance between any pair of sampling localities, etc. Therefore, there are no clear-cut situations in which the use of parasites as tags is more useful than others. Finally, we provide recommendations for the future usage of parasites as tags for stock discrimination, to ensure that future applications of the method achieve statistical rigour and a high discriminatory power.
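The "expected by chance" baseline invoked above can be made concrete with the proportional chance criterion (the sum of squared group proportions), a common reference point in discriminant analysis; whether the meta-analysis used exactly this criterion is not stated here, so treat it as an illustrative assumption:

```python
def proportional_chance(group_sizes):
    """Expected correct classification rate by chance alone when
    assignments are made in proportion to group (stock) sizes:
    the proportional chance criterion, sum of squared proportions."""
    n = sum(group_sizes)
    return sum((g / n) ** 2 for g in group_sizes)
```

With two equal stocks the criterion gives 0.5 and with four equal stocks 0.25; an observed rate of about 71% against a baseline near 35% is the roughly two-fold improvement the authors describe.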
Classification and correction of the radar bright band with polarimetric radar
NASA Astrophysics Data System (ADS)
Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan
2015-04-01
The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power-law equation, so this enhanced region can lead to overestimation of rainfall by a factor of up to 5; it is therefore important to correct for it. The BB region can be identified using several techniques, including hydrometeor classification and freezing-level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field have led to increased accuracy in identifying the melting-snow region. A method proposed by Kitchen et al. (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method forms an average VPR from multiple elevations for correction, which can still yield a significant decrease in error (Vignal et al. 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar, without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow, with a high Critical Success Index of 97% compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB-corrected scans are compared to rain gauges and to lower-level scans, with a reduction in RMSE of 61% for rain-rate measurements. References: Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q.J.R. Meteorol. Soc., 120, 1231-1254. Vignal, B., et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.
The Success Rate of Neurology Residents in EEG Interpretation After Formal Training.
Dericioglu, Nese; Ozdemir, Pınar
2018-03-01
EEG is an important tool for neurologists in both the diagnosis and the classification of seizures. It is not uncommon in clinical practice to see patients who were erroneously diagnosed as epileptic, and most of the time incorrect interpretation of the EEG contributes significantly to this problem. In this study, we aimed to investigate the success rate of neurology residents in EEG interpretation after formal training. Eleven neurology residents were included. The duration of EEG training (3 vs 4 months) and the time since completion of EEG education were determined. Residents were randomly presented 30 different slides of representative EEG screenshots and received 1 point for each correct response. The effects of training duration and time since training were investigated statistically. In addition, we examined the success rate for each question to see whether certain patterns were more readily recognized than others. EEG training duration (P = .93) and time since completion of training (P = .16) did not influence the results. The success rate of residents for correct responses was between 17% and 50%, whereas the success rate for individual questions varied between 0% and 91%. Overall, benign variants and focal ictal onset patterns were the most difficult to recognize. On 13 occasions (6.5%), nonepileptiform patterns were thought to represent epileptiform abnormalities. After formal training, neurology residents could identify ≤50% of the EEG patterns correctly. The wide variation in success rate among residents and also between questions implies that both personal characteristics and inherent EEG features influence successful EEG interpretation.
Karabagias, Ioannis K; Karabournioti, Sofia
2018-05-03
Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014-2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). The correct classification rate was 95.5% using the original method and 90.9% using the cross-validation method. The discriminatory ability of the developed model was further validated using unknown honey samples, and the overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone.
Duara, Ranjan; Loewenstein, David A; Shen, Qian; Barker, Warren; Potter, Elizabeth; Varon, Daniel; Heurlin, Kristen; Vandenberghe, Rik; Buckley, Christopher
2013-05-01
To evaluate the contributions of amyloid-positive (Am+) and medial temporal atrophy-positive (MTA+) scans to the diagnostic classification of prodromal and probable Alzheimer's disease (AD). (18)F-flutemetamol-labeled amyloid positron emission tomography (PET) and magnetic resonance imaging (MRI) were used to classify 10 young normal, 15 elderly normal, 20 amnestic mild cognitive impairment (aMCI), and 27 AD subjects. MTA+ status was determined using a cut point derived from a previous study, and Am+ status was determined using a conservative and liberal cut point. The rates of MRI scans with positive results among young normal, elderly normal, aMCI, and AD subjects were 0%, 20%, 75%, and 82%, respectively. Using conservative cut points, the rates of Am+ scans for these same groups of subjects were 0%, 7%, 50%, and 93%, respectively, with the aMCI group showing the largest discrepancy between Am+ and MTA+ scans. Among aMCI cases, 80% of Am+ subjects were also MTA+, and 70% of amyloid-negative (Am-) subjects were MTA+. The combination of amyloid PET and MTA data was additive, with an overall correct classification rate for aMCI of 86%, when a liberal cut point (standard uptake value ratio = 1.4) was used for amyloid positivity. (18)F-flutemetamol PET and structural MRI provided additive information in the diagnostic classification of aMCI subjects, suggesting an amyloid-independent neurodegenerative component among aMCI subjects in this sample. Copyright © 2013 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Crystal-liquid Fugacity Ratio as a Surrogate Parameter for Intestinal Permeability.
Zakeri-Milani, Parvin; Fasihi, Zohreh; Akbari, Jafar; Jannatabadi, Ensieh; Barzegar-Jalali, Mohammad; Loebenberg, Raimar; Valizadeh, Hadi
We assessed the feasibility of using crystal-liquid fugacity ratio (CLFR) as an alternative parameter for intestinal permeability in the biopharmaceutical classification (BCS) of passively absorbed drugs. Dose number, fraction of dose absorbed, intestinal permeability, and intrinsic dissolution rate were used as the input parameters. CLFR was determined using thermodynamic parameters i.e., melting point, molar fusion enthalpy, and entropy of drug molecules obtained using differential scanning calorimetry. The CLFR values were in the range of 0.06-41.76 mole percent. There was a close relationship between CLFR and in vivo intestinal permeability (r > 0.8). CLFR values of greater than 2 mole percent corresponded to complete intestinal absorption. Applying CLFR versus dose number or intrinsic dissolution rate, more than 92% of tested drugs were correctly classified with respect to the reported classification system on the basis of human intestinal permeability and solubility. This investigation revealed that the CLFR might be an appropriate parameter for quantitative biopharmaceutical classification. This could be attributed to the fact that CLFR could be a measure of solubility of compounds in lipid bilayer which was found in this study to be directly proportional to the intestinal permeability of compounds. This classification enables researchers to define characteristics for intestinal absorption of all four BCS drug classes using suitable cutoff points for both intrinsic dissolution rate and crystal-liquid fugacity ratio. Therefore, it may be used as a surrogate for permeability studies. This article is open to POST-PUBLICATION REVIEW. Registered readers (see "For Readers") may comment by clicking on ABSTRACT on the issue's contents page.
Physical Human Activity Recognition Using Wearable Sensors.
Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine
2015-12-11
This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.
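The k-NN rule that performed best among the supervised methods in this review is simple enough to sketch. The three-axis "accelerometer" feature vectors and activity labels below are invented for illustration.

```python
# A minimal k-nearest-neighbour classifier of the kind the review found
# best-performing among supervised methods; training vectors are invented.
from collections import Counter
from math import dist

train = [([0.1, 9.8, 0.2], "standing"), ([0.2, 9.7, 0.1], "standing"),
         ([3.1, 7.5, 2.0], "walking"),  ([2.8, 7.9, 2.2], "walking"),
         ([6.0, 4.1, 5.5], "running"),  ([5.7, 4.4, 5.9], "running")]

def knn_predict(x, k=3):
    # majority vote among the k training samples nearest to x
    nearest = sorted(train, key=lambda s: dist(x, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict([0.15, 9.75, 0.15]))  # query close to the "standing" cluster
```

In practice the inputs would be windowed features (or raw segments) from the chest, thigh, and ankle sensors rather than single readings.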
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
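The heterogeneity the elaborated model addresses can be illustrated by simulation: each sampling unit draws its own correct-classification probability from a logit-normal distribution before its counts are recorded. The parameter values below are invented for illustration.

```python
# Sketch of unit-level heterogeneity in classification probability:
# each sampling unit gets its own correct-classification probability
# drawn logit-normally; parameter values are invented for illustration.
import random
from math import exp

random.seed(1)

def logit_normal(mu, sigma):
    z = random.gauss(mu, sigma)        # normal on the logit scale
    return 1.0 / (1.0 + exp(-z))       # back-transform to (0, 1)

def observe(true_count, mu=2.0, sigma=0.5):
    """Count how many of `true_count` items are recorded in the correct
    category when this unit has its own classification probability."""
    p = logit_normal(mu, sigma)        # unit-specific probability
    return sum(random.random() < p for _ in range(true_count))

units = [observe(100) for _ in range(6)]
print(units)   # correct classifications out of 100, per sampling unit
```

Fitting a single shared classification probability to counts generated this way is what produces the bias the authors describe.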
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework spanning bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans; (2) incorporation of highly deformable registration; (3) use of an extended set of tissue definitions; and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, via a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2016-04-01
The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach to acquiring land use change information. However, topographic effects are commonly present in remotely sensed imagery and degrade land use classification. This research selected summer and winter scenes of Landsat-5 TM from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized using K-means classification into four groups, i.e., forest, grassland, agriculture, and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall accuracy of classification increased from 68.0% to 74.5%. The average CN estimated from remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize the topographic effect in satellite remote sensing data before estimating the CN.
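The direct-runoff relation underlying the CN comparison is the standard SCS-CN formula. The sketch below evaluates it (in millimetres) at the abstract's average CN values for a hypothetical 100 mm storm; the storm depth is an invented example, not from the study.

```python
# Standard SCS-CN direct-runoff relation (metric form):
#   S = 25400/CN - 254            potential maximum retention, mm
#   Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0, with Ia = 0.2*S
def scs_runoff_mm(p_mm, cn):
    s = 25400.0 / cn - 254.0      # potential maximum retention
    ia = 0.2 * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Average CN values quoted in the abstract, for a hypothetical 100 mm storm:
# uncorrected imagery (48.69), corrected imagery (45.35), national map (44.11)
for cn in (48.69, 45.35, 44.11):
    print(cn, round(scs_runoff_mm(100.0, cn), 1))
```

A higher CN yields more runoff for the same storm, which is why the topographic correction's shift from 48.69 toward the map-derived 44.11 matters hydrologically.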
The effect of finite field size on classification and atmospheric correction
NASA Technical Reports Server (NTRS)
Kaufman, Y. J.; Fraser, R. S.
1981-01-01
The atmospheric effect on the upward radiance of sunlight scattered from the Earth-atmosphere system is strongly influenced by the contrasts between fields and their sizes. For a given atmospheric turbidity, the atmospheric effect on classification of surface features is much stronger for nonuniform surfaces than for uniform surfaces. Therefore, the classification accuracy of agricultural fields and urban areas is dependent not only on the optical characteristics of the atmosphere, but also on the size of the surface do not account for the nonuniformity of the surface have only a slight effect on the classification accuracy; in other cases the classification accuracy descreases. The radiances above finite fields were computed to simulate radiances measured by a satellite. A simulation case including 11 agricultural fields and four natural fields (water, soil, savanah, and forest) was used to test the effect of the size of the background reflectance and the optical thickness of the atmosphere on classification accuracy. It is concluded that new atmospheric correction methods, which take into account the finite size of the fields, have to be developed to improve significantly the classification accuracy.
Hosey, Chelsea M; Benet, Leslie Z
2015-01-01
The Biopharmaceutics Drug Disposition Classification System (BDDCS) can be utilized to predict drug disposition, including interactions with other drugs and transporter or metabolizing enzyme effects based on the extent of metabolism and solubility of a drug. However, defining the extent of metabolism relies upon clinical data. Drugs exhibiting high passive intestinal permeability rates are extensively metabolized. Therefore, we aimed to determine if in vitro measures of permeability rate or in silico permeability rate predictions could predict the extent of metabolism, to determine a reference compound representing the permeability rate above which compounds would be expected to be extensively metabolized, and to predict the major route of elimination of compounds in a two-tier approach utilizing permeability rate and a previously published model predicting the major route of elimination of parent drug. Twenty-two in vitro permeability rate measurement data sets in Caco-2 and MDCK cell lines and PAMPA were collected from the literature, while in silico permeability rate predictions were calculated using ADMET Predictor™ or VolSurf+. The potential for permeability rate to differentiate between extensively and poorly metabolized compounds was analyzed with receiver operating characteristic curves. Compounds that yielded the highest sensitivity-specificity average were selected as permeability rate reference standards. The major route of elimination of poorly permeable drugs was predicted by our previously published model and the accuracies and predictive values were calculated. The areas under the receiver operating curves were >0.90 for in vitro measures of permeability rate and >0.80 for the VolSurf+ model of permeability rate, indicating they were able to predict the extent of metabolism of compounds. 
Labetalol and zidovudine predicted greater than 80% of extensively metabolized drugs correctly and greater than 80% of poorly metabolized drugs correctly in Caco-2 and MDCK, respectively, while theophylline predicted greater than 80% of extensively and poorly metabolized drugs correctly in PAMPA. A two-tier approach predicting elimination route predicts 72±9%, 49±10%, and 66±7% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly when the permeability rate is predicted in silico and 74±7%, 85±2%, and 73±8% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly, respectively when the permeability rate is determined in vitro. PMID:25816851
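The cutoff-selection criterion above (highest sensitivity-specificity average) can be sketched as follows; the permeability values and extensive-metabolism labels are invented for illustration, not the study's data.

```python
# Sketch of picking a reference compound: for each candidate permeability
# cutoff, compute sensitivity and specificity for "extensively metabolized"
# and keep the cutoff with the highest sensitivity-specificity average.
# (permeability, is_extensively_metabolized) pairs are invented.
data = [(0.2, False), (0.5, False), (0.9, False),
        (1.1, True), (1.8, True), (2.5, True), (0.8, True)]

def sens_spec(cut):
    tp = sum(p >= cut and ext for p, ext in data)
    fn = sum(p < cut and ext for p, ext in data)
    tn = sum(p < cut and not ext for p, ext in data)
    fp = sum(p >= cut and not ext for p, ext in data)
    return tp / (tp + fn), tn / (tn + fp)

# try each observed permeability as the cutoff; best one is the "reference"
best = max((p for p, _ in data),
           key=lambda cut: sum(sens_spec(cut)) / 2)
print(best, sens_spec(best))
```

The compound whose permeability sits at the chosen cutoff plays the role labetalol, zidovudine, and theophylline play in the study's three assay systems.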
Multimethod latent class analysis
Nussbeck, Fridtjof W.; Eid, Michael
2015-01-01
Correct and, hence, valid classifications of individuals are of high importance in the social sciences as these classifications are the basis for diagnoses and/or the assignment to a treatment. The via regia to inspect the validity of psychological ratings is the multitrait-multimethod (MTMM) approach. First, a latent variable model for the analysis of rater agreement (latent rater agreement model) will be presented that allows for the analysis of convergent validity between different measurement approaches (e.g., raters). Models of rater agreement are transferred to the level of latent variables. Second, the latent rater agreement model will be extended to a more informative MTMM latent class model. This model allows for estimating (i) the convergence of ratings, (ii) method biases in terms of differential latent distributions of raters and differential associations of categorizations within raters (specific rater bias), and (iii) the distinguishability of categories indicating if categories are satisfyingly distinct from each other. Finally, an empirical application is presented to exemplify the interpretation of the MTMM latent class model. PMID:26441714
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected in the image. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. An essentially planar intensity image, in which the defective area has a lower gray level than the surrounding plane, was obtained using the proposed algorithms. The defective areas can then be easily extracted with a global threshold value. The experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.
Rosenfield, George H.; Fitzpatrick-Lins, Katherine
1984-01-01
Summary form only given. A classification error matrix typically contains tabulation results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability level. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy that have already been presented in the remote sensing literature.
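The coefficient of agreement discussed here is typically a kappa-style statistic, which corrects the overall percent correct for chance agreement computed from the matrix margins. A minimal sketch for a 2×2 error matrix, with counts invented for illustration:

```python
# Kappa-style coefficient of agreement for a classification error matrix,
# computed alongside the usual overall percent correct.
# The 2x2 counts below are invented for illustration.
def agreement(matrix):
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    # chance agreement from row and column marginals
    expected = sum(sum(matrix[i]) * sum(r[i] for r in matrix)
                   for i in range(len(matrix))) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

overall, kappa = agreement([[40, 10], [5, 45]])
print(f"percent correct = {overall:.0%}, kappa = {kappa:.2f}")
```

Unlike the total percent correct, kappa uses the off-diagonal counts (through the marginals), which is exactly the information the abstract notes is usually neglected.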
Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel
2014-10-01
An automated diagnosis system that uses complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of phase angles found in each one-dimensional signal at each level of CWT decomposition are relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...
Studer, S; Naef, R; Schärer, P
1997-12-01
Esthetically correct treatment of a localized alveolar ridge defect is a frequent prosthetic challenge. Such defects can be overcome not only by a variety of prosthetic means, but also by several periodontal surgical techniques, notably soft tissue augmentations. Preoperative classification of the localized alveolar ridge defect can be greatly useful in evaluating the prognosis and technical difficulties involved. A semiquantitative classification, dependent on the severity of vertical and horizontal dimensional loss, is proposed to supplement the recognized qualitative classification of a ridge defect. Various methods of soft tissue augmentation are evaluated, based on initial volumetric measurements. The roll flap technique is proposed when the problem is related to ridge quality (single-tooth defect with little horizontal and vertical loss). Larger defects in which a volumetric problem must be solved are corrected through the subepithelial connective tissue technique. Additional mucogingival problems (eg, insufficient gingival width, high frenum, gingival scarring, or tattoo) should not be corrected simultaneously with augmentation procedures. In these cases, the onlay transplant technique is favored.
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, P.; Beaudet, P.
1980-01-01
The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimum 0.79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.
A simple randomisation procedure for validating discriminant analysis: a methodological note.
Wastell, D G
1987-04-01
Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
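The randomisation test idea can be sketched directly: refit the classifier on label-shuffled data many times and locate the observed correct classification rate in the resulting null distribution. The "discriminant" below is a simple nearest-group-mean rule standing in for stepwise DA, and all numbers are invented for illustration.

```python
# Randomisation (permutation) test sketch: the classifier is refit on each
# shuffled labelling, so the selection bias the note warns about is present
# in the null distribution too. Data and classifier are invented stand-ins.
import random

random.seed(7)
scores = [1.0, 1.2, 0.9, 1.1, 2.0, 2.2, 1.9, 2.1]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

def rate(scores, labels):
    # refit: assign each case to the nearer of the two group means
    g0 = [s for s, l in zip(scores, labels) if l == 0]
    g1 = [s for s, l in zip(scores, labels) if l == 1]
    m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)
    preds = [int(abs(s - m1) < abs(s - m0)) for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

observed = rate(scores, labels)
null = [rate(scores, random.sample(labels, len(labels))) for _ in range(999)]
p_value = (1 + sum(r >= observed for r in null)) / (1 + len(null))
print(observed, p_value)
```

Because the refitting step is inside the permutation loop, the test remains valid even when the fitting procedure (like stepwise selection) capitalises on chance.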
Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie
2008-01-01
Reliability of classification performance is important for many biomedical applications. A classification model which considers reliability in the development of the model such that unreliable segments are rejected would be useful, particularly, in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG) determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural network, multilayer perceptron (MLP) neural network, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult to classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly in the model, such that unreliable segments are rejected.
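The reject option described above can be sketched as a confidence threshold: segments the classifier is unsure about are dropped rather than scored, trading coverage for accuracy. The predictions, confidences, and true labels below are invented for illustration.

```python
# Classification with rejection: segments whose classifier confidence falls
# below a threshold are rejected, which raises accuracy on the kept segments.
# (prediction, confidence, true_label) triples are invented for illustration.
preds = [("wake", 0.95, "wake"), ("sleep", 0.60, "wake"),
         ("sleep", 0.92, "sleep"), ("wake", 0.55, "sleep"),
         ("sleep", 0.88, "sleep")]

def evaluate(threshold):
    kept = [(p, t) for p, conf, t in preds if conf >= threshold]
    acc = sum(p == t for p, t in kept) / len(kept)
    rejected = 1 - len(kept) / len(preds)
    return acc, rejected

print(evaluate(0.0))   # no rejection: full coverage, lower accuracy
print(evaluate(0.8))   # reject low-confidence segments: higher accuracy
```

The study's numbers follow the same pattern: rejecting 30% of segments buys roughly an 8% gain in correct classification.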
[Therapeutic strategy for different types of epicanthus].
Gaofeng, Li; Jun, Tan; Zihan, Wu; Wei, Ding; Huawei, Ouyang; Fan, Zhang; Mingcan, Luo
2015-11-01
To explore a reasonable therapeutic strategy for different types of epicanthus. Patients with epicanthus were classified according to shape, extent, and inner canthal distance, and treated with the appropriate methods. A modified asymmetric Z-plasty with a two-curve method was used for lower eyelid type epicanthus, inner canthus type epicanthus, and severe upper eyelid type epicanthus. Moderate upper eyelid epicanthus underwent the '-' shape method. Mild upper eyelid epicanthus needed no corrective surgery in two situations: when nasal augmentation or double eyelid formation had been performed and the inner canthal distance was normal. All other mild epicanthus underwent the '-' shape method. A total of 66 cases underwent classification and the appropriate treatment. All wounds healed well. During the 3- to 12-month follow-up period, all epicanthus were corrected completely, with natural contours and inconspicuous scars. All patients were satisfied with the results. Classification of epicanthus based on shape, extent, and inner canthal distance, with correction by the appropriate methods, is a reasonable therapeutic strategy.
Classification of brain tumours using short echo time 1H MR spectra
NASA Astrophysics Data System (ADS)
Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.
2004-09-01
The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas (n = 87), meningiomas (n = 57), metastases (n = 39), and astrocytomas grade II (n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
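The AUC used to score the binary classifiers has a simple rank-based form: it is the probability that a randomly chosen positive case scores above a randomly chosen negative one (with ties counted as half). The classifier scores below are invented for illustration.

```python
# Rank-based AUC: fraction of (positive, negative) pairs where the positive
# case outscores the negative one, counting ties as half a win.
# Scores and class membership below are invented for illustration.
def auc(pos, neg):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.7, 0.55], [0.6, 0.4, 0.3, 0.2]))
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the reported mean test AUC above 0.95 indicates near-perfect pairwise discrimination.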
Chungsomprasong, Paweena; Bositthipichet, Densiri; Ketsara, Salisa; Titaram, Yuttapon; Chanthong, Prakul; Kanjanauthai, Supaluck
2018-01-01
Objective To compare survival of patients with newly diagnosed pulmonary arterial hypertension associated with congenital heart disease (PAH-CHD) according to various clinical classifications with classifications of anatomical-pathophysiological systemic to pulmonary shunts in a single-center cohort. Methods All prevalent cases of PAH-CHD with hemodynamic confirmation by cardiac catheterization in 1995–2015 were retrospectively reviewed. Patients who were younger than three months of age, or with single ventricle following surgery were excluded. Baseline characteristics and clinical outcomes were retrieved from the database. The survival analysis was performed at the end of 2016. Prognostic factors were identified using multivariate analysis. Results A total of 366 consecutive patients (24.5 ± 17.6 years of age, 40% male) with PAH-CHD were analyzed. Most had simple shunts (85 pre-tricuspid, 105 post-tricuspid, 102 combined shunts). Patients with pre-tricuspid shunts were significantly older at diagnosis in comparison to post-tricuspid, combined, and complex shunts. Clinical classifications identified patients as having Eisenmenger syndrome (ES, 26.8%), prevalent left to right shunt (66.7%), PAH with small defect (3%), or PAH following defect correction (3.5%). At follow-up (median = 5.9 years; 0.1–20.7 years), no statistically significant differences in survival rate were seen among the anatomical-pathophysiological shunts (p = 0.1). Conversely, the clinical classifications revealed that patients with PAH-small defect had inferior survival compared to patients with ES, PAH post-corrective surgery, or PAH with prevalent left to right shunt (p = 0.01). Significant mortality risks were functional class III, age < 10 years, PAH-small defect, elevated right atrial pressure > 15 mmHg, and baseline PVR > 8 WU·m². Conclusion Patients with PAH-CHD had a modest long-term survival.
Different anatomical-pathophysiological shunts affect the natural presentation, while clinical classifications indicate treatment strategies and survival. Contemporary therapy improves survival in deliberately selected patients. PMID:29664959
A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.
Burgoon, Lyle D
2016-06-01
An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure activity relationship model was developed to predict acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent itemset mining approach to identify probabilistic prediction rules using chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74%. A structure-based surrogate approach classified the remaining 25 chemicals with 48% accuracy. This result is unsurprising, as these 25 chemicals were fairly unique within the larger set.
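The Apriori frequent-itemset step at the core of the model can be sketched in a few lines: count candidate itemsets, keep those meeting a minimum support, and join survivors to form larger candidates. The "baskets" of atom-pair features per compound below are invented for illustration.

```python
# Minimal Apriori-style frequent-itemset pass over chemical "baskets"
# (sets of atom-pair features per compound, invented for illustration):
# keep 1-itemsets meeting minimum support, then join them into 2-itemsets.
from itertools import combinations

baskets = [{"C-N", "N-O", "C-C"}, {"C-N", "N-O"}, {"C-N", "C-C"},
           {"N-O", "C-C"}, {"C-N", "N-O", "C-C"}]
min_support = 3  # absolute count of baskets containing the itemset

def frequent(cands):
    counts = {c: sum(c <= b for b in baskets) for c in cands}
    return {c for c, n in counts.items() if n >= min_support}

items = {frozenset([i]) for b in baskets for i in b}
f1 = frequent(items)
# candidate 2-itemsets from the frequent 1-itemsets (Apriori join step)
f2 = frequent({a | b for a, b in combinations(f1, 2)})
print(sorted(tuple(sorted(s)) for s in f2))
```

In the study, frequent itemsets co-occurring with a GHS lethality category become the probabilistic prediction rules (toxicophores); here the pass stops at 2-itemsets for brevity.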
NASA Astrophysics Data System (ADS)
Nganvongpanit, Korakot; Buddhachat, Kittisak; Piboon, Promporn; Euppayo, Thippaporn; Kaewmong, Patcharaporn; Cherdsukjai, Phaothep; Kittiwatanawong, Kongkiat; Thitaram, Chatchote
2017-04-01
The elemental composition was investigated and applied for identifying the sex and habitat of dugongs, in addition to distinguishing dugong tusks and teeth from other animal wildlife materials such as Asian elephant (Elephas maximus) tusks and tiger (Panthera tigris tigris) canine teeth. A total of 43 dugong tusks, 60 dugong teeth, 40 dolphin teeth, 1 whale tooth, 40 Asian elephant tusks and 20 tiger canine teeth were included in the study. Elemental analyses were conducted using a handheld X-ray fluorescence analyzer (HH-XRF). There was no significant difference in the elemental composition of male and female dugong tusks, whereas the overall accuracy for identifying habitat (the Andaman Sea and the Gulf of Thailand) was high (88.1%). Dolphin teeth were able to be correctly predicted 100% of the time. Furthermore, we demonstrated a discrepancy in elemental composition among dugong tusks, Asian elephant tusks and tiger canine teeth, and provided a high correct prediction rate among these species of 98.2%. Here, we demonstrate the feasible use of HH-XRF for preliminary species classification and habitat determination prior to using more advanced techniques such as molecular biology.
Identification of mild cognitive impairment in ACTIVE: algorithmic classification and stability.
Cook, Sarah E; Marsiske, Michael; Thomas, Kelsey R; Unverzagt, Frederick W; Wadley, Virginia G; Langbaum, Jessica B S; Crowe, Michael
2013-01-01
Rates of mild cognitive impairment (MCI) have varied substantially, depending on the criteria used and the samples surveyed. The present investigation used a psychometric algorithm for identifying MCI and its stability to determine if low cognitive functioning was related to poorer longitudinal outcomes. The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study is a multi-site longitudinal investigation of the long-term effects of cognitive training with older adults. ACTIVE exclusion criteria eliminated participants at highest risk for dementia (i.e., Mini-Mental State Examination < 23). Using composite normative, sample- and training-corrected psychometric data, 8.07% of the sample had amnestic impairment, while 25.09% had non-amnestic impairment at baseline. Poorer baseline functional scores were observed in those with impairment at the first visit, along with a higher rate of attrition, more depressive symptoms, and poorer self-reported physical functioning. Participants were then classified based upon the stability of their classification. Those who were stably impaired over the 5-year interval had the worst functional outcomes (e.g., Instrumental Activities of Daily Living performance), and inconsistency in classification over time also appeared to be associated with increased risk. These findings suggest that there is prognostic value in assessing and tracking cognition to assist in identifying the critical baseline features associated with poorer outcomes.
Bourtembourg, A; Ramanah, R; Martin, A; Pugin-Vivot, A; Maillet, R; Riethmuller, D
2015-06-01
The expulsion phase of vaginal delivery is a period of risk for the foetus, especially in case of breech presentation, as monitoring fetal well-being is complex during this phase. The correct interpretation of fetal heart rate (FHR) during expulsion, using Melchior's classification, is important because it helps screen for fetal acidosis. The aim of this study was to determine whether it is possible to tolerate an abnormal FHR during expulsion of breech presentations. A retrospective study was conducted to compare FHR during expulsion and neonatal outcomes between breech and cephalic presentations at Besançon university hospital. We collected data from 118 breech presentations and 236 cephalic presentations. Melchior's FHR classification types were significantly different between breech and cephalic presentations, with a majority of type 1. Neonatal outcomes were significantly less favorable for breech presentations, but without any increase in mortality or severe morbidity. Melchior's expulsion FHR classification seems to be applicable to breech presentations, with a different distribution of FHR types compared to cephalic presentations. Following the results of this study, it seems possible to tolerate an abnormal FHR during expulsion of a breech presentation, so far as is reasonable. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro
2012-04-01
This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6, recorded at 13 seismic stations between 2004 and 2009; roughly 223 earthquakes with M ≤ 2.2 are included in this database. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants. Features such as origin time of the event, source-to-station distance, latitude of the epicenter, longitude of the epicenter, magnitude, and spectral content (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.
Laurencikas, E; Sävendahl, L; Jorulf, H
2006-06-01
To assess the value of the metacarpophalangeal pattern profile (MCPP) analysis as a diagnostic tool for differentiating between patients with dyschondrosteosis, Turner syndrome, and hypochondroplasia. Radiographic and clinical data from 135 patients between 1 and 51 years of age were collected and analyzed. The study included 25 patients with hypochondroplasia (HCP), 39 with dyschondrosteosis (LWD), and 71 with Turner syndrome (TS). Hand pattern profiles were calculated and compared with those of 110 normal individuals. Pearson correlation coefficient (r) and multivariate discriminant analysis were used for pattern profile analysis. Pattern variability index, a measure of dysmorphogenesis, was calculated for LWD, TS, HCP, and normal controls. Our results demonstrate that patients with LWD, TS, or HCP have distinct pattern profiles that are significantly different from each other and from those of normal controls. Discriminant analysis yielded correct classification of normal versus abnormal individuals in 84% of cases. Classification of the patients into LWD, TS, and HCP groups was successful in 75%. The correct classification rate was higher (85%) when differentiating two pathological groups at a time. Pattern variability index was not helpful for differential diagnosis of LWD, TS, and HCP. Patients with LWD, TS, or HCP have distinct MCPPs and can be successfully differentiated from each other using advanced MCPP analysis. Discriminant analysis is to be preferred over Pearson correlation coefficient because it is a more sensitive and specific technique. MCPP analysis is a helpful tool for differentiating between syndromes with similar clinical and radiological abnormalities.
A real-time method for autonomous passive acoustic detection-classification of humpback whales.
Abbot, Ted A; Premus, Vincent E; Abbot, Philip A
2010-05-01
This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
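The paper's rule of declaring identification only when multiple song units are detected within a fixed time interval can be sketched with a simple sliding window over detection timestamps; the thresholds below are illustrative assumptions, not the values used in the deployed system.

```python
def declare_presence(detection_times, min_units=3, window_s=120.0):
    """Declare humpback presence only when at least `min_units` song-unit
    detections fall within some sliding window of `window_s` seconds.
    Isolated spectrogram-correlation hits (likely false alarms) are ignored."""
    times = sorted(detection_times)
    lo = 0
    for hi in range(len(times)):
        # Shrink the window from the left until it spans <= window_s seconds
        while times[hi] - times[lo] > window_s:
            lo += 1
        if hi - lo + 1 >= min_units:
            return True
    return False

print(declare_presence([0.0, 400.0, 800.0]))       # False: isolated hits only
print(declare_presence([0.0, 30.0, 75.0, 400.0]))  # True: 3 units within 120 s
```

Requiring repeated detections is what keeps the false alarm rate low despite the diversity of humpback song units.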
Crop identification from radar imagery of the Huntington County, Indiana test site
NASA Technical Reports Server (NTRS)
Batlivala, P. P.; Ulaby, F. T. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Like polarization was successful in discriminating corn and soybeans; however, pasture and woods were consistently confused as soybeans and corn, respectively. The probability of correct classification was about 65%. The cross polarization component (highest for woods and lowest for pasture) helped in separating the woods from corn, and pasture from soybeans, and when used with the like polarization component, the probability of correct classification increased to 74%.
12 CFR 1229.12 - Procedures related to capital classification and other actions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Procedures related to capital classification and other actions. 1229.12 Section 1229.12 Banks and Banking FEDERAL HOUSING FINANCE AGENCY ENTITY REGULATIONS CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.12 Procedures...
Experimental study on GMM-based speaker recognition
NASA Astrophysics Data System (ADS)
Ye, Wenxing; Wu, Dapeng; Nucci, Antonio
2010-04-01
Speaker recognition plays a very important role in the field of biometric security. In order to improve recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM represents the 'voice print' of a speaker by modeling the spectral characteristics of the speaker's speech signals. In this paper, we implement a speaker recognition system that consists of preprocessing, Mel-Frequency Cepstrum Coefficients (MFCC) based feature extraction, and GMM-based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario in which training samples are from one language but test samples are from a different language; our system again achieves a 100% correct recognition rate, which indicates that our system is language independent.
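As a minimal illustration of the GMM idea (not the paper's MFCC-based system), the sketch below fits a two-component one-dimensional Gaussian mixture with expectation-maximization on synthetic data; in a real recognizer the mixture would be fit to multidimensional MFCC vectors per speaker.

```python
import math
import random

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(data, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with the EM algorithm."""
    mu = [min(data), max(data)]   # spread-apart initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

random.seed(1)
data = [random.gauss(-4, 1) for _ in range(200)] + [random.gauss(4, 1) for _ in range(200)]
w, mu, var = fit_gmm_1d(data)
print(sorted(round(m) for m in mu))  # [-4, 4]
```

Recognition then amounts to scoring a test utterance's features under each speaker's fitted mixture and picking the highest-likelihood speaker.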
Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development
2009-04-30
Acronyms (partial list): Pcorr = Probability / Percentage of Correct Classification (# Correct / # Total); PD = PhotoDiode; Pd = Probability / Percentage of Detection (# Correct Detections / Total # of Sources); Pfa = Probability / Percentage of False Alarm (# FAs / Total # of Sources); SBVS = Spectral-Based Volume Sensor; SFA = Smoke and
76 FR 23872 - Editorial Corrections to the Export Administration Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... No. 100709293-1073-01] RIN 0694-AE96 Editorial Corrections to the Export Administration Regulations... Administration Regulations (EAR). In particular, this rule corrects the country entry for Syria on the Commerce... the Export Administration Regulations (EAR), including several Export Control Classification Number...
Military personnel recognition system using texture, colour, and SURF features
NASA Astrophysics Data System (ADS)
Irhebhude, Martins E.; Edirisinghe, Eran A.
2014-06-01
This paper presents an automatic, machine vision based, military personnel identification and classification system. Classification is done using a Support Vector Machine (SVM) on sets of Army, Air Force and Navy camouflage uniform personnel datasets. In the proposed system, the arm of service of personnel is recognised by the camouflage of a person's uniform, the type of cap and the type of badge/logo. The detailed analyses include: camouflage cap and plain cap differentiation using gray level co-occurrence matrix (GLCM) texture features; classification of Army, Air Force and Navy camouflaged uniforms using GLCM texture and colour histogram bin features; and plain cap badge classification into Army, Air Force and Navy using Speeded-Up Robust Features (SURF). The proposed method recognised the camouflage personnel arm of service on sets of data retrieved from Google Images and selected military websites. Correlation-based Feature Selection (CFS) was used to improve recognition and reduce dimensionality, thereby speeding up the classification process. Success rates recorded during the analysis include 93.8% for the camouflage appearance category, and 100%, 90% and 100% for the plain cap and camouflage cap categories for Army, Air Force and Navy, respectively. Accurate recognition was recorded using SURF for the plain cap badge category. Substantial analysis has been carried out and the results prove that the proposed method can correctly classify military personnel into various arms of service. We show that the proposed method can be integrated into a face recognition system, which would recognise personnel in addition to determining the arm of service to which they belong. Such a system can be used to enhance the security of a military base or facility.
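A GLCM of the kind used for the texture features above can be computed directly; the sketch below builds a normalized co-occurrence matrix for a horizontal pixel offset on a tiny 4-level image and derives the standard contrast feature. The image, offset, and quantization are illustrative, not the paper's settings.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    `image` is a 2-D list of integer gray levels in [0, levels)."""
    rows, cols = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in m]

# Tiny 4-level test image
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
g = glcm(img)
# Contrast feature: sum over (i, j) of (i - j)^2 * p(i, j)
contrast = sum((i - j) ** 2 * g[i][j] for i in range(4) for j in range(4))
print(round(contrast, 3))  # 0.583
```

Features such as contrast, energy, and homogeneity computed from this matrix are what feed the SVM in texture-based classification.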
Electronic Nose: A Promising Tool For Early Detection Of Alicyclobacillus spp In Soft Drinks
NASA Astrophysics Data System (ADS)
Concina, I.; Bornšek, M.; Baccelliere, S.; Falasconi, M.; Sberveglieri, G.
2009-05-01
In the present work we investigate the potential use of the Electronic Nose EOS835 (SACMI scarl, Italy) for early detection of Alicyclobacillus spp. in two flavoured soft drinks. These bacteria have been acknowledged by producer companies as major quality control target microorganisms because of their ability to survive commercial pasteurization processes and produce taint compounds in the final product. The Electronic Nose was able to distinguish between uncontaminated and contaminated products before the taint metabolites were identifiable by an untrained panel. Classification tests showed an excellent rate of correct classification for both drinks (from 86% up to 100%). High performance liquid chromatography analyses showed no presence of the main metabolite at a level of 200 ppb, thus confirming the ability of the Electronic Nose technology to perform an actual early diagnosis of contamination.
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and in more severe crashes. RF and SVM ranked next, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite ranking compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
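The contrast between an overall correct prediction rate and a crash-costs-based accuracy measure can be sketched as follows; the severity categories and per-crash costs below are invented for illustration and are not the Nebraska values or the paper's exact measure.

```python
def overall_correct_rate(actual, predicted):
    """Plain accuracy: fraction of crashes whose severity was predicted correctly."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

def cost_based_accuracy(actual, predicted, costs):
    """Cost-weighted accuracy: fraction of total crash cost attributable to
    correctly predicted crashes, so missing a fatal crash is penalized far
    more than missing a property-damage-only (PDO) crash."""
    total = sum(costs[a] for a in actual)
    correct = sum(costs[a] for a, p in zip(actual, predicted) if a == p)
    return correct / total

# Illustrative severity levels and per-crash cost weights
costs = {"PDO": 1, "injury": 10, "fatal": 100}
actual    = ["PDO", "PDO", "injury", "fatal"]
predicted = ["PDO", "PDO", "PDO",    "PDO"]   # a model that always predicts PDO
print(overall_correct_rate(actual, predicted))               # 0.5
print(round(cost_based_accuracy(actual, predicted, costs), 3))  # 0.018
```

The always-PDO model looks mediocre by plain accuracy (50%) but disastrous once crash costs are accounted for, which is exactly the misjudgment the proposed measure guards against.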
Classification of plum spirit drinks by synchronous fluorescence spectroscopy.
Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A
2016-04-01
Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. When PCA-LDA was applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. Correct classification of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm(-1)), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
An online hybrid BCI system based on SSVEP and EMG
NASA Astrophysics Data System (ADS)
Lin, Ke; Cinetto, Andrea; Wang, Yijun; Chen, Xiaogang; Gao, Shangkai; Gao, Xiaorong
2016-04-01
Objective. A hybrid brain-computer interface (BCI) is a device combined with at least one other communication system that takes advantage of both parts to build a link between humans and machines. To increase the number of targets and the information transfer rate (ITR), electromyogram (EMG) and steady-state visual evoked potential (SSVEP) signals were combined to implement a hybrid BCI. A multi-choice selection method based on EMG was developed to enhance the system performance. Approach. A 60-target hybrid BCI speller was built in this study. A single trial was divided into two stages: a stimulation stage and an output selection stage. In the stimulation stage, SSVEP and EMG were used together. Every stimulus flickered at its given frequency to elicit SSVEP. All of the stimuli were divided equally into four sections with the same frequency set; the frequency of each stimulus within a section was different. SSVEPs were used to discriminate targets in the same section, and the sections themselves were classified using EMG signals from the forearm: subjects were asked to make a different number of fists according to the target section. Canonical Correlation Analysis (CCA) and mean filtering were used to classify the SSVEP and EMG signals, respectively. In the output selection stage, the top two candidate outputs were given. The first choice, with the highest classification probability, was the default output of the system; subjects were required to make a fist to select the second choice only if the second choice was correct. Main results. The online results obtained from ten subjects showed that the mean correct classification rate and ITR were 81.0% and 83.6 bits min-1, respectively, using only the first-choice selection. The ITR of the hybrid system was significantly higher than the ITR of either of the two single modalities (EMG: 30.7 bits min-1, SSVEP: 60.2 bits min-1). 
After the addition of the second-choice selection and the correction task, the correct classification rate and ITR were enhanced to 85.8% and 90.9 bits min-1. Significance. These results suggest that the hybrid system proposed here is suitable for practical use.
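The two-stage target addressing (EMG selects one of four sections, SSVEP selects the frequency index within that section) can be sketched as a simple index computation; the 4 × 15 layout matches the 60-target speller described above, but the function name and encoding are assumptions for illustration.

```python
def decode_target(emg_section, ssvep_freq_index, freqs_per_section=15):
    """Hybrid decoding sketch: the EMG classifier picks one of four sections
    (from the number of fists made), the SSVEP classifier picks the frequency
    index within that section; together they address 4 x 15 = 60 targets."""
    if not (0 <= emg_section < 4 and 0 <= ssvep_freq_index < freqs_per_section):
        raise ValueError("section or frequency index out of range")
    return emg_section * freqs_per_section + ssvep_freq_index

print(decode_target(emg_section=2, ssvep_freq_index=7))  # 37
```

Reusing the same frequency set across the four sections is what lets a small set of flicker frequencies span 60 targets.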
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. 
The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
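A minimal sketch of Haar-DWT feature extraction follows, assuming the common "energy of detail coefficients per resolution level" feature vector; the entropy-based coefficient selection described above is not reproduced here.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients.
    `signal` must have even length."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def multilevel_energies(signal, levels):
    """Energy of the detail coefficients at each resolution level, a compact
    feature vector for target classification from sensor time series."""
    energies = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)  # recurse on the approximation
        energies.append(sum(d * d for d in detail))
    return energies

x = [4.0, 4.0, 2.0, 2.0, 1.0, 1.0, 3.0, 3.0]
print([round(e, 6) for e in multilevel_energies(x, 2)])  # [0.0, 8.0]
```

For this piecewise-constant signal all the energy concentrates at the coarser level, illustrating how the per-level energy profile discriminates signals by their dominant scale; choosing how many levels to compute is exactly the resolution-level trade-off discussed above.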
a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from various sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; the model was tested and verified with image data from the Chinese satellites HJ and GF. The results show that, for HJ, the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in supplement No. 1 to part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
Schuld, Christian; Franz, Steffen; Brüggemann, Karin; Heutehaus, Laura; Weidner, Norbert; Kirshblum, Steven C; Rupp, Rüdiger
2016-09-01
Prospective cohort study. Comparison of the classification performance between the 2011 and 2013 worksheet revisions of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). Ongoing ISNCSCI instructional courses of the European Multicenter Study on Human Spinal Cord Injury (EMSCI). For quality control, all participants were requested to classify five ISNCSCI cases directly before (pre-test) and after (post-test) the workshop. One hundred twenty-five clinicians working in 22 SCI centers attended the instructional course between November 2011 and March 2015. Seventy-two clinicians completed the post-test with the 2011 revision of the worksheet and 53 with the 2013 revision. Interventions: not applicable. The clinicians' classification performance was assessed by the percentage of correctly determined motor levels (ML) and sensory levels, neurological levels of injury (NLI), ASIA Impairment Scales and zones of partial preservation. While no group differences were found in the pre-tests, the overall performance (rev2011: 92.2% ± 6.7%, rev2013: 94.3% ± 7.7%; P = 0.010), the percentage of correct MLs (83.2% ± 14.5% vs. 88.1% ± 15.3%; P = 0.046) and NLIs (86.1% ± 16.7% vs. 90.9% ± 18.6%; P = 0.043) improved significantly in the post-tests. Detailed ML analysis revealed the largest benefit of the 2013 revision (50.0% vs. 67.0%) in a case with a high cervical injury (NLI C2). The results from the EMSCI ISNCSCI post-tests show significantly better classification performance using the revised 2013 worksheet, presumably due to the body-side-based grouping of myotomes and dermatomes and their correct horizontal alignment. Even with these proven advantages of the new layout, the correct determination of MLs in segments C2-C4 remains difficult.
Oshiyama, Natália F; Bassani, Rosana A; D'Ottaviano, Itala M L; Bassani, José W M
2012-04-01
As technology evolves, the role of medical equipment in the healthcare system, as well as technology management, becomes more important. Although the existence of large databases containing management information is currently common, extracting useful information from them is still difficult. A useful tool for identification of frequently failing equipment, which increases maintenance cost and downtime, would be the classification according to the corrective maintenance data. Nevertheless, establishment of classes may create inconsistencies, since an item may be close to two classes by the same extent. Paraconsistent logic might help solve this problem, as it allows the existence of inconsistent (contradictory) information without trivialization. In this paper, a methodology for medical equipment classification based on the ABC analysis of corrective maintenance data is presented, and complemented with a paraconsistent annotated logic analysis, which may enable the decision maker to take into consideration alerts created by the identification of inconsistencies and indeterminacies in the classification.
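ABC analysis of corrective maintenance cost can be sketched as a cumulative-cost ranking; the device names, costs, and 80 %/95 % cut-offs below are illustrative assumptions, and the paraconsistent treatment of borderline (inconsistent) items described above is not reproduced.

```python
def abc_classify(costs, a_cut=0.8, b_cut=0.95):
    """ABC analysis: rank items by cost and split on cumulative cost share.
    Items covering the first 80 % of total cost are class 'A', the next
    15 % class 'B', and the remainder class 'C'."""
    total = sum(costs.values())
    ranked = sorted(costs, key=costs.get, reverse=True)
    classes, cumulative = {}, 0.0
    for item in ranked:
        cumulative += costs[item] / total
        classes[item] = "A" if cumulative <= a_cut else "B" if cumulative <= b_cut else "C"
    return classes

# Hypothetical annual corrective-maintenance cost per device type
costs = {"ventilator": 55000, "infusion_pump": 24000, "monitor": 15000, "scale": 6000}
print(abc_classify(costs))
```

An item sitting right at a class boundary is precisely the kind of contradictory evidence the paraconsistent annotated logic analysis is meant to flag rather than silently resolve.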
NASA Astrophysics Data System (ADS)
Yang, He; Ma, Ben; Du, Qian; Yang, Chenghai
2010-08-01
In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationship is used to correct the misclassified class pairs, such as roof and trail, road and roof. These classes may be difficult to be separated because they may have similar spectral signatures and their spatial features are not distinct enough to help their discrimination. In addition, misclassification incurred from within-class trivial spectral variation can be corrected by using pixel connectivity information in a local window so that spectrally homogeneous regions can be well preserved. Our experimental results demonstrate the efficiency of the proposed approaches in classification accuracy improvement. The overall performance is competitive to the object-based SVM classification.
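The within-class correction via pixel connectivity in a local window can be approximated by a simple majority (mode) filter over class labels; this is an illustrative stand-in for the authors' approach, not their exact rule.

```python
from collections import Counter

def majority_filter(labels, window=1):
    """Relabel each pixel with the most common class in its local
    (2*window+1)-square neighborhood, smoothing isolated misclassifications
    while preserving spectrally homogeneous regions."""
    rows, cols = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(rows):
        for c in range(cols):
            neigh = [labels[rr][cc]
                     for rr in range(max(0, r - window), min(rows, r + window + 1))
                     for cc in range(max(0, c - window), min(cols, c + window + 1))]
            out[r][c] = Counter(neigh).most_common(1)[0][0]
    return out

# A 'road' region with one spurious 'roof' pixel from within-class spectral variation
labels = [
    ["road", "road", "road"],
    ["road", "roof", "road"],
    ["road", "road", "road"],
]
print(majority_filter(labels)[1][1])  # road
```

Applied after the pixel-based SVM, this kind of spatial post-processing is what removes salt-and-pepper noise from the classified map.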
On evaluating clustering procedures for use in classification
NASA Technical Reports Server (NTRS)
Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)
1979-01-01
The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
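The cluster purity measure discussed above is commonly computed as the fraction of training samples that belong to the majority class of their assigned cluster; a minimal sketch (with invented crop labels) follows.

```python
from collections import Counter

def cluster_purity(cluster_assignments, true_labels):
    """Purity: fraction of samples that belong to the majority class of
    their assigned cluster. 1.0 means every cluster is single-class."""
    clusters = {}
    for c, y in zip(cluster_assignments, true_labels):
        clusters.setdefault(c, []).append(y)
    majority = sum(Counter(ys).most_common(1)[0][1] for ys in clusters.values())
    return majority / len(true_labels)

assignments = [0, 0, 0, 1, 1, 1, 1, 1]
truth       = ["wheat", "wheat", "corn", "corn", "corn", "corn", "wheat", "corn"]
print(cluster_purity(assignments, truth))  # 0.75
```

Purity also supports cluster labeling: each cluster inherits its majority class, and low-purity or small clusters are the ones flagged for review before classification.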
Discrimination of almonds (Prunus dulcis) geographical origin by minerals and fatty acids profiling.
Amorello, Diana; Orecchio, Santino; Pace, Andrea; Barreca, Salvatore
2016-09-01
Twenty-one almond samples from three different geographical origins (Sicily, Spain, and California) were investigated by determining mineral and fatty acid compositions. The data were used to discriminate almond origin chemometrically by linear discriminant analysis. With respect to previous PCA profiling studies, this work provides a simpler analytical protocol for identifying the geographical origin of almonds. Classification using mineral content data alone was correct for 77% of the samples, while using fatty acid profiles the percentage of samples correctly classified reached 82%. Coupling mineral contents with fatty acid profiles led to increased classification efficiency, with 87% of samples correctly classified.
Assessing herbivore foraging behavior with GPS collars in a semiarid grassland.
Augustine, David J; Derner, Justin D
2013-03-15
Advances in global positioning system (GPS) technology have dramatically enhanced the ability to track and study distributions of free-ranging livestock. Understanding the factors controlling the distribution of free-ranging livestock requires the ability to assess when and where they are foraging. For four years (2008-2011), we periodically collected GPS and activity sensor data together with direct observations of collared cattle grazing semiarid rangeland in eastern Colorado. From these data, we developed classification tree models that allowed us to discriminate between grazing and non-grazing activities. We evaluated: (1) which activity sensor measurements from the GPS collars were most valuable in predicting cattle foraging behavior, (2) the accuracy of binary (grazing, non-grazing) activity models vs. models with multiple activity categories (grazing, resting, traveling, mixed), and (3) the accuracy of models that are robust across years vs. models specific to a given year. A binary classification tree correctly removed 86.5% of the non-grazing locations while correctly retaining 87.8% of the locations where the animal was grazing, for an overall misclassification rate of 12.9%. A classification tree that separated activity into four different categories yielded a greater misclassification rate of 16.0%. Distance travelled in a 5-minute interval and the proportion of the interval with the sensor indicating a head-down position were the two most important variables predicting grazing activity. Fitting annual models of cattle foraging activity did not improve model accuracy compared to a single model based on all four years combined. This suggests that increased sample size was more valuable than accounting for interannual variation in foraging behavior associated with variation in forage production.
Our models differ from previous assessments in semiarid rangeland of Israel and mesic pastures in the United States in terms of the value of different activity sensor measurements for identifying grazing activity, suggesting that the use of GPS collars to classify cattle grazing behavior will require calibrations specific to the environment and vegetation being studied.
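The binary rule above can be sketched as a one-split-per-variable decision rule on the two most informative variables. The thresholds below are hypothetical; the paper derives its splits from the data.

```python
def classify_activity(distance_m, head_down_frac,
                      dist_max=50.0, head_down_min=0.5):
    """Binary grazing/non-grazing rule in the spirit of a classification
    tree over the two best predictors: distance travelled in a 5-minute
    interval and the proportion of the interval with the head down.
    The threshold values here are illustrative, not from the paper."""
    if head_down_frac >= head_down_min and distance_m <= dist_max:
        return "grazing"
    return "non-grazing"

# (distance travelled in m, head-down fraction) per 5-minute interval
observations = [(12.0, 0.8), (340.0, 0.1), (25.0, 0.9), (5.0, 0.05)]
print([classify_activity(d, h) for d, h in observations])
```

A fitted tree would choose the split variables, thresholds, and their order automatically; the sketch only shows the shape of the resulting rule.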
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy combining an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze surface-enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction with principal component analysis (PCA) and DPLS classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information in the differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
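The iterative baseline-fitting idea can be sketched as follows, using a low-order polynomial as a simplified stand-in for the cubic spline (a spline fit would replace `np.polyfit` in the ICSF method proper). After each fit, points above the fitted curve (the peaks) are clipped down to it, so successive iterations track the baseline rather than the peaks.

```python
import numpy as np

def iterative_baseline(y, degree=3, n_iter=20):
    """Iteratively fit a low-order polynomial baseline: after each fit,
    points above the fit are clipped down to the fitted value so the
    next iteration follows the baseline instead of the peaks. A
    simplified stand-in for iterative cubic spline fitting (ICSF)."""
    x = np.arange(len(y), dtype=float)
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        baseline = np.polyval(coeffs, x)
        work = np.minimum(work, baseline)
    return baseline

x = np.linspace(0, 10, 200)
# sloping background plus one narrow Raman-like peak at x = 5
raw = 0.5 * x + 5.0 * np.exp(-((x - 5) ** 2) / 0.05)
corrected = raw - iterative_baseline(raw)
```

After correction the sloping background is largely removed while the peak height is preserved, which is what makes the subsequent DPLS classification on relative intensities feasible.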
NASA Astrophysics Data System (ADS)
Damayanti, A.; Werdiningsih, I.
2018-03-01
The brain is the organ that coordinates all the activities that occur in the body. Small abnormalities in the brain can affect bodily activity. A brain tumor is a mass formed as a result of abnormal, uncontrolled cell growth in the brain. MRI is a non-invasive medical test that helps doctors diagnose and treat medical conditions. Correct classification of brain tumors supports the right decision and the correct course of treatment. In this study, a classification process was performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma, or normal, using energy coefficients and ANFIS. The stages in classifying MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction reduces the features by using the energy coefficients of the approximation vector; with an energy coefficient of 100 per feature, the reduced feature is a 1 x 52 pixel vector. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition success rate for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
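The wavelet-energy feature reduction can be sketched with a Haar decomposition: each level halves the signal into approximation and detail coefficients, and the energy (sum of squares) of the approximation vector gives one compact feature per level. The abstract does not name the wavelet used, so the Haar filter and normalization here are illustrative.

```python
import numpy as np

def haar_level(signal):
    """One level of Haar wavelet decomposition: approximation and detail."""
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def energy_features(signal, levels=3):
    """Energy of the approximation vector at each decomposition level.

    Energy-based reduction keeps one number per subband instead of the
    full coefficient vector, mirroring the idea of shrinking wavelet
    approximation vectors via energy coefficients."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_level(approx)
        feats.append(float(np.sum(approx ** 2)))
    return feats

sig = np.sin(np.linspace(0, 4 * np.pi, 64))  # stand-in for one image row
print(energy_features(sig))
```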
Treatment outcomes of saddle nose correction.
Hyun, Sang Min; Jang, Yong Ju
2013-01-01
Many valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction. Our aims were to present the clinical characteristics and treatment outcomes of saddle nose deformity and to propose a modified classification system to better characterize the variety of saddle nose deformities. The retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification. Aesthetic outcomes were classified as excellent, good, fair, or poor. Patients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilage was used in 40 patients (44%), and homologous costal cartilage was used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 good, 18 fair, and 2 poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty owing to recurrence of saddling, wound infection, or warping of the costal cartilage used for dorsal augmentation. We introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging.
The reliability and validity of the Saliba Postural Classification System
Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos
2016-01-01
Objectives To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter- and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
Peake, Barrie M; Tong, Alfred Y C; Wells, William J; Harraway, John A; Niven, Brian E; Weege, Butch; LaFollette, Douglas J
2015-06-01
The trace metal content of roots of samples of the American ginseng natural herbal plant species (Panax quinquefolius) was investigated as a means of differentiating between this species grown on Wisconsin and New Zealand farms, and from Canadian and Chinese sources. ICP-MS measurements were undertaken by ashing samples of the roots and then digestion with conc. HNO3 and H2O2. There was considerable variation in the concentrations of 28 detectable elements along the length of a root, between different roots, between different farms/sources and between different countries. Statistical processing of the log-transformed concentration data was undertaken using principal component analysis (PCA) and discriminant function analysis (DFA). Although PCA showed some differentiation between samples, a much clearer discrimination of the Panax quinquefolius species of ginseng from the four countries was observed using DFA. 88% of the variation between countries could be accounted for by only using discriminant function 1 while 80% of the remaining 12% of the variation between countries is accounted for by discriminant function 2. The Fisher Classification Functions classify 98% of the 87 samples to the correct country of origin with 97% of the cross-validated cases correctly classified. The predictive ability of this DFA model was further tested by constructing 100 discriminant models each using a random selection of the data for two thirds of the 87 sampled ginseng root tops, and then using the resulting classification functions to determine correctly the country of origin of the remaining third of the cases. The mean success rate of the 100 classifications was 92%. These results suggest that measurement and statistical analysis of just the trace metal content of the roots of Panax quinquefolius promises to be an excellent predictor of the country of origin of this ginseng species. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
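The repeated-split validation scheme described above (100 models, each trained on a random two-thirds of the samples and scored on the held-out third) can be sketched as follows. A simple nearest-centroid classifier stands in for the discriminant function analysis, and the two-feature "trace metal" profiles are synthetic.

```python
import random

def nearest_centroid_predict(train, test_point):
    """train: dict mapping country -> list of feature vectors."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {c: [sum(col) / len(pts) for col in zip(*pts)]
                 for c, pts in train.items()}
    return min(centroids, key=lambda c: dist2(centroids[c], test_point))

def repeated_split_validation(samples, n_models=100, train_frac=2 / 3, seed=0):
    """Build n_models classifiers, each on a random two-thirds of the
    data, and score the held-out third; return the mean success rate."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_models):
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train_set, test_set = shuffled[:cut], shuffled[cut:]
        train = {}
        for label, vec in train_set:
            train.setdefault(label, []).append(vec)
        correct = sum(nearest_centroid_predict(train, vec) == label
                      for label, vec in test_set)
        rates.append(correct / len(test_set))
    return sum(rates) / len(rates)

# Synthetic, well-separated two-element profiles for two countries
data = ([("NZ", (1.0 + 0.1 * i, 5.0)) for i in range(12)] +
        [("US", (4.0 + 0.1 * i, 1.0)) for i in range(12)])
print(repeated_split_validation(data))
```

With real, overlapping elemental profiles the mean rate falls below 1.0, which is what the study's 92% figure reflects.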
Texture operator for snow particle classification into snowflake and graupel
NASA Astrophysics Data System (ADS)
Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro
2012-11-01
In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type. Therefore, it is necessary to identify the type of falling snow. Consequently, this research addresses the problem of automatically classifying snow particles into snowflake and graupel (the most common types in the study region). With correctly classified precipitation events, it is believed that the related parameters can be estimated accurately. The automatic classification system presented here describes the images with texture operators. Some are well known from the literature: first-order features, co-occurrence matrix, grey-tone difference matrix, run length matrix, and local binary pattern; in addition, a novel approach to designing simple local statistic operators is introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the intermediate structures created by many of the mentioned algorithms is also suggested. For classification, the k-nearest neighbour classifier was applied. The results showed that correct classification accuracy above 80% is achievable by most of the techniques. The best result, 86.06%, was achieved for an operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not considerably improve the classification results. In the best case, the correct classification efficiency was 87.89% for a pair of texture operators created from the local binary pattern and the structure built at an intermediate stage of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant.
Therefore, principal component analysis was applied in order to remove the unnecessary information and additionally reduce the length of the feature vectors. Correct classification efficiency can be improved to up to 100% for the following methods: min-max histogram, the texture operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation, the texture operator built from the structure obtained at an intermediate stage of the grey-tone difference matrix creation, and the histogram-based texture operator, when the feature vector stores 99% of the initial information.
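The "keep 99% of the information" step corresponds to projecting onto the principal components that retain 99% of the variance, which can be sketched via the SVD. The synthetic feature matrix below is hypothetical (built to have low intrinsic rank, like redundant texture features).

```python
import numpy as np

def pca_reduce(X, keep=0.99):
    """Project feature vectors onto the principal components that
    retain `keep` of the total variance, shortening redundant feature
    vectors while preserving almost all of the information."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, keep) + 1)  # smallest k reaching `keep`
    return Xc @ Vt[:k].T, k

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 2))
# 10-dimensional features that are all linear combinations of 2 factors,
# i.e. highly redundant, like overlapping texture operators
X = np.hstack([base, base @ rng.normal(size=(2, 8))])
reduced, k = pca_reduce(X)
print(k)  # the data has rank ~2, so at most 2 components are kept
```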
Deep-learning-based classification of FDG-PET data for Alzheimer's disease categories
NASA Astrophysics Data System (ADS)
Singh, Shibani; Srivastava, Anant; Mi, Liang; Caselli, Richard J.; Chen, Kewei; Goradia, Dhruman; Reiman, Eric M.; Wang, Yalin
2017-11-01
Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic Alzheimer's disease (AD) patients. PET scans provide functional information that is unique and unavailable from other types of imaging. However, the computational efficacy of FDG-PET data alone for the classification of the various AD diagnostic categories has not been well studied. This motivates us to correctly discriminate the various AD diagnostic categories using FDG-PET data. Deep learning has improved state-of-the-art classification accuracies in the areas of speech, signal, image, video, and text mining and recognition. We propose novel methods that involve probabilistic principal component analysis on max-pooled and mean-pooled data for dimensionality reduction, and a multilayer feed-forward neural network that performs binary classification. Our experimental dataset consists of baseline data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 186 cognitively unimpaired (CU) subjects, 336 mild cognitive impairment (MCI) subjects (158 late MCI and 178 early MCI), and 146 AD patients. We measured the F1-measure, precision, recall, and negative and positive predictive values with a 10-fold cross-validation scheme. Our results indicate that our designed classifiers achieve competitive results, with max pooling yielding better classification performance than mean-pooled features. Our deep-model-based research may advance FDG-PET analysis by demonstrating its potential as an effective imaging biomarker of AD.
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings at ultra-high spatial resolution is of major importance for various engineering and smart city applications. To this end, a model-based building detection technique has been developed that is able to extract and reconstruct buildings from UAV aerial imagery acquired by low-cost imaging sensors. In particular, the developed approach computes, through advanced structure from motion, bundle adjustment, and dense image matching, a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by important geometric distortions and a fish-eye effect. An unsupervised multi-region graph-cut segmentation and a rule-based classification are responsible for delivering the initial multi-class classification map. The DTM is then calculated based on an inpainting and mathematical morphology process. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and scene buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating detection rates at the object level of 88% regarding correctness and above 75% regarding detection completeness.
Texture classification using non-Euclidean Minkowski dilation
NASA Astrophysics Data System (ADS)
Florindo, Joao B.; Bruno, Odemir M.
2018-03-01
This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors on texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. In this way, we generalize the dilation for different values of p in the Lp metric (Euclidean is a particular case when p = 2) and obtain the descriptors from the cumulated distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons for example) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of rate of images correctly classified. The interesting results suggest the potential of these descriptors in this type of task, with a wide range of possible applications to real-world problems.
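The Lp-metric dilation can be sketched with a brute-force distance transform: the descriptor is the cumulative distribution of Lp distances from the texture points, i.e. how many cells fall inside the dilated set at each radius. This is a minimal 2-D illustration of the idea, not the paper's three-dimensional gray-level representation.

```python
import numpy as np

def lp_distance_transform(points, grid_shape, p=2.0):
    """Distance from every grid cell to the nearest point under the
    Lp (Minkowski) metric; p = 2 recovers the Euclidean case of the
    classical Bouligand-Minkowski descriptors, p = 1 the Manhattan
    (diamond-shaped dilation) case."""
    coords = np.indices(grid_shape).reshape(len(grid_shape), -1).T
    pts = np.asarray(points, dtype=float)
    diffs = np.abs(coords[:, None, :] - pts[None, :, :])
    dists = (diffs ** p).sum(axis=2) ** (1.0 / p)
    return dists.min(axis=1).reshape(grid_shape)

def dilation_descriptors(points, grid_shape, radii, p=2.0):
    """Descriptor = cumulative distribution of the distance transform:
    for each radius, how many cells lie inside the dilated set."""
    dt = lp_distance_transform(points, grid_shape, p)
    return [int((dt <= r).sum()) for r in radii]

pts = [(4, 4)]  # a single point dilated on a 9x9 grid
print(dilation_descriptors(pts, (9, 9), radii=[0, 1, 2], p=1.0))  # → [1, 5, 13]
```

Varying `p` changes the shape of the dilation "ball" (diamond, circle, square as p → 1, 2, ∞), which is exactly the generalization the descriptors exploit.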
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudo-validation sets proves to be a powerful tool for mitigating finite sample problems in moderate-dimensional PR tasks.
Aursand, Marit; Standal, Inger B; Praël, Angelika; McEvoy, Lesley; Irvine, Joe; Axelson, David E
2009-05-13
(13)C nuclear magnetic resonance (NMR) in combination with multivariate data analysis was used to (1) discriminate between farmed and wild Atlantic salmon ( Salmo salar L.), (2) discriminate between different geographical origins, and (3) verify the origin of market samples. Muscle lipids from 195 Atlantic salmon of known origin (wild and farmed salmon from Norway, Scotland, Canada, Iceland, Ireland, the Faroes, and Tasmania) in addition to market samples were analyzed by (13)C NMR spectroscopy and multivariate analysis. Both probabilistic neural networks (PNN) and support vector machines (SVM) provided excellent discrimination (98.5 and 100.0%, respectively) between wild and farmed salmon. Discrimination with respect to geographical origin was somewhat more difficult, with correct classification rates ranging from 82.2 to 99.3% by PNN and SVM, respectively. In the analysis of market samples, five fish labeled and purchased as wild salmon were classified as farmed salmon (indicating mislabeling), and there were also some discrepancies between the classification and the product declaration with regard to geographical origin.
Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib
2008-10-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and, second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium.
Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregularly shaped regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
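The piecewise calibration curve mentioned above (CT numbers to linear attenuation coefficients at 511 keV) is commonly implemented as a bilinear mapping: one slope below the water break point, a reduced slope above it. The coefficient values below are illustrative round numbers, not the calibration used in the paper.

```python
def hu_to_mu_511(hu):
    """Piecewise ('bilinear') conversion of a CT number (HU) to a linear
    attenuation coefficient (cm^-1) at 511 keV. Below water (0 HU) the
    value scales like a water/air mixture; above, bone-like material
    attenuates less per HU at 511 keV, hence the reduced slope. The
    slope and mu_water values here are illustrative assumptions."""
    mu_water = 0.096  # cm^-1 at 511 keV, approximate
    if hu <= 0:
        return mu_water * (1.0 + hu / 1000.0)
    return mu_water + hu * 4.64e-5  # reduced, bone-like slope (assumed)

print(round(hu_to_mu_511(-1000), 4))  # air: 0.0
print(round(hu_to_mu_511(0), 4))      # water: 0.096
```

Substituting contrast-medium CT numbers with effective bone values before this conversion (the SCC step) is what prevents the μ-map overestimation the abstract quantifies.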
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Statistical models to predict type 2 diabetes remission after bariatric surgery.
Ramos-Levi, Ana M; Matia, Pilar; Cabrerizo, Lucio; Barabash, Ana; Sanchez-Pernaute, Andres; Calle-Pascual, Alfonso L; Torres, Antonio J; Rubio, Miguel A
2014-09-01
Type 2 diabetes (T2D) remission may be achieved after bariatric surgery (BS), but rates vary according to patients' baseline characteristics. The present study evaluates the relevance of several preoperative factors and develops statistical models to predict T2D remission 1 year after BS. We retrospectively studied 141 patients (57.4% women), with a preoperative diagnosis of T2D, who underwent BS in a single center (2006-2011). Anthropometric and glucose metabolism parameters before surgery and at 1-year follow-up were recorded. Remission of T2D was defined according to consensus criteria: HbA1c <6%, fasting glucose (FG) <100 mg/dL, absence of pharmacologic treatment. The influence of several preoperative factors was explored and different statistical models to predict T2D remission were elaborated using logistic regression analysis. Three preoperative characteristics considered individually were identified as the most powerful predictors of T2D remission: C-peptide (R2 = 0.249; odds ratio [OR] 1.652, 95% confidence interval [CI] 1.181-2.309; P = 0.003), T2D duration (R2 = 0.197; OR 0.869, 95% CI 0.808-0.935; P < 0.001), and previous insulin therapy (R2 = 0.165; OR 4.670, 95% CI 2.257-9.665; P < 0.001). High C-peptide levels, a shorter duration of T2D, and the absence of insulin therapy favored remission. Different multivariate logistic regression models were designed. When considering sex, T2D duration, and insulin treatment, remission was correctly predicted in 72.4% of cases. The model that included age, FG and C-peptide levels resulted in 83.7% correct classifications. When sex, FG, C-peptide, insulin treatment, and percentage weight loss were considered, correct classification of T2D remission was achieved in 95.9% of cases. Preoperative characteristics determine T2D remission rates after BS to different extents. The use of statistical models may help clinicians reliably predict T2D remission rates after BS. 
© 2014 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
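A logistic model of the kind described above can be sketched as follows. The coefficient signs follow the paper qualitatively (shorter T2D duration, no insulin therapy, and higher C-peptide favor remission; e.g. the published OR of 0.869 per year of duration corresponds to a log-odds coefficient of ln 0.869 ≈ -0.14), but the specific values below are purely illustrative, not the fitted model.

```python
import math

def remission_probability(t2d_years, insulin, c_peptide_ng_ml,
                          b0=1.0, b_dur=-0.14, b_ins=-1.54, b_cpep=0.50):
    """Logistic prediction P(remission) = 1 / (1 + exp(-z)) with
    z = b0 + b_dur*duration + b_ins*insulin + b_cpep*C-peptide.
    Coefficients are hypothetical placeholders for the fitted values."""
    z = b0 + b_dur * t2d_years + b_ins * insulin + b_cpep * c_peptide_ng_ml
    return 1.0 / (1.0 + math.exp(-z))

# short disease duration, no insulin, preserved beta-cell function
short_duration = remission_probability(t2d_years=2, insulin=0, c_peptide_ng_ml=4.0)
# long duration, on insulin, low C-peptide
long_duration = remission_probability(t2d_years=15, insulin=1, c_peptide_ng_ml=1.0)
print(short_duration > long_duration)  # True
```

Classifying patients by thresholding this probability at 0.5 is what yields the "percent correctly classified" figures the abstract reports.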
Internal fixation of pilon fractures of the distal radius.
Trumble, T. E.; Schmitt, S. R.; Vedder, N. B.
1993-01-01
When closed manipulation fails to restore articular congruity in comminuted, displaced fractures of the distal radius, open reduction and internal fixation is required. Results of surgical stabilization and articular reconstruction of these injuries are reviewed in this retrospective study of 49 patients with 52 displaced, intra-articular distal radius fractures. Forty-three patients (87%) with a mean age of 37 years (range of 17 to 79 years) were available for evaluation. The mean follow-up time was 38 months (range 22-69 months). When rated according to the Association for the Study of Internal Fixation (ASIF), 19 were type C2 and 21 were type C3. We devised an Injury Score System based on the initial injury radiographs to classify severely comminuted intra-articular fractures and to identify those associated with carpal injury (3 patients). Post-operative fracture alignment, articular congruity, and radial length were significantly improved following surgery (p < .01). Grip strength averaged 69% +/- 22% of the contralateral side, and the range of motion averaged 75% +/- 18% of the contralateral side post-operatively. A combined outcome rating system that included grip strength, range of motion, and pain relief averaged 76% +/- 19% of the contralateral side. There was a statistically significant decrease in the combined rating with more severe fracture patterns as defined by the ASIF system (p < .01), Melone classification (p < .03), and the Injury Score System (p < .001). The Injury Score System presented here, and in particular the number of fracture fragments, correlated most closely with outcome of all the classification systems studied. Operative treatment of these distal radius fractures with reconstruction of the articular congruity and correction of the articular surface alignment with internal fixation and/or external fixation, can significantly improve the radiographic alignment and functional outcome.
Furthermore, the degree to which articular stepoff, gap between fragments, and radial shortening are improved by surgery is strongly correlated with improved outcome, even when the results are corrected for severity of initial injury, whereas correction of radial tilt or dorsal tilt did not correlate with improved outcome. PMID:8209554
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
Using NASA Techniques to Atmospherically Correct AWiFS Data for Carbon Sequestration Studies
NASA Technical Reports Server (NTRS)
Holekamp, Kara L.
2007-01-01
Carbon dioxide is a greenhouse gas emitted in a number of ways, including the burning of fossil fuels and the conversion of forest to agriculture. Research has begun to quantify the ability of vegetative land cover and oceans to absorb and store carbon dioxide. The USDA (U.S. Department of Agriculture) Forest Service is currently evaluating a DSS (decision support system) developed by researchers at the NASA Ames Research Center called CASA-CQUEST (Carnegie-Ames-Stanford Approach-Carbon Query and Evaluation Support Tools). CASA-CQUEST is capable of estimating levels of carbon sequestration based on different land cover types and of predicting the effects of land use change on atmospheric carbon amounts to assist land use management decisions. The CASA-CQUEST DSS currently uses land cover data acquired from MODIS (the Moderate Resolution Imaging Spectroradiometer), and the CASA-CQUEST project team is involved in several projects that use moderate-resolution land cover data derived from Landsat surface reflectance. Landsat offers higher spatial resolution than MODIS, allowing for increased ability to detect land use changes and forest disturbance. However, because of the rate at which changes occur and the fact that disturbances can be hidden by regrowth, updated land cover classifications may be required before the launch of the Landsat Data Continuity Mission, and consistent classifications will be needed after that time. This candidate solution investigates the potential of using NASA atmospheric correction techniques to produce science-quality surface reflectance data from the Indian Remote Sensing Advanced Wide-Field Sensor on the RESOURCESAT-1 mission to produce land cover classification maps for the CASA-CQUEST DSS.
NASA Astrophysics Data System (ADS)
Schirdewan, A.; Gapelyuk, A.; Fischer, R.; Koch, L.; Schütt, H.; Zacharzowsky, U.; Dietz, R.; Thierfelder, L.; Wessel, N.
2007-03-01
Hypertrophic cardiomyopathy (HCM) is a common primary inherited cardiac muscle disorder, defined clinically by the presence of unexplained left ventricular hypertrophy. The detection of affected patients remains challenging. Genetic testing is limited because an underlying mutation can be found in only 50%-60% of all HCM diagnoses. Furthermore, the disease has a varied clinical course and outcome, with many patients having little or no discernible cardiovascular symptoms, whereas others develop profound exercise limitation and recurrent arrhythmias or sudden cardiac death. Therefore, prospective screening of HCM family members is strongly recommended. According to the current guidelines this includes serial echocardiographic and electrocardiographic examinations. In this study we investigated the capability of cardiac magnetic field mapping (CMFM) to detect patients suffering from HCM. We introduce for the first time a combined diagnostic approach based on map topology quantification using Kullback-Leibler (KL) entropy and regional magnetic field strength parameters. The cardiac magnetic field was recorded over the anterior chest wall using a multichannel LT-SQUID system. CMFM was calculated based on a regular 36-point grid. We analyzed CMFM in patients with a confirmed diagnosis of HCM (HCM, n = 33, 43.8±13 years, 13 women, 20 men), a control group of healthy subjects (NORMAL, n = 57, 39.6±8.9 years, 22 women and 35 men), and patients with confirmed cardiac hypertrophy due to arterial hypertension (HYP, n = 42, 49.7±7.9 years, 15 women and 27 men). A subgroup analysis was performed between HCM patients suffering from the obstructive (HOCM, n = 19) and nonobstructive (HNCM, n = 14) forms of the disease. KL entropy based map topology quantification alone identified HCM patients with a sensitivity of 78.8% and specificity of 86.9% (overall classification rate 84.8%).
The combination of the KL parameters with a regional field strength parameter improved the overall classification rate to 87.9% (sensitivity: 84.8%, specificity: 88.9%, area under ROC curve: 0.94). KL measures applied to discriminate between HOCM and HNCM patients showed a correct classification of 78.8%. The combination of one KL and one regional parameter again improved the overall classification rate to 97%. A preliminary prospective analysis in two HCM families showed the feasibility of this diagnostic approach with a correct diagnosis of all 22 screened family members (1 HOCM, 4 HNCM, 17 normal). In conclusion, Cardiac Magnetic Field Mapping including KL entropy based topology quantifications is a suitable tool for HCM screening.
Mirzarezaee, Mitra; Araabi, Babak N; Sadeghi, Mehdi
2010-12-19
It has been understood that biological networks have modular organizations, which are the source of their observed complexity. Analysis of networks and motifs has shown that two types of hubs, party hubs and date hubs, are responsible for this complexity. Party hubs are local coordinators because of their high co-expression with their partners, whereas date hubs display low co-expression and are assumed to be global connectors. However, there is no mutual agreement on these concepts in the related literature, with different studies reporting their results on different data sets. We investigated whether there is a relation between the biological features of Saccharomyces cerevisiae's proteins and their roles as non-hubs, intermediately connected, party hubs, and date hubs. We propose a classifier that separates these four classes. We extracted different biological characteristics, including amino acid sequences, domain contents, repeated domains, functional categories, biological processes, cellular compartments, disordered regions, and position specific scoring matrix, from various sources. Several classifiers were examined and the best feature-sets based on average correct classification rate and correlation coefficients of the results were selected. We show that fusion of five feature-sets, including domains, Position Specific Scoring Matrix-400, cellular compartments level one, and composition pairs with two and one gaps, provides the best discrimination with an average correct classification rate of 77%. We study a variety of known biological feature-sets of the proteins and show that there is a relation between domains, Position Specific Scoring Matrix-400, cellular compartments level one, composition pairs with two and one gaps of Saccharomyces cerevisiae's proteins, and their roles in the protein interaction network as non-hubs, intermediately connected, party hubs and date hubs.
This study also confirms the possibility of predicting non-hubs, party hubs and date hubs based on their biological features with acceptable accuracy. If such a hypothesis is correct for other species as well, similar methods can be applied to predict the roles of proteins in those species.
Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results
NASA Astrophysics Data System (ADS)
Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc
2013-12-01
Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-Of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour, and Scene Classification maps; and quality indicators, including cloud and snow probabilities. The Level 2A product formatting performed by the processor follows the specification of the Level 1C User Product.
On Biometrics With Eye Movements.
Zhang, Youming; Juhola, Martti
2017-09-01
Eye movements are a relatively novel data source for biometric identification. As video cameras used for eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification, seen as a classification task with multiple classes, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement signal measurements from 109 young subjects. To test the measured data, we use a biometric identification procedure based on the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply another eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at their best.
Classification of smoke tainted wines using mid-infrared spectroscopy and chemometrics.
Fudge, Anthea L; Wilkinson, Kerry L; Ristic, Renata; Cozzolino, Daniel
2012-01-11
In this study, the suitability of mid-infrared (MIR) spectroscopy, combined with principal component analysis (PCA) and linear discriminant analysis (LDA), was evaluated as a rapid analytical technique to identify smoke tainted wines. Control (i.e., unsmoked) and smoke-affected wines (260 in total) from experimental and commercial sources were analyzed by MIR spectroscopy and chemometrics. The concentrations of guaiacol and 4-methylguaiacol were also determined using gas chromatography-mass spectrometry (GC-MS), as markers of smoke taint. LDA models correctly classified 61% of control wines and 70% of smoke-affected wines. Classification rates were found to be influenced by the extent of smoke taint (based on GC-MS and informal sensory assessment), as well as qualitative differences in wine composition due to grape variety and oak maturation. Overall, the potential application of MIR spectroscopy combined with chemometrics as a rapid analytical technique for screening smoke-affected wines was demonstrated.
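The PCA-then-LDA pipeline described above can be sketched in a few lines. The spectra, labels, and injected class difference below are synthetic stand-ins, not the wine data from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for 260 MIR spectra with 800 wavenumber points.
rng = np.random.default_rng(0)
X = rng.normal(size=(260, 800))
y = rng.integers(0, 2, size=260)   # 0 = control, 1 = smoke-affected
X[y == 1, 100:120] += 2.0          # injected spectral difference

# Reduce dimensionality with PCA, then classify the scores with LDA.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = float(cross_val_score(model, X, y, cv=5).mean())
print(round(acc, 2))
```

Cross-validated accuracy here reflects only the synthetic separation injected above; with real spectra, preprocessing (baseline, normalization) would precede the pipeline.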
Kelly, J Daniel; Petisco, Cristina; Downey, Gerard
2006-08-23
A collection of authentic artisanal Irish honeys (n = 580) and certain of these honeys adulterated by fully inverted beet syrup (n = 280), high-fructose corn syrup (n = 160), partial invert cane syrup (n = 120), dextrose syrup (n = 160), and beet sucrose (n = 120) was assembled. All samples were adjusted to 70 degrees Brix and scanned in the mid-infrared region (800-4000 cm(-1)) using an attenuated total reflectance sampling accessory. By use of soft independent modeling of class analogy (SIMCA) and partial least-squares (PLS) classification, authentic honey and honey adulterated by beet sucrose, dextrose syrup, and partial invert cane syrup could be identified with correct classification rates of 96.2%, 97.5%, 95.8%, and 91.7%, respectively. This combination of spectroscopic technique and chemometric methods was not able to unambiguously detect adulteration by high-fructose corn syrup or fully inverted beet syrup.
Sexing of chicken eggs by fluorescence and Raman spectroscopy through the shell membrane
Preusse, Grit; Schnabel, Christian; Bartels, Thomas; Cramer, Kerstin; Krautwald-Junghanns, Maria-Elisabeth; Koch, Edmund; Steiner, Gerald
2018-01-01
In order to provide an alternative to day-old chick culling in the layer hatcheries, a noninvasive method for egg sexing is required at an early stage of incubation, before the onset of embryo sensitivity. Fluorescence and Raman spectroscopy of blood offers the potential for precise and contactless in ovo sex determination of domestic chicken (Gallus gallus f. dom.) eggs as early as the fourth incubation day. However, this kind of optical spectroscopy requires a window in the egg shell, is thus invasive to the embryo, and leads to decreased hatching rates. Here, we show that near infrared Raman and fluorescence spectroscopy can be performed on perfused extraembryonic vessels while leaving the inner egg shell membrane intact. Sparing the shell membrane makes the measurement minimally invasive, so that the sexing procedure does not affect hatching rates. We analyze the effect of the membrane above the vessels on the fluorescence signal intensity and on the Raman spectrum of blood, and propose a correction method to compensate for it. After compensation, we attain a correct sexing rate above 90% by applying supervised classification of spectra. This approach therefore offers a strong basis for practical deployment in hatcheries. PMID:29474445
Brandl, Caroline; Zimmermann, Martina E; Günther, Felix; Barth, Teresa; Olden, Matthias; Schelter, Sabine C; Kronenberg, Florian; Loss, Julika; Küchenhoff, Helmut; Helbig, Horst; Weber, Bernhard H F; Stark, Klaus J; Heid, Iris M
2018-06-06
While age-related macular degeneration (AMD) poses an important personal and public health burden, comparing epidemiological studies on AMD is hampered by differing approaches to classify AMD. In our AugUR study survey, recruiting residents from in/around Regensburg, Germany, aged 70+, we analyzed the AMD status derived from color fundus images applying two different classification systems. Based on 1,040 participants with gradable fundus images for at least one eye, we show that including individuals with only one gradable eye (n = 155) underestimates AMD prevalence and we provide a correction procedure. Bias-corrected and standardized to the Bavarian population, late AMD prevalence is 7.3% (95% confidence interval = [5.4; 9.4]). We find substantially different prevalence estimates for "early/intermediate AMD" depending on the classification system: 45.3% (95%-CI = [41.8; 48.7]) applying the Clinical Classification (early/intermediate AMD) or 17.1% (95%-CI = [14.6; 19.7]) applying the Three Continent AMD Consortium Severity Scale (mild/moderate/severe early AMD). We thus provide a first effort to grade AMD in a complete study with different classification systems, a first approach for bias-correction from individuals with only one gradable eye, and the first AMD prevalence estimates from a German elderly population. Our results underscore substantial differences for early/intermediate AMD prevalence estimates between classification systems and an urgent need for harmonization.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for detecting pollen-bearing honey bees in surveillance video obtained at the entrance of a hive. The proposed system can be used as part of a more complex system for tracking and counting honey bees, with remote pollination monitoring as the final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation, and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly bearing in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, and that transferring the obtained parameters of the pollination process is much easier.
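A nearest mean classifier over a two-feature descriptor, as used above, reduces to computing class centroids and assigning each sample to the nearest one. The centroids and spreads in this sketch are illustrative assumptions, not measurements from the bee data set:

```python
import numpy as np

def fit_nearest_mean(X, y):
    """Compute one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_nearest_mean(X, classes, means):
    """Assign each row of X to the class with the nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
# Synthetic (color variance, eccentricity) descriptors for two classes.
X0 = rng.normal([0.2, 0.5], 0.05, size=(50, 2))   # no pollen load
X1 = rng.normal([0.6, 0.7], 0.05, size=(50, 2))   # pollen-bearing
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

classes, means = fit_nearest_mean(X, y)
acc = float((predict_nearest_mean(X, classes, means) == y).mean())
print(acc)
```

The classifier stores only one mean vector per class, which is why it suits embedded hardware at the hive.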
Zhang, Lu; Tang, Meng-Yao; Jin, Rong; Zhang, Ying; Shi, Yao-Ming; Sun, Bao-Shan; Zhang, Yu-Guang
2015-07-01
One of the earliest signs of aging appears in the nasolabial fold, a special anatomical region that requires many factors for comprehensive assessment. Hence, it is inadequate to rely on a single index for the classification of nasolabial folds. Through clinical observation, we have observed that traditional filling treatments provide little improvement for some patients, which prompted us to seek a more specific and scientific classification standard and assessment system. A total of 900 patients who sought facial rejuvenation treatment at Shanghai 9th People's Hospital were enrolled in this study. We observed the nasolabial fold traits of different age groups and in different states, and the results were compared with the Wrinkle Severity Rating Scale (WSRS). We summarized the data, presented a classification scheme, and proposed a selection of treatment options. Consideration of the anatomical and histological features of nasolabial folds allowed us to divide nasolabial folds into five types, namely the skin type, fat pad type, muscular type, bone retrusion type, and hybrid type. Because different types of nasolabial folds require different treatments, it is crucial to accurately assess and correctly classify the conditions. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Valeriano, D. D.
1981-01-01
An evaluation of the multispectral image analyzer (Image 1-100 system) using automatic classification is presented. The automatic classification was carried out using the maximum likelihood (MAXVER) classification system. The following classes were established: urban area, bare soil, sugar cane, citrus culture (oranges), pastures, and reforestation. The classification matrix of the test sites indicates that the percentage of correct classification varied between 63% and 100%.
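Per-class correct classification percentages of the kind reported above come straight off the classification (confusion) matrix. The numbers below are illustrative, not the study's actual matrix:

```python
import numpy as np

# Rows: true class, columns: predicted class (illustrative counts).
conf = np.array([
    [90,  5,  5],
    [10, 80, 10],
    [ 0, 37, 63],
])

# Per-class correct classification rate: diagonal over row sums.
per_class = conf.diagonal() / conf.sum(axis=1)
# Overall rate: total correct over total samples.
overall = float(conf.diagonal().sum() / conf.sum())
print(per_class.round(2), round(overall, 3))
```

Reporting a range such as "63% to 100%" corresponds to the minimum and maximum of the per-class vector.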
Human Factors Engineering. Student Supplement,
1981-08-01
TASK TAXONOMY: A classification scheme for the different levels of activities in a system, i.e., job - task - sub-task, etc. TASK ANALYSIS: ...with the classification of learning objectives by learning category so as to identify Phase III guidelines necessary for optimum learning... correct... the sequencing of all dependent tasks... the classification of learning objectives by learning category and the identification of...
Methods for data classification
Garrity, George [Okemos, MI; Lilburn, Timothy G [Front Royal, VA
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
NASA Technical Reports Server (NTRS)
Quattrochi, D. A.
1984-01-01
An initial analysis of LANDSAT 4 Thematic Mapper (TM) data for the discrimination of agricultural, forested wetland, and urban land covers is conducted using a scene of data collected over Arkansas and Tennessee. A classification of agricultural lands derived from multitemporal LANDSAT Multispectral Scanner (MSS) data is compared with a classification of TM data for the same area. Results from this comparative analysis show that the multitemporal MSS classification produced an overall accuracy of 80.91%, while the TM classification yielded an overall classification accuracy of 97.06%.
Power System Transient Stability Based on Data Mining Theory
NASA Astrophysics Data System (ADS)
Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde
2018-01-01
In order to study the stability of power systems, a transient stability assessment method based on data mining theory is designed. By introducing association rule analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method is used to perform transient stability assessment. A transient stability index is used to identify the samples that cannot be correctly classified by association classification. Then, according to the critical stability of each such sample, the time-domain simulation method is used to determine its state, so as to ensure the accuracy of the final results. The results show that this stability assessment system can improve computation speed while keeping the analysis results completely correct, and that the improved algorithm can uncover the inherent relation between changes in the power system operation mode and changes in the degree of transient stability.
Höller, Yvonne; Bergmann, Jürgen; Thomschewski, Aljoscha; Kronbichler, Martin; Höller, Peter; Crone, Julia S.; Schmid, Elisabeth V.; Butz, Kevin; Nardone, Raffaele; Trinka, Eugen
2013-01-01
Current research aims at identifying voluntary brain activation in patients who are behaviorally diagnosed as unconscious but are able to perform commands by modulating their brain activity patterns. This involves machine learning techniques and feature extraction methods such as those applied in brain-computer interfaces. In this study, we try to answer the question of whether features and classification methods that show advantages in healthy participants are also accurate when applied to data from patients with disorders of consciousness. A sample of healthy participants (N = 22), patients in a minimally conscious state (MCS; N = 5), and patients with unresponsive wakefulness syndrome (UWS; N = 9) was examined with a motor imagery task which involved imagery of moving both hands and an instruction to hold both hands firm. We extracted a set of 20 features from the electroencephalogram and used linear discriminant analysis, k-nearest neighbor classification, and support vector machines (SVM) as classification methods. In healthy participants, the best classification accuracies were seen with coherences (mean = .79; range = .53-.94) and power spectra (mean = .69; range = .40-.85). The coherence patterns in healthy participants did not match the expectation of a centrally modulated μ-rhythm. Instead, coherence involved mainly frontal regions. In healthy participants, the best classification tool was SVM. Five patients had at least one feature-classifier outcome with p < 0.05 (none of which were coherence or power spectra), though none remained significant after false-discovery rate correction for multiple comparisons. The present work suggests the use of coherences in patients with disorders of consciousness because they show high reliability among healthy subjects and patient groups. However, feature extraction and classification is a challenging task in unresponsive patients because there is no ground truth to validate the results. PMID:24282545
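The false-discovery-rate correction for multiple comparisons referred to above is commonly the Benjamini-Hochberg step-up procedure; a minimal sketch with made-up p-values:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of rejected null hypotheses at FDR level alpha.
    """
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    # Step-up thresholds: alpha * i / m for the i-th smallest p-value.
    thresh = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresh
    # Reject the k smallest p-values, where k is the largest passing rank.
    k = int(np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3]
n_rejected = int(fdr_bh(pvals).sum())
print(n_rejected)
```

With these seven p-values, only the two smallest survive correction, illustrating how nominally significant results (p < 0.05) can vanish after FDR control.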
NASA Astrophysics Data System (ADS)
Seong, Cho Kyu; Ho, Chung Duk; Pyo, Hong Deok; Kyeong Jin, Park
2016-04-01
This study investigated the ability of pre-service science teachers to classify rocks with the naked eye according to their level of understanding about rocks. We developed a questionnaire concerning misconceptions about minerals and rocks. The participants were 132 pre-service science teachers. Data were analyzed using the Rasch model. Participants were divided into a master group and a novice group according to their understanding level. Seventeen rock samples (6 igneous, 5 sedimentary, and 6 metamorphic rocks) were presented to the pre-service science teachers to examine their classification ability, and they classified the rocks according to the criteria we provided. The study revealed three major findings. First, the pre-service science teachers mainly classified rocks according to texture, color, and grain size. Second, while they relatively easily classified igneous rocks, participants were confused when distinguishing sedimentary and metamorphic rocks from one another using the same classification criteria. Third, the understanding level of rocks showed a statistically significant correlation with classification ability in terms of the formation mechanism of rocks, whereas no statistically significant relationship was found with determination of the correct names of rocks. However, this study found a statistically significant relationship between classification ability with regard to the formation mechanism of rocks and determination of the correct names of rocks. Keywords: pre-service science teacher, understanding level, rock classification ability, formation mechanism, criterion of classification
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. A selection procedure based on the different statistical analyses results in a 33.3% cost reduction, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reducing costs allows the limited resources to be fully invested in the high-potential athletes.
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
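The best-fit quadratic surface idea above can be sketched in a few lines. This is a schematic analogue of the approach (the paper itself provides Matlab code), applied to a purely synthetic slice whose only content is a quadratic cupping artefact:

```python
import numpy as np

# Build a synthetic 64 x 64 "reconstructed slice" containing only a
# quadratic cupping-like beam-hardening (BH) trend.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
x = (xx / n - 0.5).ravel()
y = (yy / n - 0.5).ravel()
bh_artifact = 3.0 * (x**2 + y**2)
slice_data = bh_artifact.copy()          # true image is zero here

# Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, slice_data, rcond=None)

# The BH-corrected image is the residual after subtracting the surface.
corrected = slice_data - A @ coef
print(round(float(np.abs(corrected).max()), 6))
```

Because the synthetic artefact is exactly quadratic, the residual vanishes; on real slices the residual retains the phase contrast while the low-order BH trend is removed.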
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy combining an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food, and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and with DPLS classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information in the differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
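An iterative spline baseline correction of the kind described above can be sketched as follows: repeatedly fit a smoothing spline and clip the working signal down to the fit, so peaks stop pulling the baseline upward. This is an assumption-level analogue of the ICSF method, not the authors' exact algorithm, and the signal is synthetic:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline(x, y, n_iter=20, s=None):
    """Estimate a baseline by iterated smoothing-spline fit and clipping."""
    work = y.copy()
    fit = work
    for _ in range(n_iter):
        fit = UnivariateSpline(x, work, s=s)(x)
        work = np.minimum(work, fit)   # suppress points above the fit
    return fit

# Synthetic Raman-like signal: a slowly varying baseline plus one peak.
x = np.linspace(0, 10, 400)
baseline = 0.5 + 0.1 * x
peak = 5.0 * np.exp(-((x - 5) ** 2) / 0.01)
y = baseline + peak

est = spline_baseline(x, y, s=len(x) * 0.01)
err = float(np.abs(est - baseline).mean())
print(round(err, 3))
```

Subtracting the estimated baseline from `y` leaves the peak on a near-flat background, which is the preprocessing step before PCA or DPLS modelling.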
Jia, Shengyao; Li, Hongyang; Wang, Yanjie; Tong, Renyuan; Li, Qing
2017-01-01
Soil is an important environment for crop growth. Quick and accurate access to soil nutrient content information is a prerequisite for scientific fertilization. In this work, hyperspectral imaging (HSI) technology was applied for the classification of soil types and the measurement of soil total nitrogen (TN) content. A total of 183 soil samples collected from Shangyu City (People's Republic of China) were scanned by a near-infrared hyperspectral imaging system with a wavelength range of 874-1734 nm. The soil samples belonged to three major soil types typical of this area: paddy soil, red soil, and seashore saline soil. The successive projections algorithm (SPA) was utilized to select effective wavelengths from the full spectrum. Texture features (energy, contrast, homogeneity, and entropy) were extracted from the gray-scale images at the effective wavelengths. Support vector machine (SVM) and partial least squares regression (PLSR) methods were used to establish classification and prediction models, respectively. The results showed that by using the combined data sets of effective wavelengths and texture features for modelling, an optimal correct classification rate of 91.8% could be achieved. The soil samples were first classified, and then local models were established for soil TN according to soil type, which achieved better prediction results than the general models. The overall results indicated that hyperspectral imaging technology can be used for soil type classification and soil TN determination, and that data fusion combining spectral and image texture information shows advantages for the classification of soil types. PMID:28974005
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.
1980-01-01
Ten segments of size 20 x 10 km were aerially photographed and used as training areas for automatic classification. The study area was covered by four LANDSAT paths: 235, 236, 237, and 238. The percentages of overall correct classification for these paths ranged from 79.56 percent for path 238 to 95.59 percent for path 237.
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great amount of very high resolution (VHR) imagery acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for processing, analysing, and classifying them effectively, due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them were developed toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partitions them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, which captures the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way towards fully automatic and effective VHR image classification.
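The core fusion idea, keep the CNN's label where its prediction is trusted and hand uncertain pixels to a second classifier, can be sketched with a simple confidence threshold. This is a thresholded stand-in for the rough set partition, and all values below are hypothetical:

```python
import numpy as np

def fuse(cnn_probs, cnn_labels, fallback_labels, tau=0.9):
    """Keep CNN labels where max class probability >= tau; else fall back."""
    confident = cnn_probs.max(axis=1) >= tau
    fused = np.where(confident, cnn_labels, fallback_labels)
    return fused, confident

# Three pixels, two classes: one confident CNN call, two uncertain ones.
cnn_probs = np.array([[0.95, 0.05],
                      [0.55, 0.45],
                      [0.20, 0.80]])
cnn_labels = cnn_probs.argmax(axis=1)
fallback = np.array([1, 1, 1])       # hypothetical MLP-based labels

fused, confident = fuse(cnn_probs, cnn_labels, fallback, tau=0.9)
print(fused.tolist(), confident.tolist())
```

In the paper the certain/uncertain partition is derived from the rough set model and the reclassification uses a decision tree over both CNN and MLP outputs rather than a fixed threshold.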
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
Exercise-Associated Collapse in Endurance Events: A Classification System.
ERIC Educational Resources Information Center
Roberts, William O.
1989-01-01
Describes a classification system devised for exercise-associated collapse in endurance events based on casualties observed at six Twin Cities Marathons. Major diagnostic criteria are body temperature and mental status. Management protocol includes fluid and fuel replacement, temperature correction, and leg cramp treatment. (Author/SM)
Neyman Pearson detection of K-distributed random variables
NASA Astrophysics Data System (ADS)
Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.
2010-04-01
In this paper, a new detection method for sonar imagery is developed for K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
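The abstract does not reproduce the likelihood itself. As a hedged sketch under one common parameterization of the K-distribution (conventions vary across the literature, and this simple threshold is not the paper's full Neyman-Pearson ratio test):

```python
import numpy as np
from scipy.special import gammaln, kv

def k_loglik(x, nu, a):
    """Log-pdf of a K-distributed amplitude x > 0 with shape nu, scale a:
        f(x) = 2 / (a * Gamma(nu)) * (x / (2a))**nu * K_{nu-1}(x / a),
    where K is the modified Bessel function of the second kind."""
    x = np.asarray(x, dtype=float)
    return (np.log(2.0) - np.log(a) - gammaln(nu)
            + nu * (np.log(x) - np.log(2.0 * a))
            + np.log(kv(nu - 1.0, x / a)))

def detect(amplitudes, nu, a, threshold):
    """Flag samples that are unlikely under the K-distributed background
    (a one-sided log-likelihood threshold as an anomaly-style detector)."""
    return k_loglik(amplitudes, nu, a) < threshold
```

The pdf above integrates to one, which is a quick sanity check on the parameterization before using it in a detector.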
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is widely studied in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the multi-resolution texture properties of the emotional speech spectrogram should provide a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can discriminate between emotions more clearly than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real-life conditions, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotional recognition in speech.
Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry
2018-01-01
The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: random forest (RF), support vector machine (SVM), logistic regression (LR) and k-nearest neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras could be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the fish diet received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating the diet’s effects on fish skin. PMID:29596375
Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry
2018-03-29
The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: random forest (RF), support vector machine (SVM), logistic regression (LR) and k-nearest neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras could be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the fish diet received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating the diet's effects on fish skin.
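The two figures of merit used here, CCR and Cohen's kappa, can be computed directly from true and predicted labels. A minimal sketch on hypothetical labels (not the paper's data):

```python
import numpy as np

def ccr_and_kappa(y_true, y_pred):
    """Correct classification rate (observed agreement) and Cohen's kappa,
    which discounts the agreement expected from label frequencies alone."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ccr = float(np.mean(y_true == y_pred))            # observed agreement
    labels = np.union1d(y_true, y_pred)
    chance = sum(float(np.mean(y_true == c)) * float(np.mean(y_pred == c))
                 for c in labels)                     # chance agreement
    return ccr, (ccr - chance) / (1.0 - chance)

ccr, kappa = ccr_and_kappa([0, 0, 1, 1], [0, 0, 1, 0])
```

A kappa well below the CCR, as in the abstract (0.65 vs 82%), reflects how much of the raw accuracy chance alone would already deliver.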
Bukreyev, Alexander A.; Chandran, Kartik; Dolnik, Olga; Dye, John M.; Ebihara, Hideki; Leroy, Eric M.; Mühlberger, Elke; Netesov, Sergey V.; Patterson, Jean L.; Paweska, Janusz T.; Saphire, Erica Ollmann; Smither, Sophie J.; Takada, Ayato; Towner, Jonathan S.; Volchkov, Viktor E.; Warren, Travis K.; Kuhn, Jens H.
2013-01-01
The International Committee on Taxonomy of Viruses (ICTV) Filoviridae Study Group prepares proposals on the classification and nomenclature of filoviruses to reflect current knowledge or to correct disagreements with the International Code of Virus Classification and Nomenclature (ICVCN). In recent years, filovirus taxonomy has been corrected and updated, but parts of it remain controversial, and several topics remain to be debated. This article summarizes the decisions and discussion of the currently acting ICTV Filoviridae Study Group since its inauguration in January 2012. PMID:24122154
Hyperspectral analysis of columbia spotted frog habitat
Shive, J.P.; Pilliod, D.S.; Peterson, C.R.
2010-01-01
Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated accuracy of high-resolution hyperspectral image classifications to identify wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96% (44 wetlands) correctly classified with the 2002 data and 89% (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74% (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72% (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.
NASA Astrophysics Data System (ADS)
Treloar, W. J.; Taylor, G. E.; Flenley, J. R.
2004-12-01
This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could bring numerous advantages for the reconstruction of past environments, making larger data sets practical and offering objectivity and fine-resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen, and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12-taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes, probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities to be used for classification.
The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.
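Leave-one-out estimation of the correct classification rate, as used above and in several other records in this section, can be sketched with a simple nearest-centroid classifier as a stand-in (the papers' actual classifiers differ; data here are hypothetical):

```python
import numpy as np

def nearest_centroid(X_train, y_train, x):
    """Assign x to the class whose training-feature centroid is closest."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(((centroids - x) ** 2).sum(axis=1))]

def loo_ccr(X, y, classify):
    """Leave-one-out correct classification rate: each sample is
    predicted from a model trained on all the other samples."""
    n = len(y)
    hits = 0
    for i in range(n):
        keep = np.arange(n) != i       # hold out sample i
        hits += int(classify(X[keep], y[keep], X[i]) == y[i])
    return hits / n
```

Because every prediction comes from a model that never saw the held-out sample, the estimate is nearly unbiased, at the cost of n model fits.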
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
Rajasekaran, S; Bhushan, Manindra; Aiyer, Siddharth; Kanna, Rishi; Shetty, Ajoy Prasad
2018-01-09
To develop a classification based on the technical complexity encountered during pedicle screw insertion and to evaluate the performance of the AIRO ® CT navigation system based on this classification, in the clinical scenario of complex spinal deformity. 31 complex spinal deformity correction surgeries were prospectively analyzed for the performance of the AIRO ® mobile CT-based navigation system. Pedicles were classified according to the complexity of insertion into five types. Analysis was performed to estimate the accuracy of screw placement and the time for screw insertion. Breaches greater than 2 mm were considered for analysis. 452 pedicle screws were inserted (T1-T6: 116; T7-T12: 171; L1-S1: 165). The average Cobb angle was 68.3° (range 60°-104°). We had 242 grade 2 pedicles, 133 grade 3, and 77 grade 4, and 44 pedicles were unfit for pedicle screw insertion. We noted 27 pedicle screw breaches (medial: 10; lateral: 16; anterior: 1). Among the lateral breaches (n = 16), ten screws were planned for in-out-in pedicle screw insertion. Average screw insertion time was 1.76 ± 0.89 min. After accounting for planned breaches, the effective breach rate was 3.8%, resulting in 96.2% accuracy for pedicle screw placement. This classification helps compare the accuracy of screw insertion in a range of conditions by considering the complexity of screw insertion. Considering the clinical scenario of complex pedicle anatomy in spinal deformity, AIRO ® navigation showed an excellent accuracy rate of 96.2%.
Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P.; Kumar, G. Manoj
2012-01-01
Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g. quality assurance and process monitoring. Specifically, variability in sample, system and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a non-linear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), due to its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples as well as in related areas of forensic and biological sample analysis. PMID:22292496
Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj
2012-03-20
Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
Kanna, Rishi Mugesh; Schroeder, Gregory D.; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R.
2017-01-01
Study Design: Prospective survey-based study. Objectives: The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons’ clinical experience on fracture classification, stability assessment, and decision on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) in the decision process was also studied. Methods: Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience as <10 years (n = 12) and >10 years (n = 29). Results: There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decision on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except in A1 and C type injuries. Conclusion: Surgeons’ experience did not significantly affect overall fracture classification, evaluating stability and planning the treatment. Surgeons with less experience had a higher percentage of correct classification in A3 and A4 injuries.
Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups. PMID:28815158
Rajasekaran, Shanmuganathan; Kanna, Rishi Mugesh; Schroeder, Gregory D; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R
2017-06-01
Prospective survey-based study. The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons' clinical experience on fracture classification, stability assessment, and decision on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) in the decision process was also studied. Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience as <10 years (n = 12) and >10 years (n = 29). There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decision on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except in A1 and C type injuries. Surgeons' experience did not significantly affect overall fracture classification, evaluating stability and planning the treatment. Surgeons with less experience had a higher percentage of correct classification in A3 and A4 injuries.
Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No. FDA-2010-N-0026] Medical Devices; Immunology and Microbiology Devices; Classification of Ovarian Adnexal Mass Assessment Score Test System; Correction AGENCY: Food and Drug Administration, HHS. ACTION...
12 CFR 1229.5 - Capital distributions for adequately capitalized Banks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.5 Capital... classification of adequately capitalized. A Bank may not make a capital distribution if such distribution would... redeem its shares of stock if the transaction is made in connection with the issuance of additional Bank...
Estimation and Q-Matrix Validation for Diagnostic Classification Models
ERIC Educational Resources Information Center
Feng, Yuling
2013-01-01
Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…
77 FR 32010 - Applications (Classification, Advisory, and License) and Documentation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-31
... DEPARTMENT OF COMMERCE Bureau of Industry and Security 15 CFR Part 748 Applications (Classification, Advisory, and License) and Documentation CFR Correction 0 In Title 15 of the Code of Federal... fourth column of the table, the two entries for ``National Semiconductor Hong Kong Limited'' are removed...
Detection of stress factors in crop and weed species using hyperspectral remote sensing reflectance
NASA Astrophysics Data System (ADS)
Henry, William Brien
The primary objective of this work was to determine if stress factors such as moisture stress or herbicide injury stress limit the ability to distinguish between weeds and crops using remotely sensed data. Additional objectives included using hyperspectral reflectance data to measure moisture content within a species, and to measure crop injury in response to drift rates of non-selective herbicides. Moisture stress did not reduce the ability to discriminate between species. Regardless of analysis technique, the trend was that as moisture stress increased, so too did the ability to distinguish between species. Signature amplitudes (SA) of the top 5 bands, discrete wavelet transforms (DWT), and multiple indices were promising analysis techniques. Discriminant models created from one year's data set and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Distinguishing between leaves grown at high-moisture stress and no-stress was met with limited success, primarily because there was substantial variation among samples within the treatments. Leaf water potential (LWP) was measured, and these were classified into three categories using indices. Classification accuracies were as high as 68%. The 10 bands most highly correlated to LWP were selected; however, there were no obvious trends or patterns in these top 10 bands with respect to time, species or moisture level, suggesting that LWP is an elusive parameter to quantify spectrally. In order to address herbicide injury stress and its impact on species discrimination, discriminant models were created from combinations of multiple indices. 
The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. This suggests that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the pooled data set, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate or species. Several analysis techniques including multiple indices, signature amplitude with spectral bands as features, and wavelet analysis were employed to distinguish between herbicide-treated and nontreated plants. Classification accuracy using signature amplitude (SA) analysis of paraquat injury on soybean was better than 75% for both 1/2 and 1/8X rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2X rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Chestek, Cynthia A.; Gilja, Vikash; Blabe, Christine H.; Foster, Brett L.; Shenoy, Krishna V.; Parvizi, Josef; Henderson, Jaimie M.
2013-04-01
Objective. Brain-machine interface systems translate recorded neural signals into command signals for assistive technology. In individuals with upper limb amputation or cervical spinal cord injury, the restoration of a useful hand grasp could significantly improve daily function. We sought to determine if electrocorticographic (ECoG) signals contain sufficient information to select among multiple hand postures for a prosthetic hand, orthotic, or functional electrical stimulation system. Approach. We recorded ECoG signals from subdural macro- and microelectrodes implanted in motor areas of three participants who were undergoing inpatient monitoring for diagnosis and treatment of intractable epilepsy. Participants performed five distinct isometric hand postures, as well as four distinct finger movements. Several control experiments were attempted in order to remove sensory information from the classification results. Online experiments were performed with two participants. Main results. Classification rates were 68%, 84% and 81% for correct identification of 5 isometric hand postures offline. Using 3 potential controls for removing sensory signals, error rates were approximately doubled on average (2.1×). A similar increase in errors (2.6×) was noted when the participant was asked to make simultaneous wrist movements along with the hand postures. In online experiments, fist versus rest was successfully classified on 97% of trials; the classification output drove a prosthetic hand. Online classification performance for a larger number of hand postures remained above chance, but substantially below offline performance. In addition, the long integration windows used would preclude the use of decoded signals for control of a BCI system. Significance. These results suggest that ECoG is a plausible source of command signals for prosthetic grasp selection.
Overall, avenues remain for improvement through better electrode designs and placement, better participant training, and characterization of non-stationarities such that ECoG could be a viable signal source for grasp control for amputees or individuals with paralysis.
NASA Astrophysics Data System (ADS)
Zamora Ramos, Ernesto
Artificial intelligence is a big part of automation, and with today's technological advances, artificial intelligence has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision, which includes pattern recognition, classification and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images, based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for the capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rates and accuracy. Most hyperparameters, including architecture, learning rate, regularization, initialization of trainable parameters (weights), and so on, are chosen via a trial-and-error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, and to estimate which of a group of candidate strategies will, with high probability, make the network converge faster to the highest classification accuracy. Our method provides a quick, objective measure for comparing initialization strategies and selecting the best among them beforehand, without having to complete multiple training sessions for each candidate strategy and compare the final results.
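As a hedged illustration of the kind of pre-training comparison described above (this is not the dissertation's actual method; the layer sizes, tanh activation, and the two candidate strategies below are invented for the sketch), one cheap proxy for ranking weight initialization strategies is to push a random batch through a freshly initialized network and watch how activation variance evolves layer by layer:

```python
import numpy as np

def activation_variances(layer_sizes, init, batch=256, seed=0):
    """Forward a random batch through a tanh MLP initialized by `init`
    and record the variance of the activations at each layer."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(batch, layer_sizes[0]))
    variances = []
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        W = init(rng, fan_in, fan_out)
        x = np.tanh(x @ W)
        variances.append(x.var())
    return variances

# Two candidate strategies: naive unit-variance vs. Glorot-style scaling.
naive = lambda rng, m, n: rng.normal(0.0, 1.0, size=(m, n))
xavier = lambda rng, m, n: rng.normal(0.0, np.sqrt(2.0 / (m + n)), size=(m, n))

sizes = [64, 64, 64, 64, 64]
v_naive = activation_variances(sizes, naive)
v_xavier = activation_variances(sizes, xavier)
```

Under naive unit-variance initialization the tanh units saturate, pinning deep-layer activation variance near 1, while the Glorot-style scaling keeps activations in the responsive range; a quick statistic of this kind can flag a poor strategy before any full training session is run.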
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein based on NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant, where it could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalences of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories, so that laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and this information can be used to understand and point out the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB.
Delineation of sympatric morphotypes of lake trout in Lake Superior
Moore, Seth A.; Bronte, Charles R.
2001-01-01
Three morphotypes of lake trout Salvelinus namaycush are recognized in Lake Superior: lean, siscowet, and humper. Absolute morphotype assignment can be difficult. We used a size-free, whole-body morphometric analysis (truss protocol) to determine whether differences in body shape existed among lake trout morphotypes. Our results showed discrimination where traditional morphometric characters and meristic measurements had failed to detect differences. Principal components analysis revealed some separation of all three morphotypes based on head and caudal peduncle shape, but it also indicated considerable overlap in score values. Humper lake trout have smaller caudal peduncle widths relative to head length and depth than do lean or siscowet lake trout. Lean lake trout had larger head measures relative to caudal widths, whereas siscowets had higher caudal peduncle measures relative to head measures. Backward stepwise discriminant function analysis retained two head measures, three midbody measures, and four caudal peduncle measures; correct classification rates using these variables were 83% for leans, 80% for siscowets, and 83% for humpers, which suggests that the measures we used for initial classification were consistent. Although clear ecological reasons for these differences are not readily apparent, patterns in misclassification rates may be consistent with evolutionary hypotheses for lake trout within the Laurentian Great Lakes.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over the large areas that can be covered by airborne or satellite images. The developed classification approach allows i) a successive partitioning of the data into several levels or partitions, in which the main classes are identified first, ii) an automatic estimation of the number of classes at each level without any end-user help, iii) a non-systematic subdivision of all classes of a partition Pj to form a partition Pj+1, and iv) a partitioning result that is stable from one run of the method to another on the same data set. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised: it estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
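The top-down, level-by-level idea can be caricatured in a few lines: start with all pixels in one class and split a class only while a compactness criterion justifies it, so the number of classes emerges rather than being fixed in advance. The sketch below uses plain 2-means and an ad-hoc variance threshold on toy data; the actual method's class-number estimation and stability mechanisms are more elaborate.

```python
import numpy as np

def two_means(X, iters=20, seed=0):
    """Plain 2-means clustering; returns a boolean mask for cluster 1."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels == 1

def top_down_partition(X, max_var=1.0):
    """Recursively split clusters whose mean per-feature variance exceeds
    max_var; the number of final classes is not fixed in advance."""
    clusters, queue = [], [np.arange(len(X))]
    while queue:
        idx = queue.pop()
        sub = X[idx]
        if len(idx) > 2 and sub.var(axis=0).mean() > max_var:
            mask = two_means(sub)
            if mask.all() or (~mask).all():  # degenerate split: keep as leaf
                clusters.append(idx)
            else:
                queue += [idx[mask], idx[~mask]]
        else:
            clusters.append(idx)
    return clusters

# Three well-separated toy "spectral" classes in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.2, size=(50, 2)) for c in ((0, 0), (5, 0), (0, 5))])
parts = top_down_partition(X)
```

Each pass either accepts a cluster as a final class or bisects it, mirroring the successive partitions P1, P2, ... of the paper in a much-simplified form.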
Tone classification of syllable-segmented Thai speech based on multilayer perceptron
NASA Astrophysics Data System (ADS)
Satravaha, Nuttavudh; Klinkhachorn, Powsiri; Lass, Norman
2002-05-01
Thai is a monosyllabic tonal language that uses tone to convey lexical information about the meaning of a syllable. Thus to completely recognize a spoken Thai syllable, a speech recognition system not only has to recognize a base syllable but also must correctly identify a tone. Hence, tone classification of Thai speech is an essential part of a Thai speech recognition system. Thai has five distinctive tones (``mid,'' ``low,'' ``falling,'' ``high,'' and ``rising'') and each tone is represented by a single fundamental frequency (F0) pattern. However, several factors, including tonal coarticulation, stress, intonation, and speaker variability, affect the F0 pattern of a syllable in continuous Thai speech. In this study, an efficient method for tone classification of syllable-segmented Thai speech, which incorporates the effects of tonal coarticulation, stress, and intonation, as well as a method to perform automatic syllable segmentation, were developed. Acoustic parameters were used as the main discriminating parameters. The F0 contour of a segmented syllable was normalized by using a z-score transformation before being presented to a tone classifier. The proposed system was evaluated on 920 test utterances spoken by 8 speakers. A recognition rate of 91.36% was achieved by the proposed system.
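The z-score normalization applied to the F0 contours above can be sketched as follows (a minimal illustration; the sample contour values are invented):

```python
def z_score_normalize(f0_contour):
    """Normalize an F0 contour to zero mean and unit variance,
    reducing inter-speaker differences in pitch range and register."""
    n = len(f0_contour)
    mean = sum(f0_contour) / n
    var = sum((x - mean) ** 2 for x in f0_contour) / n
    std = var ** 0.5
    return [(x - mean) / std for x in f0_contour]

# Example: a rising-tone-like contour in Hz (values are illustrative only).
contour = [180.0, 185.0, 195.0, 210.0, 230.0]
normalized = z_score_normalize(contour)
```

After this step the contour shape, rather than the speaker's absolute pitch, is what the tone classifier sees.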
Robust through-the-wall radar image classification using a target-model alignment procedure.
Smith, Graeme E; Mobasseri, Bijan G
2012-02-01
A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range-resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows that the alignment process introduces a normalized mean squared error (NMSE) ≤ 9%. The HRRPs extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE
Motamedi, Mohammad; Müller, Rolf
2014-06-01
The biosonar beampatterns found across different bat species are highly diverse in terms of global and local shape properties such as overall beamwidth or the presence, location, and shape of multiple lobes. It may be hypothesized that some of this variability reflects evolutionary adaptation. To investigate this hypothesis, the present work has searched for patterns in the variability across a set of 283 numerical predictions of emission and reception beampatterns from 88 bat species belonging to four major families (Rhinolophidae, Hipposideridae, Phyllostomidae, Vespertilionidae). This was done using a lossy compression of the beampatterns that utilized real spherical harmonics as basis functions. The resulting vector representations showed differences between the families as well as between emission and reception. These differences existed in the means of the power spectra as well as in their distribution. The distributions were characterized in a low dimensional space found through principal component analysis. The distinctiveness of the beampatterns across the groups was corroborated by pairwise classification experiments that yielded correct classification rates between ~85 and ~98%. Beamwidth was a major factor but not the sole distinguishing feature in these classification experiments. These differences could be seen as an indication of adaptive trends at the beampattern level.
Gradel, Kim Oren
2015-01-01
Aim: Evaluation of the International Classification of Functioning, Disability and Health child and youth version (ICF-CY) activities and participation d code functions in clinical practice with children across diagnoses, disabilities, ages, and genders. Methods: A set of 57 codes was selected and worded to describe children's support needs in everyday life. Parents of children aged 1 to 15 years participated in interviews to discuss and rate their child's disability. Results: Of 367 invited parents, 332 (90.5%) participated. The mean age of their children with disability was 9.4 years. The mean code score was 50.67, the corrected code–total correlations averaged .76, the intercode correlations had a mean of 0.61, and Cronbach's α was .98. As a result of Rasch analysis, graphical data for disability measures paralleled clinical expectations across the total population of 332 children. Conclusion: The World Health Organization International Classification of Functioning, Disability and Health child and youth version d code data can provide a coherent measure of severity of disability in children across various diagnoses, ages, and genders. PMID:28503598
Development of Vision Based Multiview Gait Recognition System with MMUGait Database
Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee
2014-01-01
This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed with a Gaussian filter to eliminate the effect of outliers, then normalized with linear scaling, which is followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on the SOTON Small DB in most cases. PMID:25143972
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems exist when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classifications or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
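The error-matrix conventions described above (rows = interpretation, columns = verification, diagonal = correct) can be captured in a short routine; the three-category matrix below is invented for illustration:

```python
def accuracy_summary(matrix, labels):
    """Given a square classification error matrix with rows = interpretation
    and columns = verification, report overall accuracy plus per-category
    commission and omission error rates."""
    n = len(labels)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    summary = {"overall": correct / total, "commission": {}, "omission": {}}
    for i, lab in enumerate(labels):
        row_total = sum(matrix[i])                        # interpreted as lab
        col_total = sum(matrix[r][i] for r in range(n))   # verified as lab
        summary["commission"][lab] = (row_total - matrix[i][i]) / row_total
        summary["omission"][lab] = (col_total - matrix[i][i]) / col_total
    return summary

# Invented 3-category example (counts, not data from any study cited here).
m = [[50, 5, 5],
     [4, 40, 6],
     [1, 5, 44]]
s = accuracy_summary(m, ["forest", "water", "urban"])
```

Off-diagonal row entries feed the commission rates and off-diagonal column entries the omission rates, exactly the two error types the text distinguishes.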
Sexing adult black-legged kittiwakes by DNA, behavior, and morphology
Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.
2000-01-01
We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior and morphology and compared results from these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% and sex-specific behaviors, 96.5% of this sub-sample. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-10-31
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transitions between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear.
75 FR 33989 - Export Administration Regulations: Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... 0694-AE69 Export Administration Regulations: Technical Corrections AGENCY: Bureau of Industry and... section of Export Control Classification Number 2B001 and the other is in the Technical Note on Adjusted... language regarding certain performance criteria of turning machines covered by Export Control...
Influence of chronic back pain on kinematic reactions to unpredictable arm pulls.
Götze, Martin; Ernst, Michael; Koch, Markus; Blickhan, Reinhard
2015-03-01
There is evidence that muscle reflexes are delayed in patients with chronic low back pain in response to perturbations. It remains unknown whether these delays are accompanied by altered kinematics or are compensated for by adaptation of other muscle parameters. The aim of this study was to investigate whether chronic low back pain patients show an altered kinematic reaction and whether such data are reliable for the classification of chronic low back pain. In an experiment involving 30 females, sudden lateral perturbations were applied to the arm of a subject in an upright, standing position. Kinematics was used to distinguish between chronic low back pain patients and healthy controls. A model calculated by stepwise discriminant function analysis correctly predicted 100% of patients and 80% of healthy controls. The estimation of the classification error revealed a constant rate for the classification of the healthy controls and a slightly decreased rate for the patients. Observed reflex delays and identified kinematic differences inside and outside the region of pain during impaired movement indicated that chronic low back pain patients have an altered motor control that is not restricted to the lumbo-pelvic region. This paradigm of external perturbations can be used to detect chronic low back pain patients and also persons without chronic low back pain but with altered motor control. Further investigations are essential to reveal whether healthy persons with changes in motor function have an increased potential to develop chronic back pain. Copyright © 2015 Elsevier Ltd. All rights reserved.
In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging
Ibrahim, Mohd Firdaus; Ahmad Sa’ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon
2016-01-01
The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and its estimated mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The difference between the two measurements was not significant (P > 0.05). The average correct classification for shape was 98% for a training set composed of 180 mangoes. The data were validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used for evaluating the performance of mass estimation. The average success rate of the classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential in automatic fruit sorting according to shape and mass. PMID:27801799
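The disk method mentioned above can be illustrated as follows: slice the fruit silhouette along its length, treat each local width as the diameter of a circular disk, and sum the disk volumes; mass then follows from an assumed density. The widths, slice thickness, and density below are invented for the sketch; the actual system worked on calibrated pixel data.

```python
import math

def disk_volume(widths, slice_thickness):
    """Approximate object volume by summing cylindrical disks, taking each
    silhouette width as the diameter of a circular cross-section."""
    return sum(math.pi * (w / 2.0) ** 2 * slice_thickness for w in widths)

# Invented silhouette widths (cm) sampled every 1 cm along the fruit.
widths = [2.0, 4.0, 5.5, 6.0, 5.5, 4.0, 2.0]
volume = disk_volume(widths, slice_thickness=1.0)
mass = volume * 1.05  # assumed density in g/cm^3; illustrative only
```

The same estimate from pixel data only needs a scale factor converting pixel widths and slice spacing into physical units.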
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to overcome the decreased discriminative power of features for classification. However, previous uses of contextual information have been either too restrictive or confined to a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile and incrementally propagates the classification cues from individual points to the object level, formulating them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling
2016-04-01
This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. Firstly, in our experiments, we obtained and studied the electroencephalogram signals from fourteen attention-deficit/hyperactivity disorder children and sixteen typically developing children during the classic interference control task of Simon-spatial Stroop, and we completed electroencephalogram data preprocessing, including filtering, segmentation, removal of artifacts and so on. Secondly, we selected the subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels among the optimal electrodes whose occurrence rates were more than 90% for each kind of stimulus. We then extracted the mean amplitude features in the latency window (200-450 ms) of the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function for classification. In the experiment, for the same kind of interference control task, the attention-deficit/hyperactivity disorder children showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area for all kinds of stimuli. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals differed in the prefrontal cortex and inferior parietal cortex between attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provides a scientific basis for the clinical diagnosis of attention-deficit/hyperactivity disorder in individuals.
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
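As a rough sketch of the transform family involved above: ordinary Karhunen-Loève features are projections onto eigenvectors of the sample covariance, and an "infinitely clipped" variant replaces each eigenvector component by its sign so the projection needs only additions and subtractions. The data below are random, and the paper's class-based weighting is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # 500 samples, 20 "spectral bands"
X[:, 0] += 3 * X[:, 1]           # introduce some band-to-band correlation

# Karhunen-Loeve transform: eigenvectors of the sample covariance matrix,
# sorted by decreasing variance (eigenvalue).
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]

# Keep the first k features; the clipped variant replaces each eigenvector
# component by its sign, trading some optimality for multiply-free hardware.
k = 4
klt_features = X @ eigvecs[:, :k]
clipped_features = X @ np.sign(eigvecs[:, :k])
```

The first KLT feature carries the most variance by construction, which is why a small number of such features can stand in for a far larger number of raw bands.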
Classification of mineral deposits into types using mineralogy with a probabilistic neural network
Singer, Donald A.; Kouda, Ryoichi
1997-01-01
In order to determine whether it is desirable to quantify mineral-deposit models further, a test of the ability of a probabilistic neural network to classify deposits into types based on mineralogy was conducted. Presence or absence of ore and alteration mineralogy in well-typed deposits was used to train the network. To reduce the number of minerals considered, the analyzed data were restricted to minerals present in at least 20% of at least one deposit type. An advantage of this restriction is that single or rare occurrences of minerals did not dominate the results. Probabilistic neural networks can provide mathematically sound confidence measures based on Bayes' theorem and are relatively insensitive to outliers. Founded on Parzen density estimation, they require no assumptions about the distributions of the random variables used for classification, even handling multimodal distributions. They train quickly and work as well as, or better than, multiple-layer feedforward networks. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class and each variable. The training set was reduced to the presence or absence of 58 reported minerals in eight deposit types. The training set included: 49 Cyprus massive sulfide deposits; 200 kuroko massive sulfide deposits; 59 Comstock epithermal vein gold districts; 17 quartz-alunite epithermal gold deposits; 25 Creede epithermal gold deposits; 28 sedimentary-exhalative zinc-lead deposits; 28 Sado epithermal vein gold deposits; and 100 porphyry copper deposits. The most common training problem was the error of classifying about 27% of Cyprus-type deposits in the training set as kuroko.
In independent tests with deposits not used in the training set, 88% of 224 kuroko massive sulfide deposits, 92% of 25 porphyry copper deposits, 78% of 9 Comstock epithermal gold-silver districts, and 83% of six quartz-alunite epithermal gold deposits were classed correctly. Across all deposit types, 88% of deposits in the validation dataset were correctly classed. Misclassifications were most common when a deposit was characterized by only a few minerals, e.g., pyrite, chalcopyrite, and sphalerite. The success rate jumped to 98% of deposits correctly classed when just two rock types were added. Such a high success rate suggests not only that this preliminary test should be expanded to include other deposit types, but also that other deposit features should be added.
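The Parzen-kernel scoring at the core of a probabilistic neural network can be sketched in a few lines. This simplification assumes a single shared sigma rather than the per-class, per-variable sigmas used in the study, and the four-mineral presence/absence vectors are hypothetical.

```python
import math

def pnn_classify(x, train, sigma=1.0):
    """Parzen-window scoring behind a probabilistic neural network.

    train maps a deposit type to a list of binary mineralogy vectors
    (1 = mineral reported, 0 = absent). Returns the type with the
    highest averaged Gaussian-kernel activation.
    """
    scores = {}
    for label, patterns in train.items():
        total = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, p))
                             / (2.0 * sigma ** 2))
                    for p in patterns)
        scores[label] = total / len(patterns)  # class-conditional density estimate
    return max(scores, key=scores.get)

# Hypothetical presence/absence vectors for two deposit types
train = {
    "kuroko": [[1, 1, 0, 1], [1, 1, 0, 0]],
    "porphyry": [[0, 0, 1, 1], [0, 1, 1, 1]],
}
print(pnn_classify([1, 1, 0, 1], train))
```

Because each class score is a kernel density estimate, the scores can be turned into Bayes-style posterior confidences by normalizing over classes, which is the confidence measure the abstract refers to.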
NASA Astrophysics Data System (ADS)
Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye
2016-06-01
This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. Istanbul, a metropolitan city in Turkey with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes, adapted from the CORINE nomenclature, were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (08/02/2016) and Landsat-8 (22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a comparable basis for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After classification, accuracy results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
Genome-Wide Comparative Gene Family Classification
Frech, Christian; Chen, Nansheng
2010-01-01
Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
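The expansion/inflation loop at the heart of Markov clustering (the algorithm family that includes TRIBE-MCL) can be sketched in miniature. The similarity graph below is a hypothetical toy; a real comparative run, as the abstract proposes, would tune the inflation parameter against curated reference families of a related species.

```python
def mcl(adj, inflation=2.0, iters=20):
    """Minimal Markov clustering on a symmetric 0/1 similarity matrix."""
    n = len(adj)
    # add self-loops, then make every column a probability distribution
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]

    def normalize(mat):
        for j in range(n):
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s
        return mat

    m = normalize(m)
    for _ in range(iters):
        # expansion: one random-walk step (matrix squaring)
        m = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # inflation: raise entries to a power, renormalize columns
        m = normalize([[x ** inflation for x in row] for row in m])
    # read clusters off the rows that retain probability mass
    clusters = {frozenset(j for j in range(n) if m[i][j] > 0.01) for i in range(n)}
    return sorted((c for c in clusters if c), key=min)

# Two disconnected triangles of mutually similar "genes"
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
families = mcl(adj)
print(families)
```

Raising the inflation parameter splits families more aggressively, which is exactly the parameter sensitivity the comparative strategy is designed to resolve.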
Katoh, Takao; Kuwamoto, Kana; Kato, Daisuke; Kuroishi, Kentarou
2016-12-01
To assess the effect of 25 or 50 mg mirabegron on cardiovascular end-points and adverse drug reactions in real-world Japanese patients with overactive bladder and cardiovascular disease. Participants had overactive bladder, a history of/coexisting cardiovascular disease and a 12-lead electrocardiogram carried out ≤7 days before initiating 4 weeks of mirabegron treatment. Patients with "serious cardiovascular disease" (class III or IV on the New York Heart Association functional classification and further confirmed by expert analysis) were excluded. Patient demographics, physical characteristics and cardiovascular history were recorded. After 4 weeks, patients underwent another electrocardiogram. Incidence of cardiovascular adverse drug reactions and change from baseline in electrocardiogram parameters (RR, PR, QRS intervals, Fridericia's corrected QT and heart rate) were assessed. Of 316 patients registered, 236 met criteria and had baseline/post-dose electrocardiograms: 61.9% male; 60.2% aged ≥75 years; 93.6% with coexisting cardiovascular disease, notably, arrhythmia (67.8%) and angina pectoris (19.1%). Starting mirabegron daily doses were 25 mg (19.9%) or 50 mg (80.1%). The incidence of cardiovascular adverse drug reactions was 5.51%. After 4 weeks, the mean heart rate increased by 1.24 b.p.m. (statistically significant, but clinically acceptable as per previous trials). No significant changes were observed in PR, QRS or Fridericia's corrected QT. No significant correlations in the total population or age-/sex-segregated subgroups were observed between baseline Fridericia's corrected QT and change at 4 weeks. No correlation for heart rate versus change from baseline heart rate with treatment was observed. Mirabegron was well tolerated in real-world Japanese patients with overactive bladder and coexisting cardiovascular disease. No unexpected cardiovascular safety concerns were observed. © 2016 The Japanese Urological Association.
Evaluation of communication in wireless underground sensor networks
NASA Astrophysics Data System (ADS)
Yu, X. Q.; Zhang, Z. L.; Han, W. T.
2017-06-01
Wireless underground sensor networks (WUSN) are an emerging area of research that promises to provide communication capabilities to buried sensors. In this paper, experimental measurements were conducted with commodity sensor motes at frequencies of 2.4 GHz and 433 MHz, respectively. Experiments were run to examine the received signal strength of correctly received packets and the packet error rate of a communication link. The tests show the potential feasibility of WUSN using powerful RF transceivers at the 433 MHz frequency. Moreover, we also present a classification for wireless underground sensor network communication. Finally, we characterize the effects of burial depth, inter-node distance and volumetric soil water content on signal strength and packet error rate in WUSN communication.
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
This work defines the spectral and spatial characteristics of clouds and surface conditions in the polar regions and creates calibrated, geometrically correct data sets suitable for quantitative analysis. Ways are explored in which this information can be applied to cloud classification, either as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas that describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
Provenance establishment of coffee using solution ICP-MS and ICP-AES.
Valentin, Jenna L; Watling, R John
2013-11-01
Statistical interpretation of the concentrations of 59 elements, determined using solution-based inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma atomic emission spectroscopy (ICP-AES), was used to establish the provenance of coffee samples from 15 countries across five continents. Data confirmed that the harvest year, degree of ripeness and whether the coffees were green or roasted had little effect on the elemental composition of the coffees. The application of linear discriminant analysis and principal component analysis to the elemental concentrations permitted up to 96.9% correct classification of the coffee samples according to their continent of origin. When samples from each continent were considered separately, up to 100% correct classification of coffee samples into their countries and plantations of origin was achieved. This research demonstrates the potential of using elemental composition, in combination with statistical classification methods, for accurate provenance establishment of coffee. Copyright © 2013 Elsevier Ltd. All rights reserved.
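As a simplified stand-in for the discriminant-analysis step (not the authors' LDA model), a nearest-centroid rule on hypothetical element concentrations shows the shape of the computation: each origin is summarized by the mean of its training profiles, and a new sample is assigned to the closest mean.

```python
def nearest_centroid(sample, training):
    """training: dict mapping origin -> list of element-concentration vectors."""
    def centroid(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {origin: centroid(v) for origin, v in training.items()}
    return min(centroids, key=lambda origin: dist2(sample, centroids[origin]))

# Hypothetical Mn/Ba/Sr concentrations (mg/kg) for two origins
training = {
    "South America": [[30.0, 4.0, 2.1], [28.0, 4.4, 2.0]],
    "Africa": [[18.0, 9.0, 3.5], [20.0, 8.5, 3.6]],
}
print(nearest_centroid([29.0, 4.1, 2.2], training))
```

LDA refines this rule by also accounting for within-class covariance, which matters when elements are strongly correlated, as trace-element profiles typically are.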
Classification of cancerous cells based on the one-class problem approach
NASA Astrophysics Data System (ADS)
Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert
1996-03-01
One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust detection method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. It is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. The results show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
USDA-ARS's Scientific Manuscript database
Panax quinquefolius L (P. quinquefolius L) samples grown in the United States and China were analyzed with high performance liquid chromatography-mass spectrometry (HPLC—MS). Prior to classification, the two-way datasets were subjected to pretreatment including baseline correction and retention tim...
USDA-ARS's Scientific Manuscript database
In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationship is used to correct the misclassified ...
ERIC Educational Resources Information Center
Potter, Penny F.; Graham-Moore, Brian E.
Most organizations planning to assess adverse impact or perform a stock analysis for affirmative action planning must correctly classify their jobs into appropriate occupational categories. Two methods of job classification were assessed in a combination archival and field study. Classification results from expert judgment of functional job…
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
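The mixture proportion estimation subproblem has a transparent discrete analogue. In the sketch below the distributions are hypothetical toys; the blend proportion is recovered exactly because the residual distribution h puts no mass on an outcome where g does, which mirrors the irreducibility condition described above.

```python
def mixture_proportion(f, g):
    """Largest kappa such that f = kappa * g + (1 - kappa) * h for some
    valid distribution h: the infimum of f(x) / g(x) over the support of g.
    f and g are dicts mapping discrete outcomes to probabilities."""
    return min(f.get(x, 0.0) / p for x, p in g.items() if p > 0)

# Discrete toy: f is a 30/70 blend of g and a distribution h that is
# irreducible with respect to g (h has no mass on outcome "a")
g = {"a": 0.5, "b": 0.5}
h = {"b": 0.2, "c": 0.8}
kappa = 0.3
f = {x: kappa * g.get(x, 0.0) + (1 - kappa) * h.get(x, 0.0) for x in "abc"}
print(mixture_proportion(f, g))
```

If h did share g's support everywhere with positive ratio, part of h could be reattributed to g and the estimate would exceed the nominal kappa, which is why irreducibility is needed for identifiability.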
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For the two datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
77 FR 16661 - Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
...-0124] Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction AGENCY: Animal and... in the regulatory text of an interim rule that amended the bovine tuberculosis regulations by establishing two separate zones with different tuberculosis risk classifications for the State of New Mexico...
Tallman, Sean D; Winburn, Allysha P
2015-09-01
Ancestry assessment from the postcranial skeleton presents a significant challenge to forensic anthropologists. However, metric dimensions of the femur subtrochanteric region are believed to distinguish between individuals of Asian and non-Asian descent. This study tests the discriminatory power of subtrochanteric shape using modern samples of 128 Thai and 77 White American males. Results indicate that the samples' platymeric index distributions are significantly different (p≤0.001), with the Thai platymeric index range generally lower and the White American range generally higher. While the application of ancestry assessment methods developed from Native American subtrochanteric data results in low correct classification rates for the Thai sample (50.8-57.8%), adapting these methods to the current samples leads to better classification. The Thai data may be more useful in forensic analysis than previously published subtrochanteric data derived from Native American samples. Adapting methods to include appropriate geographic and contemporaneous populations increases the accuracy of femur subtrochanteric ancestry methods. © 2015 American Academy of Forensic Sciences.
Infinite hidden conditional random fields for human behavior analysis.
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja
2013-01-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs--chosen via cross-validation--for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.
Osorio, Maria Teresa; Haughey, Simon A; Elliott, Christopher T; Koidis, Anastasios
2015-12-15
European Regulation 1169/2011 requires producers of foods that contain refined vegetable oils to label the oil types. A novel rapid and staged methodology has been developed for the first time to identify common oil species in oil blends. The qualitative method combines Fourier transform infrared (FTIR) spectroscopy to profile the oils with fatty acid chromatographic analysis to confirm the composition of the oils when required. Calibration models and specific classification criteria were developed, and all data were fused into a simple decision-making system. Single-laboratory validation of the method demonstrated very good performance (96% correct classification, 100% specificity, 4% false positive rate). Only a small fraction of the samples needed to be confirmed, with the majority of oils identified rapidly using only the spectroscopic procedure. The results demonstrate the huge potential of the methodology for a wide range of oil authenticity work. Copyright © 2014 Elsevier Ltd. All rights reserved.
Kelly, J F Daniel; Downey, Gerard
2005-05-04
Fourier transform infrared spectroscopy and attenuated total reflection sampling have been used to detect adulteration of single strength apple juice samples. The sample set comprised 224 authentic apple juices and 480 adulterated samples. Adulterants used included partially inverted cane syrup (PICS), beet sucrose (BS), high fructose corn syrup (HFCS), and a synthetic solution of fructose, glucose, and sucrose (FGS). Adulteration was carried out on individual apple juice samples at levels of 10, 20, 30, and 40% w/w. Spectral data were compressed by principal component analysis and analyzed using k-nearest neighbors and partial least squares regression techniques. Prediction results for the best classification models achieved an overall (authentic plus adulterated) correct classification rate of 96.5, 93.9, 92.2, and 82.4% for PICS, BS, HFCS, and FGS adulterants, respectively. This method shows promise as a rapid screening technique for the detection of a broad range of potential adulterants in apple juice.
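A minimal sketch of the k-nearest-neighbors step, operating on hypothetical 2-D principal-component scores rather than the study's actual spectral data: each unknown is assigned the majority label among its k closest training samples.

```python
from collections import Counter

def knn_classify(sample, labeled, k=3):
    """labeled: list of (feature_vector, label) pairs, e.g. PCA scores.
    Squared Euclidean distance suffices for ranking neighbors."""
    ranked = sorted(labeled,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(sample, p[0])))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D PCA scores for authentic vs adulterated juices
scores = [([0.1, 0.2], "authentic"), ([0.0, 0.3], "authentic"),
          ([0.2, 0.1], "authentic"), ([1.1, 0.9], "adulterated"),
          ([1.0, 1.2], "adulterated"), ([0.9, 1.0], "adulterated")]
print(knn_classify([0.15, 0.25], scores))
```

Running kNN on compressed PCA scores rather than raw spectra, as the abstract describes, both denoises the input and keeps the distance computation cheap.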
Autonomic specificity of basic emotions: evidence from pattern classification and cluster analysis.
Stephens, Chad L; Christie, Israel C; Friedman, Bruce H
2010-07-01
Autonomic nervous system (ANS) specificity of emotion remains controversial in contemporary emotion research, and has received mixed support over decades of investigation. This study was designed to replicate and extend psychophysiological research that has used multivariate pattern classification analysis (PCA) in support of ANS specificity. Forty-nine undergraduates (27 women) listened to emotion-inducing music and viewed affective films while a montage of ANS variables, including heart rate variability indices, peripheral vascular activity, systolic time intervals, and electrodermal activity, was recorded. Evidence for ANS discrimination of emotion was found via PCA, with 44.6% of overall observations correctly classified into the predicted emotion conditions using ANS variables (z=16.05, p<.001). Cluster analysis of these data indicated a lack of distinct clusters, which suggests that ANS responses to the stimuli were nomothetic and stimulus-specific rather than idiosyncratic and individual-specific. Collectively these results further confirm and extend support for the notion that basic emotions have distinct ANS signatures. Copyright © 2010 Elsevier B.V. All rights reserved.
Less-Complex Method of Classifying MPSK
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2006-01-01
An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. 
The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis (M or M') is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
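The phase-sampling approximation can be sketched directly: replace the integral over carrier phase with an average over l equally spaced phase points. The record lengths, noise variance, and value of l below are illustrative assumptions, not values from the cited work.

```python
import cmath
import math

def mpsk_likelihood(samples, M, l=32, sigma2=0.05):
    """Approximate the phase-averaged M-PSK likelihood by sampling the
    carrier phase at l equally spaced points instead of integrating."""
    total = 0.0
    for t in range(l):
        theta = 2 * math.pi * t / l
        log_prod = 0.0  # log-likelihood of the record at this carrier phase
        for r in samples:
            # average Gaussian likelihood over the M constellation points
            s = sum(math.exp(-abs(r - cmath.exp(1j * (theta + 2 * math.pi * m / M))) ** 2
                             / (2 * sigma2))
                    for m in range(M)) / M
            log_prod += math.log(s + 1e-300)  # guard against log(0)
        total += math.exp(log_prod)
    return total / l

def classify_mpsk(samples, candidates=(2, 4)):
    """Pick the modulation order with the larger approximate likelihood."""
    return max(candidates, key=lambda M: mpsk_likelihood(samples, M))

# Hypothetical noiseless records; a realistic test would add channel noise
qpsk = [cmath.exp(1j * math.pi / 2 * k) for k in (0, 1, 2, 3, 1, 2)]
bpsk = [1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j]
print(classify_mpsk(qpsk), classify_mpsk(bpsk))
```

Increasing l tightens the approximation to the integral at a linear cost in computation, which is exactly the complexity/accuracy trade-off described above.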
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing, without user intervention, makes photos more pleasant for the observer and is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red-eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red-eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
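The retouching stage (desaturation, darkening, blending) can be illustrated for a single RGB pixel. The channel weights and blending strength below are hypothetical stand-ins, not the published algorithm's parameters.

```python
def retouch_red_eye(pixel, strength=0.8):
    """Retouch one RGB pixel inside a detected red-eye region:
    replace the red channel with a neutral estimate, darken it,
    then alpha-blend the result with the original pixel."""
    r, g, b = pixel
    mono = 0.5 * g + 0.5 * b   # neutral estimate that ignores the red cast
    dark = 0.8 * mono          # darkening step for a natural-looking pupil
    blend = lambda orig, new: int(round((1 - strength) * orig + strength * new))
    # green and blue are only slightly darkened; red is strongly suppressed
    return (blend(r, dark), blend(g, 0.9 * g), blend(b, 0.9 * b))

print(retouch_red_eye((200, 60, 50)))
```

Blending rather than overwriting keeps specular highlights and iris texture from the original pixel, which is what makes the correction look natural.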
Mengel, M; Sis, B; Halloran, P F
2007-10-01
The Banff process defined the diagnostic histologic lesions for renal allograft rejection and created a standardized classification system where none had existed. By correcting this deficit the process had universal impact on clinical practice and clinical and basic research. All trials of new drugs since the early 1990s benefited, because the Banff classification of lesions permitted the end point of biopsy-proven rejection. The Banff process has strengths, weaknesses, opportunities and threats (SWOT). The strength is its self-organizing group structure to create consensus. Consensus does not mean correctness: defining consensus is essential if a widely held view is to be proved wrong. The weaknesses of the Banff process are the absence of an independent external standard to test the classification; and its almost exclusive reliance on histopathology, which has inherent limitations in intra- and interobserver reproducibility, particularly at the interface between borderline and rejection, is exactly where clinicians demand precision. The opportunity lies in the new technology such as transcriptomics, which can form an external standard and can be incorporated into a new classification combining the elegance of histopathology and the objectivity of transcriptomics. The threat is the degree to which the renal transplant community will participate in and support this process.
Bruns, Nora; Dransfeld, Frauke; Hüning, Britta; Hobrecht, Julia; Storbeck, Tobias; Weiss, Christel; Felderhoff-Müser, Ursula; Müller, Hanna
2017-02-01
Neurodevelopmental outcome after prematurity is crucial. The aim was to compare two amplitude-integrated EEG (aEEG) classifications (Hellström-Westas (HW), Burdjalov) for outcome prediction. We recruited 65 infants ≤32 weeks gestational age with aEEG recordings within the first 72 h of life and Bayley testing at 24 months corrected age or death. Statistical analyses were performed for each 24 h section to determine whether very immature/depressed or mature/developed patterns predict survival/neurological outcome and to find predictors for mental development index (MDI) and psychomotor development index (PDI) at 24 months corrected age. On day 2, deceased infants showed no cycling in 80% (HW, p = 0.0140) and 100% (Burdjalov, p = 0.0041). The Burdjalov total score significantly differed between groups on day 2 (p = 0.0284) and the adapted Burdjalov total score on day 2 (p = 0.0183) and day 3 (p = 0.0472). Cycling on day 3 (HW; p = 0.0059) and background on day 3 (HW; p = 0.0212) are independent predictors for MDI (p = 0.0016) whereas no independent predictor for PDI was found (multiple regression analyses). Cycling in both classifications is a valuable tool to assess chance of survival. The classification by HW is also associated with long-term mental outcome. What is Known: •Neurodevelopmental outcome after preterm birth remains one of the major concerns in neonatology. •aEEG is used to measure brain activity and brain maturation in preterm infants. What is New: •The two common aEEG classifications and scoring systems described by Hellström-Westas and Burdjalov are valuable tools to predict neurodevelopmental outcome when performed within the first 72 h of life. •Both aEEG classifications are useful to predict chance of survival. The classification by Hellström-Westas can also predict long-term outcome at corrected age of 2 years.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm was tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Kun-Ching
2015-01-01
The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. From the perspective of human visual perception, the texture properties of a multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can discriminate between emotions more clearly than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, provides significant classification improvements for real-life emotion recognition in speech. PMID:25594590
Automatic photointerpretation for land use management in Minnesota
NASA Technical Reports Server (NTRS)
Swanlund, G. D. (Principal Investigator); Pile, D. R.
1973-01-01
The author has identified the following significant results. The Minnesota Iron Range area was selected as one of the land use areas to be evaluated. Six classes were selected: (1) hardwood; (2) conifer; (3) water (including in mines); (4) mines, tailings and wet areas; (5) open area; and (6) urban. Initial classification results show correct classification rates of 70.1 to 95.4% for the six classes. These results are very good and can be improved further, since some of the apparent misclassifications stem from errors in the ground truth.
Morphometric classification of Spanish thoroughbred stallion sperm heads.
Hidalgo, Manuel; Rodríguez, Inmaculada; Dorado, Jesús; Soler, Carles
2008-01-30
Semen samples were collected from 12 stallions and assessed for sperm morphometry with the Sperm Class Analyzer (SCA) computer-assisted system. A discriminant analysis was performed on the morphometric data to obtain a classification matrix for sperm head shape, from which six types of sperm head shape were defined. Classification of sperm heads by this method achieved a globally correct assignment rate of 90.1%. Moreover, significant differences (p<0.05) were found between animals for all the sperm head morphometric parameters assessed.
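The classification-matrix approach used above can be illustrated with a standard linear discriminant analysis. The feature names and synthetic morphometric values below are hypothetical stand-ins, purely to show how a globally correct assignment rate and a classification matrix are computed; they do not reproduce the SCA system's actual measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# Hypothetical head-shape classes described by (length, width, area, perimeter).
class_means = np.array([[9.0, 4.5, 32.0, 23.0],
                        [8.0, 4.0, 26.0, 21.0],
                        [9.5, 5.2, 39.0, 25.0]])
X = np.vstack([m + rng.normal(0.0, 0.2, size=(40, 4)) for m in class_means])
y = np.repeat([0, 1, 2], 40)

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
correct_rate = (pred == y).mean()                 # globally correct assignment rate
confusion = np.array([[np.sum((y == i) & (pred == j)) for j in range(3)]
                      for i in range(3)])         # the classification matrix
```

The diagonal of `confusion` counts correctly assigned samples per shape type; off-diagonal cells show which shape classes are confused with each other.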
The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.
Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin
2018-02-01
Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timings on a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies introduced by the setup should be taken into account and corrected. Because a standard computer monitor uses rastering to update each line of the image sequentially, it introduces a monitor raster latency which depends on the position on the monitor, the monitor itself, and the refresh rate. We technically measured the raster latencies of different monitors and present the effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally, we present a method for correcting the monitor raster latency and analyzed the performance difference of a code-modulated VEP BCI speller when correcting the latency. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61%, and also in a more reliable classification of targets, by significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting it resulted in a significant error reduction of 42.23%. It is recommended to correct the raster latency for increased BCI performance and methodical correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
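Because the raster sweeps the image top to bottom once per frame, the expected latency at a given screen line can be estimated from the refresh rate alone. The sketch below shows that relationship under the simplifying assumptions of a constant sweep rate and no vertical blanking interval (a precise correction, as in the study, would rely on measured latencies rather than this idealization).

```python
def raster_latency_ms(line, total_lines, refresh_hz):
    """Approximate time (ms) until the raster scan reaches `line`, from frame start."""
    frame_ms = 1000.0 / refresh_hz          # duration of one full refresh cycle
    return (line / total_lines) * frame_ms

# A stimulus centered vertically on a 1080-line, 60 Hz monitor appears roughly
# half a frame (~8.3 ms) later than one drawn at the very top of the screen.
center_delay = raster_latency_ms(540, 1080, 60)
```

Subtracting this per-position offset from measured VEP/ERP latencies is the essence of the correction whose effect the study evaluates.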
Crammer, Koby; Singer, Yoram
2005-01-01
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer in 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn the newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In the paper, an heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking, which are motivated from biological study of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
ERIC Educational Resources Information Center
Scott, Marcia Strong; Delgado, Christine F.; Tu, Shihfen; Fletcher, Kathryn L.
2005-01-01
In this study, predictive classification accuracy was used to select those tasks from a kindergarten screening battery that best identified children who, three years later, were classified as educable mentally handicapped or as having a specific learning disability. A subset of measures enabled correct classification of 91% of the children in…
Classifications for Cesarean Section: A Systematic Review
Torloni, Maria Regina; Betran, Ana Pilar; Souza, Joao Paulo; Widmer, Mariana; Allen, Tomas; Gulmezoglu, Metin; Merialdi, Mario
2011-01-01
Background Rising cesarean section (CS) rates are a major public health concern and cause worldwide debates. To propose and implement effective measures to reduce or increase CS rates where necessary requires an appropriate classification. Despite several existing CS classifications, there has not yet been a systematic review of these. This study aimed to 1) identify the main CS classifications used worldwide, 2) analyze advantages and deficiencies of each system. Methods and Findings Three electronic databases were searched for classifications published 1968–2008. Two reviewers independently assessed classifications using a form created based on items rated as important by international experts. Seven domains (ease, clarity, mutually exclusive categories, totally inclusive classification, prospective identification of categories, reproducibility, implementability) were assessed and graded. Classifications were tested in 12 hypothetical clinical case-scenarios. From a total of 2948 citations, 60 were selected for full-text evaluation and 27 classifications identified. Indications classifications present important limitations and their overall score ranged from 2–9 (maximum grade = 14). Degree of urgency classifications also had several drawbacks (overall scores 6–9). Woman-based classifications performed best (scores 5–14). Other types of classifications require data not routinely collected and may not be relevant in all settings (scores 3–8). Conclusions This review and critical appraisal of CS classifications is a methodologically sound contribution to establish the basis for the appropriate monitoring and rational use of CS. Results suggest that women-based classifications in general, and Robson's classification, in particular, would be in the best position to fulfill current international and local needs and that efforts to develop an internationally applicable CS classification would be most appropriately placed in building upon this classification. 
The use of a single CS classification will facilitate auditing, analyzing and comparing CS rates across different settings and help to create and implement effective strategies specifically targeted to optimize CS rates where necessary. PMID:21283801
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No... Serological Reagents; Correction AGENCY: Food and Drug Administration, HHS. ACTION: Final rule; correction. SUMMARY: In the Federal Register of March 9, 2012 (76 FR 14272), the Food and Drug Administration (FDA...
Multispectral Resource Sampler (MPS): Proof of Concept. Literature survey of atmospheric corrections
NASA Technical Reports Server (NTRS)
Schowengerdt, R. A.; Slater, P. N.
1981-01-01
Work done in combining spectral bands to reduce atmospheric effects on spectral signatures is described. The development of atmospheric models and their use with ground and aerial measurements in correcting spectral signatures is reviewed. An overview of studies of atmospheric effects on the accuracy of scene classification is provided.
2008-09-01
A key element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category.
Transgender Inmates in Prisons.
Routh, Douglas; Abess, Gassan; Makin, David; Stohr, Mary K; Hemmens, Craig; Yoo, Jihye
2017-05-01
Transgender inmates provide a conundrum for correctional staff, particularly when it comes to classification, victimization, and medical and health issues. Using LexisNexis and WestLaw and state Department of Corrections (DOC) information, we collected state statutes and DOC policies concerning transgender inmates. We utilized academic legal research with content analysis to determine whether a statute or policy addressed issues concerning classification procedures, access to counseling services, the initiation and continuation of hormone therapy, and sex reassignment surgery. We found that while more states are providing either statutory or policy guidelines for transgender inmates, a number of states are lagging behind and there is a shortage of guidance dealing with the medical issues related to being transgender.
Code of Federal Regulations, 2011 CFR
2011-07-01
Section 222.68: What if different classifications of real property are taxed at different rates? If the real property of an LEA and its generally comparable LEAs consists of two or more classifications of real property taxed at...
Bolivian satellite technology program on ERTS natural resources
NASA Technical Reports Server (NTRS)
Brockmann, H. C. (Principal Investigator); Bartoluccic C., L.; Hoffer, R. M.; Levandowski, D. W.; Ugarte, I.; Valenzuela, R. R.; Urena E., M.; Oros, R.
1977-01-01
The author has identified the following significant results. Application of digital classification for mapping land use permitted the separation of units at more specific levels in less time. Correct classification of the data in the computer has a positive effect on the accuracy of the final products. Comparison of land use units with soil types, as represented by the colors of the coded map, showed a clear class relation. Soil types in relation to land cover and land use demonstrated that vegetation was a positive factor in soils classification. Grouping of image resolution elements (pixels) permits studies of land use at different levels, thereby providing parameters for the classification of soils.
Spectral band selection for classification of soil organic matter content
NASA Technical Reports Server (NTRS)
Henderson, Tracey L.; Szilagyi, Andrea; Baumgardner, Marion F.; Chen, Chih-Chien Thomas; Landgrebe, David A.
1989-01-01
This paper describes the spectral-band-selection (SBS) algorithm of Chen and Landgrebe (1987, 1988, and 1989) and uses the algorithm to classify the organic matter content in the earth's surface soil. The effectiveness of the algorithm was evaluated by comparing the results of classification of the soil organic matter using SBS bands with those obtained using Landsat MSS bands and TM bands, showing that the algorithm was successful in finding important spectral bands for classification of organic matter content. Using the calculated bands, the probabilities of correct classification for climate-stratified data were found to range from 0.910 to 0.980.
Kaewkamnerd, Saowaluck; Uthaipibull, Chairat; Intarapanich, Apichart; Pannarut, Montri; Chaotheing, Sastra; Tongsima, Sissades
2012-01-01
Current malaria diagnosis relies primarily on microscopic examination of Giemsa-stained thick and thin blood films. This method requires rigorously trained technicians to efficiently detect and classify the malaria parasite species, such as Plasmodium falciparum (Pf) and Plasmodium vivax (Pv), for appropriate drug administration. However, accurate classification of parasite species is difficult to achieve because of inherent technical limitations and human inconsistency. To improve the performance of malaria parasite classification, many researchers have proposed automated malaria detection devices using digital image analysis. These image processing tools, however, focus on detection of parasites on thin blood films, which may miss parasites because of their scarcity on the thin blood film. The problem is aggravated under low parasitemia conditions. Automated detection and classification of parasites on thick blood films, which contain greater numbers of parasites per detection area, would address this limitation. The prototype of an automatic malaria parasite identification system is equipped with mountable motorized units for controlling the movements of the objective lens and microscope stage. This unit was tested for its precision in moving the objective lens (vertical movement, z-axis) and microscope stage (x- and y-horizontal movements). The average precision of the x-, y- and z-axis movements was 71.481 ± 7.266 μm, 40.009 ± 0.000 μm, and 7.540 ± 0.889 nm, respectively. Classification of parasites on 60 Giemsa-stained thick blood films (40 blood films containing infected red blood cells and 20 control blood films of normal red blood cells) was tested using the image analysis module. By comparing our results with those verified by trained malaria microscopists, the prototype detected parasite-positive and parasite-negative blood films at rates of 95% and 68.5% accuracy, respectively. 
For classification performance, thick blood films with Pv parasites were correctly classified at a success rate of 75%, while the accuracy of Pf classification was 90%. This work presents an automatic device for both detection and classification of malaria parasite species on thick blood films. The system is based on digital image analysis and features motorized stage units designed to be easily mounted on most conventional light microscopes used in endemic areas. The constructed motorized module could control the movements of the objective lens and microscope stage at high precision for effective acquisition of quality images for analysis. The analysis program could accurately classify parasite species, into Pf or Pv, based on the distribution of chromatin size.
NASA Astrophysics Data System (ADS)
Prochazka, D.; Mazura, M.; Samek, O.; Rebrošová, K.; Pořízka, P.; Klus, J.; Prochazková, P.; Novotný, J.; Novotný, K.; Kaiser, J.
2018-01-01
In this work, we investigate the impact of data provided by complementary laser-based spectroscopic methods on multivariate classification accuracy. Discrimination and classification of five Staphylococcus bacterial strains and one strain of Escherichia coli is presented. The technique that we used for measurements is a combination of Raman spectroscopy and Laser-Induced Breakdown Spectroscopy (LIBS). The obtained spectroscopic data were then processed using Multivariate Data Analysis algorithms. Principal Component Analysis (PCA) was selected as the most suitable technique for visualization of bacterial strain data. To classify the bacterial strains, we used Neural Networks, namely a supervised version of Kohonen's self-organizing maps (SOM). Results were processed in three different ways: from the LIBS measurements alone, from the Raman measurements alone, and from the merged data of both methods; the three sets of results were then compared. By applying PCA to the Raman spectroscopy data, we observed that two bacterial strains were fully distinguished from the rest of the data set. In the case of the LIBS data, three bacterial strains were fully discriminated. Using a combination of data from both methods, we achieved the complete discrimination of all bacterial strains. All the data were classified with a high success rate using the SOM algorithm. The most accurate classification was obtained using a combination of data from both techniques. The classification accuracy varied, depending on the specific samples and techniques: for LIBS it ranged from 45% to 100%, for Raman spectroscopy from 50% to 100%, and in the case of the merged data all samples were classified correctly. Based on the results of the experiments presented in this work, we can assume that the combination of Raman spectroscopy and LIBS significantly enhances the discrimination and classification accuracy of bacterial species and strains. The reason is the complementarity of the chemical information obtained by these two methods.
Boatin, A A; Cullinane, F; Torloni, M R; Betrán, A P
2018-01-01
In most regions worldwide, caesarean section (CS) rates are increasing. In these settings, new strategies are needed to reduce CS rates. To identify, critically appraise and synthesise studies using the Robson classification as a system to categorise and analyse data in clinical audit cycles to reduce CS rates. Medline, Embase, CINAHL and LILACS were searched from 2001 to 2016. Studies reporting use of the Robson classification to categorise and analyse data in clinical audit cycles to reduce CS rates. Data on study design, interventions used, CS rates, and perinatal outcomes were extracted. Of 385 citations, 30 were assessed for full text review and six studies, conducted in Brazil, Chile, Italy and Sweden, were included. All studies measured initial CS rates, provided feedback and monitored performance using the Robson classification. In two studies, the audit cycle consisted exclusively of feedback using the Robson classification; the other four used audit and feedback as part of a multifaceted intervention. Baseline CS rates ranged from 20 to 36.8%; after the intervention, CS rates ranged from 3.1 to 21.2%. No studies were randomised or controlled and all had a high risk of bias. We identified six studies using the Robson classification within clinical audit cycles to reduce CS rates. All six report reductions in CS rates; however, results should be interpreted with caution because of limited methodological quality. Future trials are needed to evaluate the role of the Robson classification within audit cycles aimed at reducing CS rates. Use of the Robson classification in clinical audit cycles to reduce caesarean rates. © 2017 The Authors. BJOG An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
NASA Astrophysics Data System (ADS)
Shi, Liehang; Ling, Tonghui; Zhang, Jianguo
2016-03-01
Radiologists currently use a variety of terminologies and standards in most hospitals in China, and there may even be multiple terminologies in use for different sections of one department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) to extract semantic information about clinical findings and conclusions from free-text radiology reports, so that the reports can be classified correctly based on medical term indexing standards such as RadLex or SNOMED CT. Our system (MedSCS) is based on both rule-based methods and statistics-based methods, which improves the performance and the scalability of MedSCS. In order to evaluate the overall performance of the system and measure the accuracy of its outcomes, we developed computational methods to calculate precision rate, recall rate, F-score and exact confidence intervals.
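The evaluation quantities named above are standard and straightforward to compute. The sketch below shows precision, recall, F-score, and an exact (Clopper-Pearson) binomial confidence interval; the counts are hypothetical, and reading "exact confidence interval" as the Clopper-Pearson construction via the beta distribution is an assumption of this sketch.

```python
from scipy.stats import beta

def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a proportion of k successes in n trials."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# e.g. 80 correctly indexed reports, 10 false positives, 20 misses (hypothetical counts)
p, r, f = precision_recall_f1(80, 10, 20)
ci = clopper_pearson(80, 100)
```

The exact interval is preferable to the normal approximation when the number of evaluated reports per category is small.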
Nketiah, Gabriel; Selnaes, Kirsten M; Sandsmark, Elise; Teruel, Jose R; Krüger-Stokke, Brage; Bertilsson, Helena; Bathen, Tone F; Elschot, Mattijs
2018-05-01
To evaluate the effect of correction for B0 inhomogeneity-induced geometric distortion in echo-planar diffusion-weighted imaging on quantitative apparent diffusion coefficient (ADC) analysis in multiparametric prostate MRI. Geometric distortion correction was performed in echo-planar diffusion-weighted images (b = 0, 50, 400, 800 s/mm^2) of 28 patients, using two b0 scans with opposing phase-encoding polarities. Histology-matched tumor and healthy tissue volumes of interest delineated on T2-weighted images were mapped to the nondistortion-corrected and distortion-corrected data sets by resampling with and without spatial coregistration. The ADC values were calculated on the volume and voxel level. The effect of distortion correction on ADC quantification and tissue classification was evaluated using linear mixed models and logistic regression, respectively. Without coregistration, the absolute differences in tumor ADC (range: 0.0002-0.189 ×10^-3 mm^2/s (volume level); 0.014-0.493 ×10^-3 mm^2/s (voxel level)) between the nondistortion-corrected and distortion-corrected data sets were significantly associated (P < 0.05) with distortion distance (mean: 1.4 ± 1.3 mm; range: 0.3-5.3 mm). No significant associations were found upon coregistration; however, in patients with high rectal gas residue, distortion correction resulted in improved spatial representation and significantly better classification of healthy versus tumor voxels (P < 0.05). Geometric distortion correction in DWI could improve quantitative ADC analysis in multiparametric prostate MRI. Magn Reson Med 79:2524-2532, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
ERIC Educational Resources Information Center
Spearing, Debra; Woehlke, Paula
To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
ERIC Educational Resources Information Center
Duffrin, Christopher; Eakin, Angela; Bertrand, Brenda; Barber-Heidel, Kimberly; Carraway-Stage, Virginia
2011-01-01
The American College Health Association estimated that 31% of college students are overweight or obese. It is important that students have a correct perception of body weight status as extra weight has potential adverse health effects. This study assessed accuracy of perceived weight status versus medical classification among 102 college students.…
Wheat cultivation: Identifying and estimating area by means of LANDSAT data
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Cottrell, D. A.; Tardin, A. T.; Lee, D. C. L.; Shimabukuro, Y. E.; Moreira, M. A.; Delima, A. M.; Maia, F. C. S.
1981-01-01
Automatic classification of LANDSAT data supported by aerial photography for identification and estimation of wheat growing areas was evaluated. Data covering three regions in the State of Rio Grande do Sul, Brazil were analyzed. The average correct classification of IMAGE-100 data was 51.02% and 63.30%, respectively, for the periods of July and of September/October, 1979.
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe
2018-05-29
Different variables determine the performance of cyclists, which raises the question of how these parameters may help classify them by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35) and sprinter (N=20), and to obtain a multivariate model for further classification of cyclists by specialty, based on selected variables. Seventeen variables were measured at submaximal and maximal load on the cycle ergometer Cosmed E 400HK (Cosmed, Rome, Italy) (initial 100W with 25W increments, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contribute to the discriminant power of the model, power achieved at the anaerobic threshold and the CO2 produced had the biggest impact. The obtained discriminant model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); i.e., 92.10% of examinees were correctly classified, which points to the strength of the discriminant model. Respiratory indicators contribute most to the discriminant power of the model, which may significantly benefit training practice and laboratory testing in the future.
Automated target classification in high resolution dual frequency sonar imagery
NASA Astrophysics Data System (ADS)
Aridgides, Tom; Fernández, Manuel
2007-04-01
An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.
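The LLRT-based fusion rule at the core of the processing string can be illustrated in its simplest form: model each string's classification confidence under the target and clutter hypotheses, and sum the per-string log-likelihood ratios. The Gaussian class-conditional densities and the numbers below are illustrative assumptions, not the paper's trained models (which additionally apply Volterra or Box-Cox feature expansions before the LLRT).

```python
import math

def gauss_loglik(x, mu, var):
    """Log-density of a scalar Gaussian with mean mu and variance var."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def llrt_fusion(confidences, models):
    """Sum per-string log-likelihood ratios; declare a target when the sum is large.

    models: per string, a tuple (mu_target, var_target, mu_clutter, var_clutter).
    """
    return sum(gauss_loglik(x, mt, vt) - gauss_loglik(x, mc, vc)
               for x, (mt, vt, mc, vc) in zip(confidences, models))

# Two hypothetical strings whose confidences cluster near 0.9 for targets, 0.2 for clutter.
models = [(0.9, 0.01, 0.2, 0.01), (0.9, 0.02, 0.2, 0.02)]
```

Sliding the decision threshold on the fused ratio trades correct target calls against false alarms, which is how the low-false-alarm operating point described above is selected.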
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as the dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. A participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG- and ECG-related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. The predicted CTL is then used to adaptively allocate the number of process control tasks between the operator and a computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the proposed adaptive automation strategy.
Tse, Samson; Davidson, Larry; Chung, Ka-Fai; Yu, Chong Ho; Ng, King Lam; Tsoi, Emily
2015-02-01
More mental health services are adopting the recovery paradigm. This study adds to prior research by (a) using measures of stages of recovery and elements of recovery that were designed and validated in a non-Western, Chinese culture and (b) testing which demographic factors predict advanced recovery and whether placing importance on certain elements predicts advanced recovery. We examined recovery and factors associated with recovery among 75 Hong Kong adults who were diagnosed with schizophrenia and assessed to be in clinical remission. Data were collected on socio-demographic factors, recovery stages and elements associated with recovery. Logistic regression analysis was used to identify variables that could best predict stages of recovery. Receiver operating characteristic curves were used to assess the classification accuracy of the model (i.e. rates of correct classification of stages of recovery). Logistic regression results indicated that stages of recovery could be distinguished with reasonable accuracy for Stage 3 ('living with disability', classification accuracy = 75.45%) and Stage 4 ('living beyond disability', classification accuracy = 75.50%). However, there was insufficient information to predict Combined Stages 1 and 2 ('overwhelmed by disability' and 'struggling with disability'). Having a meaningful role and age were found to be the most important differentiators of recovery stage. Preliminary findings suggest that personally adopting salient life roles is important to recovery and that this component should be incorporated into mental health services. © The Author(s) 2014.
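For a binary stage split, the ROC analysis used here reduces to the probability that a randomly chosen advanced-stage participant receives a higher predicted score than a non-advanced one (the Mann-Whitney interpretation of AUC). A minimal sketch with made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive case scores higher than a random negative one (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted probabilities of being in Stage 4 ("living beyond disability")
stage4 = [0.81, 0.74, 0.66, 0.90]   # truly Stage 4
others = [0.30, 0.45, 0.66, 0.20]   # truly Stage 3 or below
print(round(roc_auc(stage4, others), 3))  # -> 0.969
```

The classification-accuracy figures in the abstract correspond to picking one threshold on such scores; the AUC summarizes performance over all thresholds.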
Word-level language modeling for P300 spellers based on discriminative graphical models
NASA Astrophysics Data System (ADS)
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
Vélez-de Lachica, J C; Valdez-Jiménez, L A; Inzunza-Sánchez, J M
2017-01-01
Hallux valgus is considered the most common musculoskeletal deformity, with a prevalence of 88%. There are more than 130 surgical techniques for its treatment; currently, percutaneous ones are popular; however, they do not take into account the metatarsal-phalangeal correction angle. The aim of this study was to propose a modified technique for the percutaneous correction of the metatarsal-phalangeal and inter-metatarsal angles and to evaluate its clinical and radiological results. An experimental, prospective, longitudinal study was conducted in 10 patients with moderate to severe hallux valgus according to the classification of Coughlin and Mann; the results were evaluated with the AOFAS scale at 15, 30, 60 and 90 days. The McBride technique and the percutaneous anchor technique with the proposed modification were performed. The AOFAS scale was applied as described, revealing a progressive increase in the rating; the average correction of the inter-metatarsal angle was 8.8 degrees and that of the metatarsal-phalangeal angle, 9.12 degrees. The modified percutaneous anchor technique showed clear clinical and radiographic improvements in the short term. Our modified technique is proposed for future projects, including a large sample with long-term follow-up.
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
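A heavily simplified version of the expectation-maximization idea can be sketched as follows: one biallelic locus, a single global dropout rate, Hardy-Weinberg equilibrium, and no sample- or locus-specific effects or inbreeding. The model and counts are illustrative, not the paper's full joint estimator.

```python
def em_dropout(nAA, nAB, nBB, iters=300):
    """EM estimates of allele frequency p (of allele A) and heterozygote
    dropout rate d at one biallelic locus under HWE. A heterozygote is
    observed as either homozygote with probability d/2 each; true
    homozygotes are observed correctly. Simplified illustration only."""
    N = nAA + nAB + nBB
    p, d = 0.5, 0.05  # initial guesses
    for _ in range(iters):
        # E-step: probability that an observed homozygote is a dropped-out het
        wAA = (p * (1 - p) * d) / (p * p + p * (1 - p) * d)
        wBB = ((1 - p) * p * d) / ((1 - p) ** 2 + (1 - p) * p * d)
        dropped = wAA * nAA + wBB * nBB      # expected hidden heterozygotes
        hets = nAB + dropped                 # expected true heterozygotes
        true_AA = nAA * (1 - wAA)            # expected true AA homozygotes
        # M-step: re-estimate dropout rate and allele frequency
        d = dropped / hets
        p = (2 * true_AA + hets) / (2 * N)
    return p, d

# Observed counts consistent with p = 0.5, d = 0.2 (expected values, N = 1000)
p, d = em_dropout(300, 400, 300)
print(round(p, 3), round(d, 3))  # -> 0.5 0.2
```

The fixed point recovers the generating parameters because the observed excess of homozygotes over the HWE expectation identifies d; the full method additionally partitions dropout into sample- and locus-specific components.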
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-01-01
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (worn on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear. PMID:25365459
Vaz de Souza, Daniel; Schirru, Elia; Mannocci, Francesco; Foschi, Federico; Patel, Shanon
2017-01-01
The aim of this study was to compare the diagnostic efficacy of 2 cone-beam computed tomographic (CBCT) units with parallax periapical (PA) radiographs for the detection and classification of simulated external cervical resorption (ECR) lesions. Simulated ECR lesions were created on 13 mandibular teeth from 3 human dry mandibles. PA and CBCT scans were taken using 2 different units, Kodak CS9300 (Carestream Health Inc, Rochester, NY) and Morita 3D Accuitomo 80 (J Morita, Kyoto, Japan), before and after the creation of the ECR lesions. The lesions were then classified according to Heithersay's classification and their position on the root surface. Sensitivity, specificity, positive predictive values, negative predictive values, and receiver operator characteristic curves as well as the reproducibility of each technique were determined for diagnostic accuracy. The area under the receiver operating characteristic curve for diagnostic accuracy for PA radiography and the Kodak and Morita CBCT scanners was 0.872, 0.99, and 0.994, respectively. The sensitivity and specificity for both CBCT scanners were significantly better than PA radiography (P < .001). There was no statistical difference between the sensitivity and specificity of the 2 scanners. The percentage of correct diagnoses according to the tooth type was 87.4% for the Kodak scanner, 88.3% for the Morita scanner, and 48.5% for PA radiography. The ECR lesions were correctly identified according to the tooth surface in 87.8% of Kodak, 89.1% of Morita, and 49.4% of PA cases. The ECR lesions were correctly classified according to the Heithersay classification in 70.5% of Kodak, 69.2% of Morita, and 39.7% of PA cases. This study revealed that both CBCT scanners tested were equally accurate in diagnosing ECR and significantly better than PA radiography. CBCT scans were more likely to correctly categorize ECR according to the Heithersay classification compared with parallax PA radiographs. 
Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
48 CFR 47.305-9 - Commodity description and freight classification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on the rating applicable to the freight classification description published in the National Motor...
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo
2018-06-05
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
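The multiple-events idea, averaging several unreliable single-event decisions before judging the whole trial, can be illustrated with simulated detector scores. The detector model and its 65% single-event accuracy below are assumptions made for the sketch, not the study's classifier.

```python
# Sketch of the multiple-events (ME) method: average several noisy single-event
# (SE) classifier scores and threshold the mean, instead of trusting one event.
import random

def se_classifier(is_error, accuracy=0.65, rng=random):
    """Simulated single-event ErrP detector: returns a score in [0, 1] that
    lands on the correct side of 0.5 only `accuracy` of the time."""
    correct_side = rng.random() < accuracy
    hi = rng.uniform(0.5, 1.0)
    lo = rng.uniform(0.0, 0.5)
    if is_error:
        return hi if correct_side else lo
    return lo if correct_side else hi

def me_decision(scores):
    """Declare the whole trial erroneous if the mean SE score exceeds 0.5."""
    return sum(scores) / len(scores) > 0.5

rng = random.Random(42)
trials, events_per_trial = 2000, 7
se_hits = me_hits = 0
for _ in range(trials):
    is_error = rng.random() < 0.5
    scores = [se_classifier(is_error, rng=rng) for _ in range(events_per_trial)]
    se_hits += (scores[0] > 0.5) == is_error   # single-event decision
    me_hits += me_decision(scores) == is_error  # multiple-events decision
print(se_hits / trials < me_hits / trials)  # ME beats SE accuracy
```

Averaging works here for the same reason majority voting does: independent errors partially cancel, so trial-level accuracy exceeds the per-event detection rate, matching the paper's observation that feasible accuracies are reachable even with low single-event ErrP detection rates.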
Novel high/low solubility classification methods for new molecular entities.
Dave, Rutwij A; Morris, Marilyn E
2016-09-10
This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities. Compounds (N=635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/mL units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM > -3.05 or MSol > 0.30 mg/mL will have ≥85% probability of being highly soluble and new molecular entities with MLogSM ≤ -3.05 or MSol ≤ 0.30 mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated solubility classification of an independent set of 108 orally administered drugs using MSol (0.3 mg/mL) and our method correctly classified 81% and 95% of compounds into high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
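The reported decision rule is simple enough to state directly in code; the example compound (molecular weight, solubility) is hypothetical.

```python
# The abstract's thresholds: highly soluble when MLogSM > -3.05 or,
# on the mass scale, MSol > 0.30 mg/mL.
import math

def solubility_class(msol_mg_ml=None, mlogsm=None):
    """Classify a compound as 'high' or 'low' solubility from either
    measured solubility (mg/mL) or measured log molar solubility."""
    if mlogsm is not None:
        return "high" if mlogsm > -3.05 else "low"
    return "high" if msol_mg_ml > 0.30 else "low"

# Hypothetical compound: molecular weight 350 g/mol, solubility 0.5 mg/mL
mw, msol = 350.0, 0.5
mlogsm = math.log10((msol / 1000.0) / mw * 1000.0)  # mg/mL -> g/L -> mol/L
print(solubility_class(msol_mg_ml=msol), solubility_class(mlogsm=mlogsm))  # -> high high
```

For this compound the two criteria agree (log10(0.5/350) ≈ -2.85 > -3.05); the abstract's 85% concordance with BDDCS reflects how often such threshold calls match the literature classes.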
Observation versus classification in supervised category learning.
Levering, Kimery R; Kurtz, Kenneth J
2015-02-01
The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is an up-to-date, promising technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of various complexity, whose accuracy can differ significantly depending on the classification task. In this paper we study the performance of ensemble classification methods for the classification of forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of an error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm, so the benefit of boosting ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow forest species to be recognized with accuracy high enough to be compared with ground-based forest inventory data.
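A toy version of ECOC can make the scheme concrete: each class gets a binary codeword, one binary learner is trained per codeword bit, and a sample is assigned to the class whose codeword is nearest in Hamming distance to the predicted bits. A nearest-centroid stump stands in for the Gaussian-kernel SVM base learner used in the paper; the species names, codewords, and two-band "spectra" are invented for the sketch.

```python
# Toy error-correcting output codes (ECOC) classifier with a centroid stump
# as the per-bit binary learner. Data and codewords are synthetic.

CODES = {"pine": (0, 0, 1), "birch": (0, 1, 0), "aspen": (1, 0, 0)}

def train_bit(X, y, bit):
    """Centroids of the two super-classes induced by one codeword bit."""
    pos = [x for x, lab in zip(X, y) if CODES[lab][bit] == 1]
    neg = [x for x, lab in zip(X, y) if CODES[lab][bit] == 0]
    mean = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))
    return mean(pos), mean(neg)

def predict(models, x):
    bits = []
    for pos_c, neg_c in models:
        d = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
        bits.append(1 if d(pos_c) < d(neg_c) else 0)
    # Hamming decoding: nearest codeword wins
    ham = lambda cw: sum(b != c for b, c in zip(bits, cw))
    return min(CODES, key=lambda lab: ham(CODES[lab]))

# Synthetic two-band "spectra" per species
X = [(0.2, 0.9), (0.25, 0.85), (0.6, 0.6), (0.65, 0.55), (0.9, 0.2), (0.85, 0.25)]
y = ["pine", "pine", "birch", "birch", "aspen", "aspen"]
models = [train_bit(X, y, b) for b in range(3)]
print(predict(models, (0.22, 0.88)))  # near the pine cluster -> pine
```

Real hyperspectral pixels have hundreds of bands and many classes, so longer codewords with larger pairwise Hamming distances are used, which is what gives ECOC its error-correcting capacity.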
Low-cost real-time automatic wheel classification system
NASA Astrophysics Data System (ADS)
Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria
1992-11-01
This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which had previously been performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates of up to 30 wheels per minute, regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked off from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
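Rotation-invariant matching of the kind described (classify regardless of orientation in the field of view) can be sketched with a radial intensity profile plus nearest-template matching. The 3x3 "image", wheel style names, and templates below are invented stand-ins, not the deployed system's features.

```python
# Sketch: reduce a wheel "image" to a radial profile (mean intensity per
# concentric ring around the hub), which is unchanged by rotation, then
# match by nearest template. All data are synthetic stand-ins.

def radial_profile(img, cx, cy, nrings=3, rmax=3.0):
    """Mean pixel value in each concentric ring around (cx, cy)."""
    sums = [0.0] * nrings
    counts = [0] * nrings
    for yy, row in enumerate(img):
        for xx, v in enumerate(row):
            r = ((xx - cx) ** 2 + (yy - cy) ** 2) ** 0.5
            ring = min(int(r / rmax * nrings), nrings - 1)
            sums[ring] += v
            counts[ring] += 1
    return [s / c for s, c in zip(sums, counts) if c]

def classify(profile, templates):
    """Nearest template by squared Euclidean distance between profiles."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(profile, templates[name]))

templates = {"alloy-5spoke": [0.9, 0.4, 0.1], "steel-plain": [0.9, 0.8, 0.7]}
img = [[0.7, 0.8, 0.7],
       [0.8, 0.9, 0.8],
       [0.7, 0.8, 0.7]]  # uniformly bright disc
print(classify(radial_profile(img, 1, 1, nrings=3, rmax=2.0), templates))  # -> steel-plain
```

Because the profile averages over each ring, a partially occluded wheel still yields a similar signature, consistent with the system's tolerance to blocked portions of the wheel.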
Superconducting fluctuations at arbitrary disorder strength
NASA Astrophysics Data System (ADS)
Stepanov, Nikolai A.; Skvortsov, Mikhail A.
2018-04-01
We study the effect of superconducting fluctuations on the conductivity of metals at arbitrary temperatures T and impurity scattering rates τ^-1. Using the standard diagrammatic technique, but in the Keldysh representation, we derive the general expression for the fluctuation correction to the dc conductivity applicable for any space dimensionality and analyze it in the case of the film geometry. We observe that the usual classification in terms of the Aslamazov-Larkin, Maki-Thompson, and density-of-states diagrams is to some extent artificial, since these contributions produce similar terms, which partially cancel each other. In the diffusive limit, our results fully coincide with recent calculations in the Keldysh technique. In the ballistic limit near the transition, we demonstrate the absence of a divergent term (Tτ)^2 attributed previously to the density-of-states contribution. In the ballistic limit far above the transition, the temperature-dependent part of the conductivity correction is shown to grow as Tτ/ln(T/Tc), where Tc is the critical temperature.
Lastra-Mejías, Miguel; Torreblanca-Zanca, Albertina; Aroca-Santos, Regina; Cancilla, John C; Izquierdo, Jesús G; Torrecilla, José S
2018-08-01
A set of 10 honeys comprising a diverse range of botanical origins has been successfully characterized through fluorescence spectroscopy using inexpensive light-emitting diodes (LEDs) as light sources. It has been shown that each LED-honey combination tested produces a unique emission spectrum, which enables the authentication of every honey and allows it to be correctly labeled with its botanical origin. Furthermore, the analysis was backed up by a mathematical analysis based on partial least squares models, which led to a correct classification rate of over 95% for each type of honey. Finally, the same approach was followed to analyze rice syrup, a common honey adulterant that is challenging to identify when mixed with honey. A LED-dependent and unique fluorescence spectrum was found for the syrup, which presumably qualifies this approach for the design of uncomplicated, fast, and cost-effective quality control and adulteration assessment tools for different types of honey. Copyright © 2018 Elsevier B.V. All rights reserved.
Taxman, Faye S; Kitsantas, Panagiota
2009-08-01
OBJECTIVE TO BE ADDRESSED: The purpose of this study was to investigate the structural and organizational factors that contribute to the availability and increased capacity for substance abuse treatment programs in correctional settings. We used classification and regression tree statistical procedures to identify how multi-level data can explain the variability in availability and capacity of substance abuse treatment programs in jails and probation/parole offices. The data for this study combined the National Criminal Justice Treatment Practices (NCJTP) Survey and the 2000 Census. The NCJTP survey was a nationally representative sample of correctional administrators for jails and probation/parole agencies. The sample size included 295 substance abuse treatment programs that were classified according to the intensity of their services: high, medium, and low. The independent variables included jurisdictional-level structural variables, attributes of the correctional administrators, and program and service delivery characteristics of the correctional agency. The two most important variables in predicting the availability of all three types of services were stronger working relationships with other organizations and the adoption of a standardized substance abuse screening tool by correctional agencies. For high and medium intensity programs, capacity increased when an organizational learning strategy was used by administrators and the organization used a substance abuse screening tool. Implications for advancing treatment practices in correctional settings are discussed, including further work to test theories on how to better understand access to intensive treatment services. This study presents the first phase of understanding capacity-related issues regarding treatment programs offered in correctional settings.
Adriaens, E; Guest, R; Willoughby, J A; Fochtman, P; Kandarova, H; Verstraelen, S; Van Rompay, A R
2018-06-01
Assessment of ocular irritancy is an international regulatory requirement in the safety evaluation of industrial and consumer products. Although many in vitro ocular irritation assays exist, alone they are incapable of fully categorizing chemicals. The objective of CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was to develop tiered testing strategies for eye irritation assessment that can lead to complete replacement of the in vivo Draize rabbit eye test (OECD TG 405). A set of 80 reference chemicals was tested with seven test methods, one method was the Slug Mucosal Irritation (SMI) test method. The method measures the amount of mucus produced (MP) during a single 1-hour contact with a 1% and 10% dilution of the chemical. Based on the total MP, a classification (Cat 1, Cat 2, or No Cat) is predicted. The SMI test method correctly identified 65.8% of the Cat 1 chemicals with a specificity of 90.5% (low over-prediction rate for in vivo Cat 2 and No Cat chemicals). Mispredictions were predominantly unidirectional towards lower classifications with 26.7% of the liquids and 40% of the solids being underpredicted. In general, the performance was better for liquids than for solids with respectively 76.5% vs 57.1% (Cat 1), 61.5% vs 50% (Cat 2), and 87.5% vs 85.7% (No Cat) being identified correctly. Copyright © 2017 Elsevier Ltd. All rights reserved.
Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.
Finke, Mareike; Billinger, Martin; Büchner, Andreas
Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine if CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility to reuse a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Our data provide first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, the classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
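Shrinkage LDA regularizes the pooled covariance toward a scaled identity before inverting it, which keeps the discriminant stable when features outnumber trials. A minimal two-class, two-feature sketch on synthetic "epoch features" (not real EEG, and with a fixed shrinkage weight rather than the analytic estimate):

```python
# Sketch of shrinkage LDA: Sigma_s = (1 - lam)*Sigma + lam*(trace(Sigma)/p)*I,
# then classify by the sign of w . (x - midpoint) with w = Sigma_s^-1 (mu1 - mu0).
# Two synthetic 2-D classes stand in for standard/deviant EEG epochs.

def mean(rows):
    return [sum(c) / len(rows) for c in zip(*rows)]

def pooled_cov(a, b):
    """2x2 maximum-likelihood pooled covariance of two classes."""
    def dev(rows, m):
        return [[r[i] - m[i] for i in range(2)] for r in rows]
    n = len(a) + len(b)
    cov = [[0.0, 0.0], [0.0, 0.0]]
    for d in dev(a, mean(a)) + dev(b, mean(b)):
        for i in range(2):
            for j in range(2):
                cov[i][j] += d[i] * d[j] / n
    return cov

def shrink(cov, lam=0.3):
    t = (cov[0][0] + cov[1][1]) / 2.0  # trace/p with p = 2
    return [[(1 - lam) * cov[i][j] + (lam * t if i == j else 0.0)
             for j in range(2)] for i in range(2)]

def lda_predict(std, dev, x, lam=0.3):
    m0, m1 = mean(std), mean(dev)
    c = shrink(pooled_cov(std, dev), lam)
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det], [-c[1][0] / det, c[0][0] / det]]
    w = [inv[i][0] * (m1[0] - m0[0]) + inv[i][1] * (m1[1] - m0[1]) for i in range(2)]
    mid = [(m0[i] + m1[i]) / 2 for i in range(2)]
    score = sum(w[i] * (x[i] - mid[i]) for i in range(2))
    return "deviant" if score > 0 else "standard"

standard = [(1.0, 0.2), (1.1, 0.1), (0.9, 0.3)]  # synthetic standard-tone epochs
deviant = [(2.0, 1.1), (2.1, 0.9), (1.9, 1.0)]   # synthetic deviant-tone epochs
print(lda_predict(standard, deviant, (2.05, 1.0)))  # -> deviant
```

In practice the shrinkage weight is chosen analytically (e.g., the Ledoit-Wolf estimate) and the feature vectors are spatio-temporal EEG amplitudes; the fixed lam here is an assumption to keep the sketch short.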
Age and gender classification of Merriam's turkeys from foot measurements
Mark A. Rumble; Todd R. Mills; Brian F. Wakeling; Richard W. Hoffman
1996-01-01
Wild turkey sex and age information is needed to define population structure but is difficult to obtain. We classified age and gender of Merriam's turkeys (Meleagris gallopavo merriami) accurately based on measurements of two foot characteristics. Gender of birds was correctly classified 93% of the time from measurements of middle toe pads; correct...
Boursier, Jérôme; Bertrais, Sandrine; Oberti, Frédéric; Gallois, Yves; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Zarski, Jean-Pierre; Calès, Paul
2011-11-30
Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill-levels and non-invasive tests. Results were expressed as percentages of correctly classified patients. In population #1 including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% for local pathologists vs. 82.2% (p < 10⁻³) for a single expert pathologist. Significant discrepancy (≥ 2 FM vs. reference histological result) rates were: Fibrotest: 17.2%, FibroMeter2G: 5.6%, local pathologists: 4.9%, FibroMeter3G: 0.5%, expert pathologist: 0% (p < 10⁻³). In population #2 including 1,056 patients and comparing blood tests, the discrepancy scores, taking into account the error magnitude, of detailed fibrosis classification were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37, p < 10⁻³) or Fibrotest (0.84 ± 0.80, p < 10⁻³). In population #3 (and #4) including 458 (359) patients and comparing blood tests and Fibroscan, accuracies of detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%), p < 10⁻³ (p < 10⁻³). 
Significant discrepancy (≥ 2 FM) rates were, respectively: Fibrotest: 21.3% (22.2%), Fibroscan: 12.9% (12.3%), FibroMeter2G: 5.7% (6.0%), FibroMeter3G: 0.9% (0.9%), p < 10⁻³ (p < 10⁻³). The accuracy in detailed fibrosis classification of the best-performing blood test outperforms liver biopsy read by a local pathologist, i.e., in clinical practice; however, the classification precision is apparently lower. This detailed classification accuracy is much lower than that of significant fibrosis with Fibroscan and even Fibrotest but higher with FibroMeter3G. FibroMeter classification accuracy was significantly higher than those of other non-invasive tests. Finally, for hepatitis C evaluation in clinical practice, fibrosis degree can be evaluated using an accurate blood test.
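The two headline figures reported above — the percentage of correctly classified patients and the rate of significant (≥ 2 Metavir stage) discrepancy — can be computed as in this sketch (the helper and the example stages are hypothetical, not the study's data):

```python
# Metavir F stages run 0-4; a "significant discrepancy" is a prediction
# at least two stages away from the reference reading.
def classification_metrics(reference, predicted):
    n = len(reference)
    correct = sum(r == p for r, p in zip(reference, predicted))
    discrepant = sum(abs(r - p) >= 2 for r, p in zip(reference, predicted))
    return 100.0 * correct / n, 100.0 * discrepant / n

reference_fm = [0, 1, 2, 2, 3, 4, 4, 1]
predicted_fm = [0, 1, 2, 4, 3, 4, 3, 1]
accuracy, discrepancy = classification_metrics(reference_fm, predicted_fm)
print(accuracy, discrepancy)   # 75.0 12.5
```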
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
... Authorization of Additional Classification and Rate, Standard Form 1444 AGENCY: Department of Defense (DOD... of Additional Classification and Rate, Standard Form 1444. DATES: Comments may be submitted on or.../or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Mr. Ernest Woodson...
Li, Yun; Zhang, Jin-Yu; Wang, Yuan-Zhong
2018-01-01
Three data fusion strategies (low-level, mid-level, and high-level) combined with a multivariate classification algorithm (random forest, RF) were applied to authenticate the geographical origins of Panax notoginseng collected from five regions of Yunnan province in China. In low-level fusion, the original data from two spectra (Fourier transform mid-IR spectrum and near-IR spectrum) were directly concatenated into a new matrix, which was then used for classification. Mid-level fusion was the strategy that inputted variables extracted from the spectral data into an RF classification model. The extracted variables were processed by iterative variable selection of the RF model and principal component analysis. High-level fusion combined the decisions of each spectroscopic technique into an ensemble decision. The results showed that mid-level and high-level data fusion took advantage of the information synergy between the two spectroscopic techniques and had better classification performance than independent decision making. High-level data fusion was the most effective strategy, since its classification results were better than those of the other fusion strategies: accuracy rates ranged between 93% and 96% for low-level data fusion, between 95% and 98% for mid-level data fusion, and between 98% and 100% for high-level data fusion. In conclusion, the high-level data fusion strategy for Fourier transform mid-IR and near-IR spectra can be used as a reliable tool for correct geographical identification of P. notoginseng. Graphical abstract: The analytical steps of Fourier transform mid-IR and near-IR spectral data fusion for the geographical traceability of Panax notoginseng.
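The three fusion strategies can be sketched as follows on synthetic stand-ins for the two spectral blocks (all sizes, class shifts, and the PCA component count are illustrative assumptions; the paper's mid-level variables also came from RF-based variable selection):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Stand-ins for the mid-IR (40 vars) and near-IR (30 vars) blocks,
# 300 samples over five geographical origins.
y = rng.integers(0, 5, size=300)
mir = rng.normal(size=(300, 40)) + 0.4 * y[:, None]
nir = rng.normal(size=(300, 30)) + 0.4 * y[:, None]
tr, te = slice(0, 200), slice(200, 300)

# Low-level fusion: concatenate the raw blocks into one matrix.
low = np.hstack([mir, nir])
acc_low = RandomForestClassifier(random_state=0).fit(low[tr], y[tr]).score(low[te], y[te])

# Mid-level fusion: extract components per block, then concatenate scores.
pa, pb = PCA(5).fit(mir[tr]), PCA(5).fit(nir[tr])
mid_tr = np.hstack([pa.transform(mir[tr]), pb.transform(nir[tr])])
mid_te = np.hstack([pa.transform(mir[te]), pb.transform(nir[te])])
acc_mid = RandomForestClassifier(random_state=0).fit(mid_tr, y[tr]).score(mid_te, y[te])

# High-level fusion: average the per-block class probabilities (one simple
# way to fuse decisions; the paper's ensemble rule may differ).
ra = RandomForestClassifier(random_state=0).fit(mir[tr], y[tr])
rb = RandomForestClassifier(random_state=0).fit(nir[tr], y[tr])
proba = (ra.predict_proba(mir[te]) + rb.predict_proba(nir[te])) / 2
acc_high = (proba.argmax(axis=1) == y[te]).mean()
print(acc_low, acc_mid, acc_high)
```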
Grabowska, Hanna; Narkiewicz, Krzysztof; Grabowski, Władysław; Grzegorczyk, Michał; Gaworska-Krzemińska, Aleksandra; Swietlik, Dariusz
2009-01-01
Arterial hypertension is among the most important risk factors of atherosclerosis and associated cardiovascular pathology, with a prevalence rate estimated at 20-30% of the adult population. Nowadays, it is recommended to perform an individual assessment of cardiovascular risk in a patient and to determine the threshold value for arterial hypertension, even though blood pressure classification values according to the European Society of Hypertension and the European Society of Cardiology (ESH/ESC), as well as the Polish Society of Hypertension (PTNT), have remained unchanged. To determine what nurses with a Bachelor of Nursing degree know about the prevalence and classification of arterial blood pressure, as well as sequelae of arterial hypertension. This study included 116 qualified nurses (112 females, 4 males; age 21-50; seniority 0-29 years). The research period was from June 2007 to January 2008. The research tool was a questionnaire devised by the authors. We found that half (on average) of those questioned have up-to-date knowledge regarding classification of blood pressure and prevalence of arterial hypertension, but just one out of three respondents (on average) was able to describe its sequelae. Relatively less known among nurses with a Bachelor of Nursing degree were aspects of "white coat hypertension". Statistically significant differences regarding correct answers were noted depending on seniority (p = 0.002), place of work (p < 0.001), or position (p < 0.001). There were no differences depending on age, place of residence, marital status, or form of postgraduate education of nurses with a Bachelor of Nursing degree. It is necessary to improve knowledge among students of nursing (BN degree) about the current classification of blood pressure, as well as the prevalence of arterial hypertension and its sequelae.
Provisional in-silico biopharmaceutics classification (BCS) to guide oral drug product development
Wolk, Omri; Agbaria, Riad; Dahan, Arik
2014-01-01
The main objective of this work was to investigate in-silico predictions of physicochemical properties, in order to guide oral drug development by provisional biopharmaceutics classification system (BCS). Four in-silico methods were used to estimate LogP: group contribution (CLogP) using two different software programs, atom contribution (ALogP), and element contribution (KLogP). The correlations (r2) of CLogP, ALogP and KLogP versus measured LogP data were 0.97, 0.82, and 0.71, respectively. The classification of drugs with reported intestinal permeability in humans was correct for 64.3%–72.4% of the 29 drugs on the dataset, and for 81.82%–90.91% of the 22 drugs that are passively absorbed using the different in-silico algorithms. Similar permeability classification was obtained with the various in-silico methods. The in-silico calculations, along with experimental melting points, were then incorporated into a thermodynamic equation for solubility estimations that largely matched the reference solubility values. It was revealed that the effect of melting point on the solubility is minor compared to the partition coefficient, and an average melting point (162.7°C) could replace the experimental values, with similar results. The in-silico methods classified 20.76% (±3.07%) as Class 1, 41.51% (±3.32%) as Class 2, 30.49% (±4.47%) as Class 3, and 6.27% (±4.39%) as Class 4. In conclusion, in-silico methods can be used for BCS classification of drugs in early development, from merely their molecular formula and without foreknowledge of their chemical structure, which will allow for the improved selection, engineering, and developability of candidates. These in-silico methods could enhance success rates, reduce costs, and accelerate oral drug products development. PMID:25284986
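The melting-point-based solubility estimate can be illustrated with Yalkowsky's general solubility equation, one common form of the thermodynamic relation described above (the authors' exact equation may differ; the 162.7 °C default is the paper's average melting point):

```python
# GSE: log S (mol/L) = 0.5 - 0.01 * (MP - 25) - logP, with MP in Celsius.
def estimated_log_solubility(logp, melting_point_c=162.7):
    return 0.5 - 0.01 * (melting_point_c - 25.0) - logp

# Because MP enters with a 0.01 coefficient, a 20-degree error in MP moves
# log S by only 0.2, while a one-unit error in logP moves it by a full 1.0.
# This is why an average melting point gives nearly the same classification.
print(round(estimated_log_solubility(2.0), 3))   # -2.877
```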
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole-slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
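The texture step can be sketched with a minimal gray-level co-occurrence matrix (GLCM) and a Haralick-style statistic; contrast is used here purely for illustration (the study singles out Haralick feature 4, and the tiles below are synthetic):

```python
import numpy as np

def glcm(tile, levels=8):
    """Normalized GLCM of an 8-bit tile for the horizontal (0, 1) offset."""
    q = np.minimum(tile * levels // 256, levels - 1)   # quantize gray levels
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum of (i - j)^2 weighted by co-occurrence."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

rng = np.random.default_rng(0)
smooth = np.full((100, 100), 128)            # uniform 100 x 100 subregion
noisy = rng.integers(0, 256, (100, 100))     # high-texture subregion
print(contrast(glcm(smooth)), contrast(glcm(noisy)))
```

A uniform tile yields zero contrast, while a highly textured tile yields a large value, which is the kind of separation used to discriminate tissue classes per subregion.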
Description of cervical cancer mortality in Belgium using Bayesian age-period-cohort models
2009-01-01
Objective To correct cervical cancer mortality rates for death cause certification problems in Belgium and to describe the corrected trends (1954-1997) using Bayesian models. Method Mortality data for cervix uteri cancer (CVX), corpus uteri cancer (CRP), not otherwise specified (NOS) uterus cancer and other very rare uterus cancers (OTH) were extracted from the WHO mortality database, together with population data for Belgium and the Netherlands. Different ICD (International Classification of Diseases) revisions were used over time for death cause certification. In the Netherlands, the proportion of not otherwise specified uterine cancer deaths was small over large periods, and therefore internal reallocation could be used to estimate corrected cervical cancer mortality rates. In Belgium, the proportion of improperly defined uterus cancer deaths was high. Therefore, the age-specific proportions of uterus cancer deaths that are probably of cervical origin in the Netherlands were applied to Belgian uterus cancer deaths to estimate the corrected number of cervical cancer deaths (corCVX). A Bayesian loglinear Poisson-regression model was fitted to disentangle the separate effects of age, period and cohort. Results The corrected age-standardized mortality rate (ASMR) decreased regularly from 9.2/100,000 in the mid 1950s to 2.5/100,000 in the late 1990s. Inclusion of age, period and cohort in the models was required to obtain an adequate fit. Cervical cancer mortality increases with age, declines over calendar period and varies irregularly by cohort. Conclusion Mortality increased with ageing and declined over time in most age groups, but varied irregularly by birth cohort. Overall, with some discrete exceptions, mortality decreased for successive generations up to the cohorts born in the 1930s. This decline stopped for cohorts born in the 1940s and thereafter. 
For the youngest cohorts, even a tendency towards an increasing risk of dying from cervical cancer could be observed, reflecting increased exposure to risk factors. The fact that this increase was limited for the youngest cohorts could be explained as an effect of screening. Bayesian modeling provided similar results compared to previously used classical Poisson models. However, Bayesian models are more robust for estimating rates when data are sparse (youngest age groups, most recent cohorts) and can be used to predict future trends.
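As a quick arithmetic check on the reported decline, the drop in corrected ASMR from 9.2 to 2.5 per 100,000 corresponds to an average annual percent change of roughly -3% (taking the interval as 1955 to 1998, an assumption about "mid 1950s" and "late 1990s"):

```python
rate_start, rate_end = 9.2, 2.5       # corrected ASMR per 100,000
years = 1998 - 1955                   # assumed span of the observed decline
apc = ((rate_end / rate_start) ** (1 / years) - 1) * 100
print(f"average annual percent change: {apc:.2f}%")
```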
Code of Federal Regulations, 2011 CFR
2011-10-01
..., charges, classifications, rules or regulations. 565.9 Section 565.9 Shipping FEDERAL MARITIME COMMISSION... Commission review, suspension and prohibition of rates, charges, classifications, rules or regulations. (a)(1..., charges, classifications, rules or regulations) from the Commission, each controlled carrier shall file a...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., charges, classifications, rules or regulations. 565.9 Section 565.9 Shipping FEDERAL MARITIME COMMISSION... Commission review, suspension and prohibition of rates, charges, classifications, rules or regulations. (a)(1..., charges, classifications, rules or regulations) from the Commission, each controlled carrier shall file a...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System... 2007 North American Industry Classification System (NAICS) codes currently used in Federal Wage System... (OPM) issued a final rule (73 FR 45853) to update the 2002 North American Industry Classification...
76 FR 53699 - Labor Surplus Area Classification Under Executive Orders 12073 and 10582
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-29
... DEPARTMENT OF LABOR Employment and Training Administration Labor Surplus Area Classification Under... estimates provided to ETA by the Bureau of Labor Statistics are used in making these classifications. The... classification criteria include a ``floor unemployment rate'' (6.0%) and a ``ceiling unemployment rate'' (10.0...
A long-term perspective on deforestation rates in the Brazilian Amazon
NASA Astrophysics Data System (ADS)
Velasco Gomez, M. D.; Beuchle, R.; Shimabukuro, Y.; Grecchi, R.; Simonetti, D.; Eva, H. D.; Achard, F.
2015-04-01
Monitoring tropical forest cover is central to biodiversity preservation, terrestrial carbon stocks, essential ecosystem and climate functions, and ultimately, sustainable economic development. The Amazon forest is the Earth's largest rainforest, and despite intensive studies on current deforestation rates, relatively little is known as to how these compare to historic (pre 1985) deforestation rates. We quantified land cover change between 1975 and 2014 in the so-called Arc of Deforestation of the Brazilian Amazon, covering the southern stretch of the Amazon forest and part of the Cerrado biome. We applied a consistent method that made use of data from Landsat sensors: Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and Operational Land Imager (OLI). We acquired suitable images from the US Geological Survey (USGS) for five epochs: 1975, 1990, 2000, 2010, and 2014. We then performed land cover analysis for each epoch using a systematic sample of 156 sites, each one covering 10 km x 10 km, located at the confluence point of integer degree latitudes and longitudes. An object-based classification of the images was performed with five land cover classes: tree cover, tree cover mosaic, other wooded land, other land cover, and water. The automatic classification results were corrected by visual interpretation, and, when available, by comparison with higher resolution imagery. Our results show a decrease of forest cover of 24.2% in the last 40 years in the Brazilian Arc of Deforestation, with an average yearly net forest cover change rate of -0.71% for the 39 years considered.
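The reported figures are internally consistent: a 24.2% net loss of forest cover over the 39 years considered implies a compound annual net change rate of about -0.71%, as this check shows:

```python
total_loss, years = 0.242, 39
annual_rate = ((1 - total_loss) ** (1 / years) - 1) * 100
print(f"{annual_rate:.2f}% per year")   # -0.71% per year
```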
Zakeri-Milani, Parvin; Barzegar-Jalali, Mohammad; Azimi, Mandana; Valizadeh, Hadi
2009-09-01
The solubility and dissolution rate of active ingredients are of major importance in preformulation studies of pharmaceutical dosage forms. In the present study, passively absorbed drugs are classified based on their intrinsic dissolution rate (IDR) and their intestinal permeabilities. IDR was determined by measuring the dissolution of a non-disintegrating disk of drug, and effective intestinal permeability of tested drugs in rat jejunum was determined using the single perfusion technique. The obtained intrinsic dissolution rate values were in the range of 0.035-56.8 mg/min/cm² for tested drugs. The minimum and maximum intestinal permeabilities in rat intestine were determined to be 1.6 × 10⁻⁵ and 2 × 10⁻⁴ cm/s, respectively. Four classes of drugs were defined: Category I: P(eff,rat) > 5 × 10⁻⁵ cm/s or P(eff,human) > 4.7 × 10⁻⁵ cm/s, and IDR > 1 mg/min/cm²; Category II: P(eff,rat) > 5 × 10⁻⁵ cm/s or P(eff,human) > 4.7 × 10⁻⁵ cm/s, and IDR < 1 mg/min/cm²; Category III: P(eff,rat) < 5 × 10⁻⁵ cm/s or P(eff,human) < 4.7 × 10⁻⁵ cm/s, and IDR > 1 mg/min/cm²; Category IV: P(eff,rat) < 5 × 10⁻⁵ cm/s or P(eff,human) < 4.7 × 10⁻⁵ cm/s, and IDR < 1 mg/min/cm². According to the results obtained and the proposed classification of drugs, it is concluded that drugs can be categorized correctly based on their IDR and intestinal permeability values.
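The four-category scheme reduces to two threshold tests, as in this sketch (the function name is hypothetical; the rat-permeability cutoff of 5 × 10⁻⁵ cm/s and the IDR cutoff of 1 mg/min/cm² are taken from the abstract):

```python
def categorize(p_eff_rat, idr):
    """Assign a drug to Category I-IV from rat effective permeability
    (cm/s) and intrinsic dissolution rate (mg/min/cm^2)."""
    high_perm = p_eff_rat > 5e-5
    high_idr = idr > 1.0
    if high_perm and high_idr:
        return "I"
    if high_perm:
        return "II"
    if high_idr:
        return "III"
    return "IV"

# The extremes reported in the abstract fall in Categories I and IV:
print(categorize(2e-4, 56.8))     # I
print(categorize(1.6e-5, 0.035))  # IV
```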
Feature genes predicting the FLT3/ITD mutation in acute myeloid leukemia
LI, CHENGLONG; ZHU, BIAO; CHEN, JIAO; HUANG, XIAOBING
2016-01-01
In the present study, gene expression profiles of acute myeloid leukemia (AML) samples were analyzed to identify feature genes with the capacity to predict the mutation status of FLT3/ITD. Two machine learning models, namely the support vector machine (SVM) and random forest (RF) methods, were used for classification. Four datasets were downloaded from the European Bioinformatics Institute, two of which (containing 371 samples, including 281 FLT3/ITD mutation-negative and 90 mutation-positive samples) were randomly defined as the training group, while the other two datasets (containing 488 samples, including 350 FLT3/ITD mutation-negative and 138 mutation-positive samples) were defined as the test group. Differentially expressed genes (DEGs) were identified by significance analysis of the microarray data by using the training samples. The classification efficiency of the SVM and RF methods was evaluated using the following parameters: Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and the area under the receiver operating characteristic curve. Functional enrichment analysis was performed for the feature genes with DAVID. A total of 585 DEGs were identified in the training group, of which 580 were upregulated and five were downregulated. The classification accuracy rates of the two methods for the training group, the test group and the combined group using the 585 feature genes were >90%. For the SVM and RF methods, the rates of correct determination, specificity and PPV were >90%, while the sensitivity and NPV were >80%. The SVM method produced a slightly better classification effect than the RF method. A total of 13 biological pathways were overrepresented by the feature genes, mainly involving energy metabolism, chromatin organization and translation. The feature genes identified in the present study may be used to predict the mutation status of FLT3/ITD in patients with AML. PMID:27177049
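The evaluation parameters named above come straight from the 2 × 2 confusion matrix; a sketch with hypothetical counts (sized like the test group's 138 positive and 350 negative samples, but not the study's actual results):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=120, fp=10, tn=340, fn=18)
print({k: round(v, 3) for k, v in m.items()})
```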
Ramírez, J; Górriz, J M; Ortiz, A; Martínez-Murcia, F J; Segovia, F; Salas-Gonzalez, D; Castillo-Barnes, D; Illán, I A; Puntonet, C G
2018-05-15
Alzheimer's disease (AD) is the most common cause of dementia in the elderly and affects approximately 30 million individuals worldwide. Mild cognitive impairment (MCI) is very frequently a prodromal phase of AD, and existing studies have suggested that people with MCI tend to progress to AD at a rate of about 10-15% per year. However, the ability of clinicians and machine learning systems to predict AD based on MRI biomarkers at an early stage is still a challenging problem that can have a great impact on improving treatments. The proposed system, developed by the SiPBA-UGR team for this challenge, is based on feature standardization, ANOVA feature selection, partial least squares feature dimension reduction and an ensemble of One vs. Rest random forest classifiers. With the aim of improving its performance when discriminating healthy controls (HC) from MCI, a second binary classification level was introduced that reconsiders the HC and MCI predictions of the first level. The system was trained and evaluated on ADNI datasets consisting of T1-weighted MRI morphological measurements from HC, stable MCI, converter MCI and AD subjects. The proposed system yields a 56.25% classification score on the test subset which consists of 160 real subjects. The classifier yielded the best performance when compared to: (i) One vs. One (OvO), One vs. Rest (OvR) and error correcting output codes (ECOC) as strategies for reducing the multiclass classification task to multiple binary classification problems, (ii) support vector machines, gradient boosting classifier and random forest as base binary classifiers, and (iii) bagging ensemble learning. A robust method has been proposed for the international challenge on MCI prediction based on MRI data. The system yielded the second best performance during the competition with an accuracy rate of 56.25% when evaluated on the real subjects of the test set. Copyright © 2017 Elsevier B.V. All rights reserved.
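The first-level pipeline can be sketched with scikit-learn on synthetic stand-in features (the sample sizes and class shifts are assumptions, and the partial-least-squares dimension-reduction step is omitted here for brevity):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in for MRI morphological measurements over four classes:
# 0 = HC, 1 = stable MCI, 2 = converter MCI, 3 = AD.
y = rng.integers(0, 4, size=240)
X = rng.normal(size=(240, 50))
X[:, :10] += 0.8 * y[:, None]         # 10 informative features

pipe = Pipeline([
    ("scale", StandardScaler()),                       # feature standardization
    ("anova", SelectKBest(f_classif, k=15)),           # ANOVA feature selection
    ("ovr_rf", OneVsRestClassifier(RandomForestClassifier(random_state=0))),
])
pipe.fit(X[:180], y[:180])
acc = pipe.score(X[180:], y[180:])
print(f"held-out accuracy: {acc:.2f}")
```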
Learning for VMM + WTA Embedded Classifiers
2016-03-31
enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after...measured on our SoC FPAA IC. The test input is composed of signals from an urban environment for 3 objects (generator, idle car, and idle truck...classifier results from a rural truck data set, an urban generator set, and an urban idle car dataset. Solid lines represent our extracted background
Fourier-based classification of protein secondary structures.
Shu, Jian-Jun; Yong, Kian Yan
2017-04-15
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified using these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
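The "signal-plotting" idea — treating a hydrophobicity profile as a digital signal and examining its Fourier spectrum — can be sketched as follows (the idealized profile below stands in for a real sequence mapped through a hydrophobicity scale; an alpha-helix shows a periodicity near 3.6 residues):

```python
import numpy as np

n = 36                                         # residues in the window
x = np.cos(2 * np.pi * np.arange(n) / 3.6)     # idealized helical profile
spectrum = np.abs(np.fft.rfft(x - x.mean()))   # drop the DC component
peak = spectrum[1:].argmax() + 1               # dominant frequency bin
period = n / peak                              # residues per cycle
print(period)   # 3.6
```

A real classifier would derive indices from such spectra (e.g. the energy near the 3.6-residue period for helices versus the ~2-residue period typical of beta strands).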
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be solved by applying a separate classification process to areas with different water depth levels. The water depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water column correction method and the RWDI of Stumpf's method. The research was conducted at Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed new approach for benthic habitat mapping, two different classification procedures were implemented. The first procedure is the commonly applied method in benthic habitat mapping, where the DII image is used as input data for the image classification process over the whole coastal area, regardless of depth variation. The second procedure is the proposed new approach, whose initial step is the separation of the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The final classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map. 
The results show that the proposed mapping approach can be used to map all benthic objects across all depth ranges and achieves better accuracy than the classification map produced using the DII alone.
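Lyzenga's depth-invariant index for one band pair can be sketched as below; the attenuation-coefficient ratio is an input that, in practice, is estimated by regressing log radiances of a single bottom type over varying depth (all numbers here are made up to demonstrate the depth cancellation):

```python
import numpy as np

def depth_invariant_index(band_i, band_j, k_ratio):
    """Lyzenga-style DII for one band pair: ln(Li) - (ki/kj) * ln(Lj)."""
    return np.log(band_i) - k_ratio * np.log(band_j)

# The same bottom seen at two depths: water attenuates each band as
# exp(-2 * k * z), yet the index comes out identical at both depths.
k_i, k_j = 0.10, 0.05
for z in (2.0, 5.0):
    band_i = 0.30 * np.exp(-2 * k_i * z)
    band_j = 0.20 * np.exp(-2 * k_j * z)
    print(round(depth_invariant_index(band_i, band_j, k_i / k_j), 6))
```

The depth term cancels exactly because both bands share the same 2z path length, which is what makes the index usable for classification across varying depth.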
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... for Authorization of Additional Classification and Rate, Standard Form 1444 AGENCIES: Department of... Request for Authorization of Additional Classification and Rate, Standard Form 1444. A notice published in... personal and/or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Ms. Clare...
Rock images classification by using deep convolution neural network
NASA Astrophysics Data System (ADS)
Cheng, Guojian; Guo, Wenhui
2017-08-01
Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which selects and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the rates in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.
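As an illustration of the kind of pipeline the abstract describes, the sketch below shows the forward pass of a minimal convolutional classifier in plain numpy. The weights are random and the class count is hypothetical; a trained network such as the one in the study would learn these parameters from the labelled samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2-D correlation of a (H, W) image with (n, kh, kw) kernels,
    followed by a ReLU nonlinearity."""
    n, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((n, H - kh + 1, W - kw + 1))
    for k in range(n):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def classify(img, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear -> softmax."""
    feat = conv2d(img, kernels).mean(axis=(1, 2))  # one value per kernel
    logits = weights @ feat
    p = np.exp(logits - logits.max())
    return p / p.sum()

img = rng.random((16, 16))                # stand-in for a thin-section patch
kernels = rng.standard_normal((4, 3, 3))  # 4 learned filters in a real net
weights = rng.standard_normal((3, 4))     # 3 hypothetical granularity classes
probs = classify(img, kernels, weights)
```

A real implementation would stack several such layers and train them by backpropagation; the point here is only the shape of the computation.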
Haylen, Bernard T; Lee, Joseph; Maher, Chris; Deprest, Jan; Freeman, Robert
2014-06-01
Results of interobserver reliability studies for the International Urogynecological Association-International Continence Society (IUGA-ICS) Complication Classification coding can be greatly influenced by study design factors such as participant instruction, motivation, and test-question clarity. We attempted to optimize these factors. After a 15-min instructional lecture with eight clinical case examples (including images) and with classification/coding charts available, those clinicians attending an IUGA Surgical Complications workshop were presented with eight similar-style test cases over 10 min and asked to code them using the Category, Time and Site classification. Answers were compared to predetermined correct codes obtained by five instigators of the IUGA-ICS prostheses and grafts complications classification. Prelecture and postquiz participant confidence levels using a five-step Likert scale were assessed. Complete sets of answers to the questions (24 codings) were provided by 34 respondents, only three of whom reported prior use of the charts. Average score [n (%)] out of eight, as well as median score (range) for each coding category were: (i) Category: 7.3 (91 %); 7 (4-8); (ii) Time: 7.8 (98 %); 7 (6-8); (iii) Site: 7.2 (90 %); 7 (5-8). Overall, the equivalent calculations (out of 24) were 22.3 (93 %) and 22 (18-24). Mean prelecture confidence was 1.37 (out of 5), rising to 3.85 postquiz. Urogynecologists had the highest correlation with correct coding, followed closely by fellows and general gynecologists. Optimizing training and study design can lead to excellent results for interobserver reliability of the IUGA-ICS Complication Classification coding, with increased participant confidence in complication-coding ability.
Classification of Birds and Bats Using Flight Tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.
Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior are essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns was on average 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set, both in terms of the number of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.
1972-01-01
three species of Pseudoficalbia from New Guinea. While he was correct in his assignment of species, the characters, though they will separate a...and African material: I have made no attempt to correct these errors, except in the Southeast Asian fauna. In a few cases, I have brought them to...current practice of lumping everything into one supposedly homogeneous genus.” While the statement may ultimately prove correct, I prefer to consider at
Li, Suyun; Li, Zhi; Hua, Wenbin; Wang, Kun; Li, Shuai; Zhang, Yunkun; Ye, Zhewei; Shao, Zengwu; Wu, Xinghuo; Yang, Cao
2017-12-01
Thoracic-lumbar vertebral fracture is very common in the clinic, and late post-traumatic kyphosis is a major cause of reduced quality of life in these patients, which has evoked extensive concern regarding the surgical treatment of the disease. This study aimed to analyze the clinical outcomes and surgical strategies for late post-traumatic kyphosis after failed thoracolumbar fracture operation. All patients presented with back pain and kyphotic apex vertebrae between T12 and L3. According to the Frankel classification grading system, 3 patients were classified as grade D, with the ability to live independently. A systematic review of 12 case series of post-traumatic kyphosis after failed thoracolumbar fracture operation was involved. Wedge osteotomy was performed as indicated: posterior closing osteotomy correction in 5 patients and anterior open-posterior closing correction in 7 patients. Postoperatively, thoracolumbar x-rays were obtained to evaluate the correction of the kyphotic deformity, and visual analog scale (VAS) scores and the Frankel grading system were used to assess the clinical outcomes. All the patients were followed up, with an average period of 38.5 months (range 24-56 months). The kyphotic Cobb angle improved from 28.65 ± 11.41 degrees preoperatively to 1.14 ± 2.79 degrees postoperatively, a correction rate of 96.02%. There was 1 case of intraoperative dural tear, without complications such as death, neurological injury, or wound infection. According to the Frankel grading system, no patient suffered deteriorated neurological symptoms after surgery, and 2 of the 3 grade-D patients experienced significant relief after surgery. The mean VAS score of back pain improved from 4.41 ± 1.08 preoperatively to 1.5 ± 0.91 postoperatively at final follow-up, an improvement rate of 65.89%.
Surgical treatment of late post-traumatic kyphosis after failed thoracolumbar fracture operation can achieve good radiologic and clinical outcomes through kyphosis correction, decompression, and posterior stabilization.
NASA Technical Reports Server (NTRS)
Slater, P. N. (Principal Investigator)
1980-01-01
The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.
Predicting mountain lion activity using radiocollars equipped with mercury tip-sensors
Janis, Michael W.; Clark, Joseph D.; Johnson, Craig
1999-01-01
Radiotelemetry collars with tip-sensors have long been used to monitor wildlife activity. However, comparatively few researchers have tested the reliability of the technique on the species being studied. To evaluate the efficacy of using tip-sensors to assess mountain lion (Puma concolor) activity, we radiocollared 2 hand-reared mountain lions and simultaneously recorded their behavior and the associated telemetry signal characteristics. We noted both the number of pulse-rate changes and the percentage of time the transmitter emitted a fast pulse rate (i.e., head up) within sampling intervals ranging from 1-5 minutes. Based on 27 hours of observations, we were able to correctly distinguish between active and inactive behaviors >93% of the time using a logistic regression model. We present several models to predict activity of mountain lions; the selection of which to use would depend on study objectives and logistics. Our results indicate that field protocols that use only pulse-rate changes to indicate activity can lead to significant classification errors.
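The activity model described here is a plain logistic regression on per-interval telemetry features. A minimal sketch with synthetic stand-ins for the two features (number of pulse-rate changes and fraction of time at the fast pulse rate; the distributions below are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-interval features: [pulse-rate changes, fraction fast-rate].
n = 200
active = np.column_stack([rng.poisson(8, n), rng.uniform(0.5, 1.0, n)])
inactive = np.column_stack([rng.poisson(1, n), rng.uniform(0.0, 0.3, n)])
X = np.vstack([active, inactive])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression fitted by batch gradient ascent on the log-likelihood.
Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
    w += 0.01 * Xb.T @ (y - p) / len(y)

pred = (1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5)
accuracy = np.mean(pred == y)
```

With well-separated feature distributions like these, the fitted model recovers an active/inactive boundary comparable in spirit to the >93% agreement the study reports.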
Calès, Paul; Halfon, Philippe; Batisse, Dominique; Carrat, Fabrice; Perré, Philippe; Penaranda, Guillaume; Guyader, Dominique; d'Alteroche, Louis; Fouchard-Hubert, Isabelle; Michelet, Christian; Veillon, Pascal; Lambert, Jérôme; Weiss, Laurence; Salmon, Dominique; Cacoub, Patrice
2010-08-01
We compared 5 non-specific and 2 specific blood tests for liver fibrosis in HCV/HIV co-infection. Four hundred and sixty-seven patients were included in derivation (n=183) or validation (n=284) populations. Within these populations, the diagnostic target, significant fibrosis (Metavir F≥2), was found in 66% and 72% of the patients, respectively. Two new fibrosis tests, FibroMeter HICV and HICV test, were constructed in the derivation population. Unadjusted AUROCs in the derivation population were: APRI: 0.716, Fib-4: 0.722, Fibrotest: 0.778, Hepascore: 0.779, FibroMeter: 0.783, HICV test: 0.822, FibroMeter HICV: 0.828. AUROCs adjusted on classification and distribution of fibrosis stages in a reference population showed similar values in both populations. FibroMeter, FibroMeter HICV and HICV test had the highest correct classification rates in F0/1 and F3/4 (which account for high predictive values): 77-79% vs. 70-72% in the other tests (p=0.002). Reliable individual diagnosis based on predictive values ≥90% distinguished three test categories: poorly reliable: Fib-4 (2.4% of patients), APRI (8.9%); moderately reliable: Fibrotest (25.4%), FibroMeter (26.6%), Hepascore (30.2%); acceptably reliable: HICV test (40.2%), FibroMeter HICV (45.6%) (p<10⁻³ between tests). FibroMeter HICV classified all patients into four reliable diagnosis intervals (≤F1, F1±1, ≥F1, ≥F2) with an overall accuracy of 93% vs. 79% (p<10⁻³) for a binary diagnosis of significant fibrosis. Tests designed for HCV infections are less effective in HIV/HCV infections. A specific test, like FibroMeter HICV, was the most interesting test for diagnostic accuracy, correct classification profile, and a reliable diagnosis. With reliable diagnosis intervals, liver biopsy can therefore be avoided in all patients. Copyright 2010 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as detection method, and an industrial NIR scanner was applied and tested for large scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) Linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein was made based on NIR spectra and the estimated concentrations of protein were used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5–100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks where a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalence of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets to different product categories. Manual laborious grading can be avoided. Incidences of WB from different farms and flocks can be tracked and information can be used to understand and point out main causes for WB in the chicken production. This knowledge can be used to improve the production procedures and reduce today’s extensive occurrence of WB. PMID:28278170
Laser Raman detection for oral cancer based on a Gaussian process classification method
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-06-01
Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. Its incidence accounts for 80% of total oral cancer and has shown an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is always delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data, consisting of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal subjects, were collected by an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed, and the experimental results showed that the data obeyed a normal distribution. The distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal subjects and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal subjects could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed, and the remainder were diagnosed as having oral gland carcinoma. For the classification of oral gland carcinoma versus oral squamous cell carcinoma, the correct rate was 66.67% and the error rate was 33.33%. The total sensitivity was 80% and the specificity was 100%, with a Matthews correlation coefficient (MCC) of 0.447. Considering these numerical results, the technique shows impressive application prospects and clinical value.
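A common lightweight surrogate for full GP classification (which requires a non-Gaussian likelihood and approximate inference) is GP regression on ±1 labels followed by taking the sign of the posterior mean. The sketch below illustrates that idea on synthetic clusters standing in for spectral-peak features; it is a simplification under stated assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, length=1.0):
    """Squared-exponential (RBF) kernel matrix between two sets of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_classify(X_train, y_pm1, X_test, noise=1e-2):
    """GP 'label regression': treat the ±1 labels as noisy function values
    and classify by the sign of the posterior mean at the test points."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_pm1)
    return np.sign(rbf(X_test, X_train) @ alpha)

# Toy stand-ins for spectra: two Gaussian clusters in a 5-D "peak space".
X0 = rng.normal(0.0, 0.3, (30, 5))
X1 = rng.normal(1.0, 0.3, (30, 5))
X = np.vstack([X0, X1])
y = np.concatenate([-np.ones(30), np.ones(30)])
pred = gp_classify(X, y, X)
```

A full treatment would use a probit or logistic likelihood with, e.g., Laplace or expectation-propagation inference, which also yields calibrated class probabilities rather than just a sign.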
C-fuzzy variable-branch decision tree with storage and classification error rate constraints
NASA Astrophysics Data System (ADS)
Yang, Shiueng-Bien
2009-10-01
The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
Cognitive and motor function of neurologically impaired extremely low birth weight children.
Bernardo, Janine; Friedman, Harriet; Minich, Nori; Taylor, H Gerry; Wilson-Costello, Deanne; Hack, Maureen
2015-01-01
Rates of neurological impairment among extremely low birth weight children (ELBW [<1 kg]) have decreased since 2000; however, their functioning is unexamined. To compare motor and cognitive functioning of ELBW children with neurological impairment, including cerebral palsy and severe hypotonia/hypertonia, between two periods: 1990 to 1999 (n=83) and 2000 to 2005 (n=34). Measures of function at 20 months corrected age included the Mental and Psychomotor Developmental Indexes of the Bayley Scales of Infant Development and the Gross Motor Functional Classification System as primary outcomes and individual motor function items as secondary outcomes. Analysis failed to reveal significant differences for the primary outcomes, although during 2000 to 2005, sitting significantly improved in children with neurological impairment (P=0.003). Decreases in rates of neurological impairment among ELBW children have been accompanied by a suggestion of improved motor function, although cognitive function has not changed.
Automated classification of four types of developmental odontogenic cysts.
Frydenlund, A; Eramian, M; Daley, T
2014-04-01
Odontogenic cysts originate from remnants of the tooth forming epithelium in the jaws and gingiva. There are various kinds of such cysts with different biological behaviours that carry different patient risks and require different treatment plans. Types of odontogenic cysts can be distinguished by the properties of their epithelial layers in H&E stained samples. Herein we detail a set of image features for automatically distinguishing between four types of odontogenic cyst in digital micrographs and evaluate their effectiveness using two statistical classifiers - a support vector machine (SVM) and bagging with logistic regression as the base learner (BLR). Cyst type was correctly predicted from among four classes of odontogenic cysts between 83.8% and 92.3% of the time with an SVM and between 90 ± 0.92% and 95.4 ± 1.94% with a BLR. One particular cyst type was associated with the majority of misclassifications. Omission of this cyst type from the data set improved the classification rate for the remaining three cyst types to 96.2% for both SVM and BLR. Copyright © 2013 Elsevier Ltd. All rights reserved.
Oesophageal diverticula: principles of management and appraisal of classification.
Borrie, J; Wilson, R L
1980-01-01
In this paper we review a consecutive series of 50 oesophageal diverticula, appraise clinical features and methods of management, and suggest an improvement on the World Health Organization classification. The link between oesophageal diverticula and motor disorders, as assessed by oesophageal manometry, is stressed. It is necessary to correct the functional disorder as well as the diverticulum if it is causing symptoms. A revised classification could be as follows: congenital, single or multiple; acquired, single (cricopharyngeal, mid-oesophageal, epiphrenic, other) or multiple (for example, when cricopharyngeal and mid-oesophageal diverticula present together, or when there is intramural diverticulosis). PMID:6781091
Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya
2016-03-01
In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure in a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms in a homogeneous population with misclassification. In both cases, the authors evaluated performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show how using this objective function that accounts for correct classification is important for design when considering group testing under misclassification. We also present novel analytical results which characterize the optimal Dorfman (1943) design under misclassification. © 2015, The International Biometric Society.
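The trade-off the authors formalize can be computed directly for the classical Dorfman two-stage design: with imperfect sensitivity and specificity, both the expected number of tests per person and the expected probability of correct classification depend on the pool size. A sketch under the standard independence assumptions (identical items, no dilution effect; this is an illustration of the quantities involved, not the paper's exact objective function):

```python
def dorfman(p, k, se=0.95, sp=0.98):
    """Per-person expected number of tests and expected probability of
    correct classification for Dorfman two-stage group testing with pool
    size k, prevalence p, test sensitivity se and specificity sp."""
    q_all_neg = (1 - p) ** k                        # pool truly all-negative
    pool_reads_pos = se * (1 - q_all_neg) + (1 - sp) * q_all_neg
    tests = 1.0 / k + pool_reads_pos                # pool test + retests

    # An infected member is correct iff its pool reads + and its retest reads +.
    correct_pos = se * se
    # A healthy member's pool is truly positive iff one of its k-1 partners
    # is infected; the member is correct if the pool reads - or its retest -.
    other_pos = 1 - (1 - p) ** (k - 1)
    partner_pool_pos = se * other_pos + (1 - sp) * (1 - other_pos)
    correct_neg = (1 - partner_pool_pos) + partner_pool_pos * sp
    correct = p * correct_pos + (1 - p) * correct_neg
    return tests, correct
```

Minimizing `tests` alone ignores that larger pools change `correct` as well, which is exactly the article's point: a design criterion should weigh both quantities.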
Use of genetic algorithm for the selection of EEG features
NASA Astrophysics Data System (ADS)
Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.
2015-09-01
Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
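The GA-as-combinatorial-selector idea described above can be sketched with a toy fitness function standing in for the FCM classification accuracy. In a real run the fitness would wrap the actual classifier evaluated on the selected electrodes; all names, sizes, and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

N_FEATURES = 47                  # e.g. one bit per electrode, as in the study
informative = set(rng.choice(N_FEATURES, 8, replace=False))

def fitness(mask):
    """Toy surrogate for classifier accuracy: reward informative features,
    mildly penalise extras. A real GA would evaluate the classifier here."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.2 * len(chosen - informative)

def evolve(pop_size=40, generations=80, p_mut=0.02):
    pop = rng.random((pop_size, N_FEATURES)) < 0.5   # random bitstrings
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[scores.argmax()].copy()          # elitism
        # Binary tournament selection.
        a = rng.integers(0, pop_size, pop_size)
        b = rng.integers(0, pop_size, pop_size)
        winners = np.where((scores[a] >= scores[b])[:, None], pop[a], pop[b])
        # One-point crossover on consecutive pairs.
        children = winners.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, N_FEATURES)
            children[i, cut:] = winners[i + 1, cut:]
            children[i + 1, cut:] = winners[i, cut:]
        # Bit-flip mutation, then reinsert the elite unchanged.
        pop = children ^ (rng.random(children.shape) < p_mut)
        pop[0] = elite
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best, best_score = evolve()
```

The same loop structure applies when the fitness is expensive (an FCM run per candidate, as in the study); only `fitness` changes.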
Tapper, Elliot B; Hunink, M G Myriam; Afdhal, Nezam H; Lai, Michelle; Sengupta, Neil
2016-01-01
The complications of Nonalcoholic Fatty Liver Disease (NAFLD) are dependent on the presence of advanced fibrosis. Given the high prevalence of NAFLD in the US, the optimal evaluation of NAFLD likely involves triage by a primary care physician (PCP) with advanced disease managed by gastroenterologists. We compared the cost-effectiveness of fibrosis risk-assessment strategies in a cohort of 10,000 simulated American patients with NAFLD performed in either PCP or referral clinics using a decision analytical microsimulation state-transition model. The strategies included use of vibration-controlled transient elastography (VCTE), the NAFLD fibrosis score (NFS), combination testing with NFS and VCTE, and liver biopsy (usual care by a specialist only). NFS and VCTE performance was obtained from a prospective cohort of 164 patients with NAFLD. Outcomes included cost per quality adjusted life year (QALY) and correct classification of fibrosis. Risk-stratification by the PCP using the NFS alone costs $5,985 per QALY while usual care costs $7,229/QALY. In the microsimulation, at a willingness-to-pay threshold of $100,000, the NFS alone in PCP clinic was the most cost-effective strategy in 94.2% of samples, followed by combination NFS/VCTE in the PCP clinic (5.6%) and usual care in 0.2%. The NFS based strategies yield the best biopsy-correct classification ratios (3.5) while the NFS/VCTE and usual care strategies yield more correct-classifications of advanced fibrosis at the cost of 3 and 37 additional biopsies per classification. Risk-stratification of patients with NAFLD in the primary care clinic is a cost-effective strategy that should be formally explored in clinical practice.
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Yamamoto, Tomonori
2017-10-01
Bathymetry in shallow water, especially shallower than 15 m, is important for environmental monitoring and national defense. Because the depth of shallow water is changed by sediment deposition and ocean waves, periodic monitoring of shore areas is needed. Satellite images are well suited to wide-area, repeated monitoring at sea. Sea bottom terrain models using remote sensing data have been developed; these methods are based on the radiative transfer model of solar irradiance, which is affected by the atmosphere, the water, and the sea bottom. We applied this general method of sea depth extraction to WorldView-2 satellite imagery, which has very fine spatial resolution (50 cm/pixel) and eight bands at visible to near-infrared wavelengths. From high-spatial-resolution satellite images, it may be possible to derive detailed terrain models of coral reef and rock areas, which offer important information for amphibious landing. In addition, the WorldView-2 sensor has a band near the ultraviolet wavelength that is transmitted through water. On the other hand, a previous study showed that the estimation error from satellite imagery was related to sea bottom materials such as sand, coral reef, sea algae, and rocks. Therefore, in this study, we focused on sea bottom materials and tried to improve the depth estimation accuracy. First, we classified the sea bottom materials by the SVM method, using depth data acquired by multi-beam sonar as supervised data. Then correction values in the depth estimation equation were calculated by applying the classification results. As a result, the classification accuracy of sea bottom materials was 93%, and the depth estimation error using the correction by the classification result was within 1.2 m.
Statistical classification techniques for engineering and climatic data samples
NASA Technical Reports Server (NTRS)
Temple, E. C.; Shipman, J. R.
1981-01-01
Fisher's sample linear discriminant function is modified through an appropriate alteration of the common sample variance-covariance matrix. The alteration consists of adding nonnegative values to the eigenvalues of the sample variance-covariance matrix. The desired result of this modification is to increase the number of correct classifications by the new linear discriminant function over Fisher's function. This study is limited to the two-group discriminant problem.
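The modification described, adding nonnegative values to the eigenvalues of the pooled sample variance-covariance matrix before forming the discriminant, can be sketched as follows. The data are synthetic, and `ridge` is an illustrative name for the added eigenvalue offset; `ridge=0` recovers Fisher's original function:

```python
import numpy as np

rng = np.random.default_rng(4)

def fisher_weights(X1, X2, ridge=0.0):
    """Fisher direction w = S^{-1}(m1 - m2), with the nonnegative `ridge`
    added to the eigenvalues of the pooled sample covariance S."""
    m1, m2 = X1.mean(0), X2.mean(0)
    S = (np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)) / (
        len(X1) + len(X2) - 2)
    vals, vecs = np.linalg.eigh(S)
    S_mod = (vecs * (vals + ridge)) @ vecs.T   # shift every eigenvalue
    return np.linalg.solve(S_mod, m1 - m2)

def classify(x, w, X1, X2):
    """Assign x to group 1 or 2 by the midpoint threshold on w'x."""
    thresh = w @ (X1.mean(0) + X2.mean(0)) / 2.0
    return 1 if w @ x > thresh else 2

X1 = rng.normal([0.0, 0.0, 0.0], 1.0, (40, 3))
X2 = rng.normal([2.0, 2.0, 2.0], 1.0, (40, 3))
w = fisher_weights(X1, X2, ridge=0.5)
acc = np.mean([classify(x, w, X1, X2) == g
               for g, Xg in ((1, X1), (2, X2)) for x in Xg])
```

Shifting all eigenvalues equally is the same as adding `ridge * I` to S, a shrinkage that stabilizes the inverse when the sample covariance is ill-conditioned.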
[From new genetic and histological classifications to direct treatment].
Compérat, Eva; Furudoï, Adeline; Varinot, Justine; Rioux-Leclerq, Nathalie
2016-08-01
The most important criterion for optimal cancer treatment is a correct classification of the tumour. During the last three years, several very important advances have been made toward a better definition of urothelial carcinoma (UC), especially from a molecular point of view. We are beginning to have a global understanding of UC, although many details are still not completely understood. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
The EB factory project. II. Validation with the Kepler field in preparation for K2 and TESS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parvizi, Mahmoud; Paegert, Martin; Stassun, Keivan G., E-mail: mahmoud.parvizi@vanderbilt.edu
Large repositories of high precision light curve data, such as the Kepler data set, provide the opportunity to identify astrophysically important eclipsing binary (EB) systems in large quantities. However, the rate of classical “by eye” human analysis restricts complete and efficient mining of EBs from these data using classical techniques. To prepare for mining EBs from the upcoming K2 mission as well as other current missions, we developed an automated end-to-end computational pipeline, the Eclipsing Binary Factory (EBF), that automatically identifies EBs and classifies them into morphological types. The EBF has been previously tested on ground-based light curves. To assess the performance of the EBF in the context of space-based data, we apply the EBF to the full set of light curves in the Kepler “Q3” Data Release. We compare the EBs identified from this automated approach against the human generated Kepler EB Catalog of ∼2600 EBs. When we require EB classification with ⩾90% confidence, we find that the EBF correctly identifies and classifies eclipsing contact (EC), eclipsing semi-detached (ESD), and eclipsing detached (ED) systems with a false positive rate of only 4%, 4%, and 8%, while complete to 64%, 46%, and 32%, respectively. When classification confidence is relaxed, the EBF identifies and classifies ECs, ESDs, and EDs with a slightly higher false positive rate of 6%, 16%, and 8%, while much more complete to 86%, 74%, and 62%, respectively. Through our processing of the entire Kepler “Q3” data set, we also identify 68 new candidate EBs that may have been missed by the human generated Kepler EB Catalog. We discuss the EBF's potential application to light curve classification for periodic variable stars more generally for current and upcoming surveys like K2 and the Transiting Exoplanet Survey Satellite.
Moga, Tudor Voicu; Popescu, Alina; Sporea, Ioan; Danila, Mirela; David, Ciprian; Gui, Vasile; Iacob, Nicoleta; Miclaus, Gratian; Sirli, Roxana
2017-08-23
Contrast enhanced ultrasound (CEUS) improved the characterization of focal liver lesions (FLLs), but is an operator-dependent method. The goal of this paper was to test a computer assisted diagnosis (CAD) prototype and to assess its benefit in assisting a beginner in the evaluation of FLLs. Our cohort included 97 good quality CEUS videos [34% hepatocellular carcinomas (HCC), 12.3% hypervascular metastases (HiperM), 11.3% hypovascular metastases (HipoM), 24.7% hemangiomas (HMG), 17.5% focal nodular hyperplasia (FNH)] that were used to develop a CAD prototype based on an algorithm that tested a binary decision based classifier. Two young medical doctors (1 year of CEUS experience), two experts and the CAD prototype re-evaluated 50 FLL CEUS videos (diagnosis of benign vs. malignant), first blinded to clinical data, in order to evaluate the beginner vs. expert diagnostic gap. The CAD classifier achieved a 75.2% overall (benign vs. malignant) correct classification rate. The overall classification rates for the evaluators, before and after access to clinical data, were: first beginner, 78% and 94%; second beginner, 82% and 96%; first expert, 94% and 100%; second expert, 96% and 98%. For both beginners, the malignant vs. benign diagnosis significantly improved after knowing the clinical data (p=0.005; p=0.008). The expert was better than the beginner (p=0.04) and better than the CAD (p=0.001). A beginner assisted by the CAD can reach the expert diagnosis. The lesions most frequently misdiagnosed at CEUS were FNH and HCC. The CAD prototype is a useful reference tool for a beginner operator and can be further developed to assist diagnosis. In order to increase the classification rate, the CAD system for FLLs in CEUS must integrate the clinical data.
Computer-aided diagnosis of contrast-enhanced spectral mammography: A feasibility study.
Patel, Bhavika K; Ranjbar, Sara; Wu, Teresa; Pockaj, Barbara A; Li, Jing; Zhang, Nan; Lobbes, Mark; Zhang, Bin; Mitchell, J Ross
2018-01-01
To evaluate whether the use of a computer-aided diagnosis-contrast-enhanced spectral mammography (CAD-CESM) tool can further increase the diagnostic performance of CESM compared with that of experienced radiologists. This IRB-approved retrospective study analyzed 50 lesions described on CESM from August 2014 to December 2015. Histopathologic analyses, used as the criterion standard, revealed 24 benign and 26 malignant lesions. An expert breast radiologist manually outlined lesion boundaries on the different views. A set of morphologic and textural features were then extracted from the low-energy and recombined images. Machine-learning algorithms with feature selection were used along with statistical analysis to reduce, select, and combine features. Selected features were then used to construct a predictive model using a support vector machine (SVM) classification method in a leave-one-out cross-validation approach. The classification performance was compared against the diagnostic predictions of 2 breast radiologists with access to the same CESM cases. Based on the SVM classification, CAD-CESM correctly identified 45 of 50 lesions in the cohort, resulting in an overall accuracy of 90%. The detection rate for the malignant group was 88% (3 false-negative cases) and 92% for the benign group (2 false-positive cases). Compared with the model, radiologist 1 had an overall accuracy of 78% and a detection rate of 92% (2 false-negative cases) for the malignant group and 62% (10 false-positive cases) for the benign group. Radiologist 2 had an overall accuracy of 86% and a detection rate of 100% for the malignant group and 71% (8 false-positive cases) for the benign group. The results of our feasibility study suggest that a CAD-CESM tool can provide complementary information to radiologists, mainly by reducing the number of false-positive findings. Copyright © 2017 Elsevier B.V. All rights reserved.
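The SVM-with-leave-one-out evaluation described above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the feature matrix is synthetic stand-in data (the actual morphologic and textural CESM features are not reproduced), with the same 24 benign / 26 malignant split.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the 50-lesion feature matrix (illustrative only).
rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(24, 5))
X_malignant = rng.normal(1.5, 1.0, size=(26, 5))
X = np.vstack([X_benign, X_malignant])
y = np.array([0] * 24 + [1] * 26)  # 0 = benign, 1 = malignant

# Leave-one-out cross-validation: each lesion is held out exactly once.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
accuracy = scores.mean()
```

Each entry of `scores` is 0 or 1 (the held-out case was classified wrongly or correctly), so their mean is the overall leave-one-out accuracy.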
The Eb Factory Project. Ii. Validation With the Kepler Field in Preparation for K2 and Tess
NASA Astrophysics Data System (ADS)
Parvizi, Mahmoud; Paegert, Martin; Stassun, Keivan G.
2014-12-01
Large repositories of high precision light curve data, such as the Kepler data set, provide the opportunity to identify astrophysically important eclipsing binary (EB) systems in large quantities. However, the rate of classical “by eye” human analysis restricts complete and efficient mining of EBs from these data using classical techniques. To prepare for mining EBs from the upcoming K2 mission as well as other current missions, we developed an automated end-to-end computational pipeline—the Eclipsing Binary Factory (EBF)—that automatically identifies EBs and classifies them into morphological types. The EBF has been previously tested on ground-based light curves. To assess the performance of the EBF in the context of space-based data, we apply the EBF to the full set of light curves in the Kepler “Q3” Data Release. We compare the EBs identified from this automated approach against the human generated Kepler EB Catalog of ∼2600 EBs. When we require EB classification with ≥90% confidence, we find that the EBF correctly identifies and classifies eclipsing contact (EC), eclipsing semi-detached (ESD), and eclipsing detached (ED) systems with a false positive rate of only 4%, 4%, and 8%, while complete to 64%, 46%, and 32%, respectively. When classification confidence is relaxed, the EBF identifies and classifies ECs, ESDs, and EDs with a slightly higher false positive rate of 6%, 16%, and 8%, while much more complete to 86%, 74%, and 62%, respectively. Through our processing of the entire Kepler “Q3” data set, we also identify 68 new candidate EBs that may have been missed by the human generated Kepler EB Catalog. We discuss the EBF's potential application to light curve classification for periodic variable stars more generally for current and upcoming surveys like K2 and the Transiting Exoplanet Survey Satellite.
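The false positive rate and completeness quoted above are simple functions of the candidate list, the confidence cut, and the reference catalog. A minimal sketch (all candidate tuples and catalog counts below are invented, not Kepler data):

```python
# Each candidate: (predicted_class, confidence, true_class). Illustrative values.
candidates = [
    ("EC", 0.95, "EC"), ("EC", 0.97, "EC"), ("EC", 0.92, "ESD"),
    ("EC", 0.80, "EC"), ("ESD", 0.91, "ESD"), ("ED", 0.99, "ED"),
    ("ED", 0.60, "EC"), ("ESD", 0.94, "notEB"),
]

def rates(candidates, catalog_counts, cls, threshold):
    """False-positive rate and completeness for one class at a confidence cut."""
    accepted = [c for c in candidates if c[0] == cls and c[1] >= threshold]
    false_pos = sum(1 for c in accepted if c[2] != cls)
    true_pos = sum(1 for c in accepted if c[2] == cls)
    fp_rate = false_pos / len(accepted) if accepted else 0.0
    completeness = true_pos / catalog_counts[cls]
    return fp_rate, completeness

catalog = {"EC": 4, "ESD": 2, "ED": 1}  # true counts in the reference catalog
fp, comp = rates(candidates, catalog, "EC", 0.90)
```

Relaxing `threshold` admits more candidates, which is exactly the trade-off the abstract reports: completeness rises while the false positive rate creeps up.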
Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, Kazunori, E-mail: kazokada@sfsu.edu; Rysavy, Steven; Flores, Arturo
Purpose: This paper proposes a novel application of computer-aided diagnosis (CAD) to an everyday clinical dental challenge: the noninvasive differential diagnosis of periapical lesions between periapical cysts and granulomas. A histological biopsy is the most reliable method currently available for this differential diagnosis; however, this invasive procedure prevents the lesions from healing noninvasively despite a report that they may heal without surgical treatment. A CAD using cone-beam computed tomography (CBCT) offers an alternative noninvasive diagnostic tool which helps to avoid potentially unnecessary surgery and to investigate the unknown healing process and rate for the lesions. Methods: The proposed semiautomatic solution combines graph-based random walks segmentation with machine learning-based boosted classifiers and offers a robust clinical tool with minimal user interaction. As part of this CAD framework, the authors provide two novel technical contributions: (1) probabilistic extension of the random walks segmentation with likelihood ratio test and (2) LDA-AdaBoost: a new integration of weighted linear discriminant analysis to AdaBoost. Results: A dataset of 28 CBCT scans is used to validate the approach and compare it with other popular segmentation and classification methods. The results show the effectiveness of the proposed method, with a 94.1% correct classification rate and a 17.6% performance improvement over Simon's state-of-the-art method. The authors also compare classification performances with two independent ground-truth sets from the histopathology and CBCT diagnoses provided by endodontic experts. Conclusions: Experimental results of the authors show that the proposed CAD system behaves in clearer agreement with the CBCT ground-truth than with histopathology, supporting Simon's conjecture that CBCT diagnosis can be as accurate as histopathology for differentiating the periapical lesions.
Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans.
Okada, Kazunori; Rysavy, Steven; Flores, Arturo; Linguraru, Marius George
2015-04-01
This paper proposes a novel application of computer-aided diagnosis (CAD) to an everyday clinical dental challenge: the noninvasive differential diagnosis of periapical lesions between periapical cysts and granulomas. A histological biopsy is the most reliable method currently available for this differential diagnosis; however, this invasive procedure prevents the lesions from healing noninvasively despite a report that they may heal without surgical treatment. A CAD using cone-beam computed tomography (CBCT) offers an alternative noninvasive diagnostic tool which helps to avoid potentially unnecessary surgery and to investigate the unknown healing process and rate for the lesions. The proposed semiautomatic solution combines graph-based random walks segmentation with machine learning-based boosted classifiers and offers a robust clinical tool with minimal user interaction. As part of this CAD framework, the authors provide two novel technical contributions: (1) probabilistic extension of the random walks segmentation with likelihood ratio test and (2) LDA-AdaBoost: a new integration of weighted linear discriminant analysis to AdaBoost. A dataset of 28 CBCT scans is used to validate the approach and compare it with other popular segmentation and classification methods. The results show the effectiveness of the proposed method, with a 94.1% correct classification rate and a 17.6% performance improvement over Simon's state-of-the-art method. The authors also compare classification performances with two independent ground-truth sets from the histopathology and CBCT diagnoses provided by endodontic experts. Experimental results of the authors show that the proposed CAD system behaves in clearer agreement with the CBCT ground-truth than with histopathology, supporting Simon's conjecture that CBCT diagnosis can be as accurate as histopathology for differentiating the periapical lesions.
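The LDA-AdaBoost contribution above integrates weighted linear discriminant analysis into the AdaBoost loop. The following is a rough sketch of that idea, not the authors' implementation: since scikit-learn's LDA does not accept per-sample weights, the boosting weights are emulated here by weighted resampling, and the data are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_adaboost_fit(X, y, n_rounds=10, seed=0):
    """AdaBoost with LDA weak learners (labels must be 0/1); per-sample
    weights are emulated by weighted resampling of the training set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Mix in a uniform component so both classes stay represented.
        p = 0.5 * w + 0.5 / n
        idx = rng.choice(n, size=n, replace=True, p=p / p.sum())
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = float(np.sum(w * (pred != y)))
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)       # learner weight
        w = w * np.exp(alpha * np.where(pred != y, 1.0, -1.0))
        w = w / w.sum()                             # re-emphasize mistakes
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas

def lda_adaboost_predict(learners, alphas, X):
    # Weighted vote with labels mapped to +/-1, thresholded at zero.
    votes = sum(a * np.where(c.predict(X) == 1, 1.0, -1.0)
                for c, a in zip(learners, alphas))
    return (votes > 0).astype(int)

# Synthetic two-class data standing in for lesion features (illustrative).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
learners, alphas = lda_adaboost_fit(X, y)
acc = float((lda_adaboost_predict(learners, alphas, X) == y).mean())
```

The resampling trick is a standard workaround when a base learner lacks native sample-weight support; the paper's weighted-LDA formulation is more direct.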
Multi-site evaluation of IKONOS data for classification of tropical coral reef environments
Andrefouet, S.; Kramer, Philip; Torres-Pulliza, D.; Joyce, K.E.; Hochberg, E.J.; Garza-Perez, R.; Mumby, P.J.; Riegl, Bernhard; Yamano, H.; White, W.H.; Zubia, M.; Brock, J.C.; Phinn, S.R.; Naseer, A.; Hatcher, B.G.; Muller-Karger, F. E.
2003-01-01
Ten IKONOS images of different coral reef sites distributed around the world were processed to assess the potential of 4-m resolution multispectral data for coral reef habitat mapping. Complexity of reef environments, established by field observation, ranged from 3 to 15 classes of benthic habitats containing various combinations of sediments, carbonate pavement, seagrass, algae, and corals in different geomorphologic zones (forereef, lagoon, patch reef, reef flats). Processing included corrections for sea surface roughness and bathymetry, unsupervised or supervised classification, and accuracy assessment based on ground-truth data. IKONOS classification results were compared with classified Landsat 7 imagery for simple to moderate complexity of reef habitats (5-11 classes). For both sensors, overall accuracies of the classifications show a general linear trend of decreasing accuracy with increasing habitat complexity. The IKONOS sensor performed better, with a 15-20% improvement in accuracy compared to Landsat. For IKONOS, overall accuracy was 77% for 4-5 classes, 71% for 7-8 classes, 65% in 9-11 classes, and 53% for more than 13 classes. The Landsat classification accuracy was systematically lower, with an average of 56% for 5-10 classes. Within this general trend, inter-site comparisons and specificities demonstrate the benefits of different approaches. Pre-segmentation of the different geomorphologic zones and depth correction provided different advantages in different environments. Our results help guide scientists and managers in applying IKONOS-class data for coral reef mapping applications. © 2003 Elsevier Inc. All rights reserved.
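The linear trend of accuracy versus habitat complexity can be illustrated with a least-squares fit to the IKONOS figures quoted above. Using the class-count midpoints as x-values is an assumption made here purely for illustration:

```python
import numpy as np

# Overall IKONOS accuracies from the abstract, against class-count midpoints
# (the midpoints 4.5, 7.5, 10, 13 are an illustrative choice, not the paper's).
n_classes = np.array([4.5, 7.5, 10.0, 13.0])
accuracy = np.array([77.0, 71.0, 65.0, 53.0])

# Degree-1 least-squares fit: accuracy ≈ slope * n_classes + intercept.
slope, intercept = np.polyfit(n_classes, accuracy, 1)
```

The negative slope (roughly -2.8 accuracy points per additional habitat class under these assumptions) is the "decreasing accuracy with increasing habitat complexity" trend the abstract describes.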
Liu, Fei; Wang, Yuan-zhong; Yang, Chun-yan; Jin, Hang
2015-01-01
The genuineness and producing area of Panax notoginseng were studied using infrared spectroscopy combined with discriminant analysis. The infrared spectra of 136 taproots of P. notoginseng from 13 planting points in 11 counties were collected, and the second derivative spectra were calculated with Omnic 8.0 software. The infrared spectra and their second derivative spectra in the range 1800-700 cm-1 were used to build models by stepwise discriminant analysis in order to discriminate genuine P. notoginseng. The model built on the second derivative spectra showed the better recognition effect for genuineness: the correct back-classification rate reached 100%, and the prediction accuracy was 93.4%. The stability of the model was tested by cross validation, and the method was subjected to extrapolation validation. The second derivative spectra combined with the same discriminant analysis method were then used to distinguish the producing area of P. notoginseng. Comparing models built on different spectral ranges and different numbers of samples showed that when the model was built by taking 8 samples from each planting point as the training set and using the spectrum in the range 1500-1200 cm-1, the recognition effect was better, with a correct back-classification rate of 99.0% and a prediction accuracy of 76.5%. The results indicated that infrared spectroscopy combined with discriminant analysis shows a good recognition effect for the genuineness of P. notoginseng. The method may become a promising new approach for identifying genuine P. notoginseng in practice. The method could also recognize the producing area of P. notoginseng to some extent and could offer a new approach for identification of the producing area of P. notoginseng.
Sugarman, J R; Soderberg, R; Gordon, J E; Rivara, F P
1993-01-01
OBJECTIVES. We assessed the extent to which injury rates among American Indians in Oregon are underestimated owing to misclassification of race in a surveillance system. METHODS. The Oregon Injury Registry, a population-based surveillance system, was linked with the Indian Health Service patient registration file from Oregon, and injury rates for American Indians were calculated before and after correcting for racial misclassification. RESULTS. In 1989 and 1990, 301 persons in the Oregon registry were coded as American Indian. An additional 89 injured persons who were coded as a race other than American Indian in the registry were listed as American Indian in the Indian Health Service records. The age-adjusted annual injury rate for health service-registered American Indians was 6.9/1000, 68% higher than the rate calculated before data linkage. American Indian ancestry, female sex, and residence in metropolitan counties were associated with a higher likelihood of concordant racial classification in both data sets. CONCLUSION. Injury rates among American Indians in an Oregon surveillance system are substantially underestimated owing to racial misclassification. Linkage of disease registries and vital records with Indian Health Service records in other states may improve health-related data regarding American Indians. PMID:8484448
Minimalist approach to the classification of symmetry protected topological phases
NASA Astrophysics Data System (ADS)
Xiong, Zhaoxi
A number of proposals with differing predictions (e.g. group cohomology, cobordisms, group supercohomology, spin cobordisms, etc.) have been made for the classification of symmetry protected topological (SPT) phases. Here we treat various proposals on equal footing and present rigorous, general results that are independent of which proposal is correct. We do so by formulating a minimalist Generalized Cohomology Hypothesis, which is satisfied by existing proposals and captures essential aspects of SPT classification. From this Hypothesis alone, formulas relating classifications in different dimensions and/or protected by different symmetry groups are derived. Our formalism is expected to work for fermionic as well as bosonic phases, Floquet as well as stationary phases, and spatial as well as on-site symmetries.
Classifying seismic waveforms from scratch: a case study in the alpine environment
NASA Astrophysics Data System (ADS)
Hammer, C.; Ohrnberger, M.; Fäh, D.
2013-01-01
Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task, and applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for volcanic task force actions makes it possible to learn classifier properties from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast setup of a well-working classification system.
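The one-example-plus-background idea can be caricatured with a likelihood-ratio detector over simple Gaussian feature models. This is a deliberate simplification of the paper's hidden Markov models, and every number below is invented: the background model is fitted to "hours of recording", the event model to a single example plus jittered copies.

```python
import math

def fit_gaussian(samples):
    """Per-feature mean and variance of a list of feature vectors."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [max(sum((s[i] - mean[i]) ** 2 for s in samples) / n, 1e-6)
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Diagonal-Gaussian log-likelihood of feature vector x."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# Background features from continuous recording; event features from one
# waveform example plus jittered copies (all values illustrative).
background = [[0.1, 0.2], [0.0, 0.1], [-0.1, 0.15], [0.05, 0.25]]
event_train = [[2.0, 1.5], [2.1, 1.4], [1.9, 1.6]]

bg_model = fit_gaussian(background)
ev_model = fit_gaussian(event_train)

def classify(x):
    """Flag x as an event when the event model is more likely than background."""
    if log_likelihood(x, *ev_model) > log_likelihood(x, *bg_model):
        return "event"
    return "background"
```

An HMM replaces each static Gaussian with a sequence of states, which is what lets the real system absorb the temporal variability of seismic waveforms.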
Del Rio, M; Mollevi, C; Bibeau, F; Vie, N; Selves, J; Emile, J-F; Roger, P; Gongora, C; Robert, J; Tubiana-Mathieu, N; Ychou, M; Martineau, P
2017-05-01
Currently, metastatic colorectal cancer is treated as a homogeneous disease and only RAS mutational status has been approved as a negative predictive factor in patients treated with cetuximab. The aim of this study was to evaluate if recently identified molecular subtypes of colon cancer are associated with response of metastatic patients to first-line therapy. We collected and analysed 143 samples of human colorectal tumours with complete clinical annotations, including the response to treatment. Gene expression profiling was used to classify patients in three to six classes using four different molecular classifications. Correlations between molecular subtypes, response to treatment, progression-free and overall survival were analysed. We first demonstrated that the four previously described molecular classifications of colorectal cancer defined in non-metastatic patients also correctly classify stage IV patients. One of the classifications is strongly associated with response to FOLFIRI (P=0.003), but not to FOLFOX (P=0.911) and FOLFIRI + Bevacizumab (P=0.190). In particular, we identify a molecular subtype representing 28% of the patients that shows an exceptionally high response rate to FOLFIRI (87.5%). These patients have a two-fold longer overall survival (40.1 months) when treated with FOLFIRI, as first-line regimen, instead of FOLFOX (18.6 months). Our results demonstrate the interest of molecular classifications to develop tailored therapies for patients with metastatic colorectal cancer and a strong impact of the first-line regimen on the overall survival of some patients. This however remains to be confirmed in a large prospective clinical trial. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mapping and Change Analysis in Mangrove Forest by Using Landsat Imagery
NASA Astrophysics Data System (ADS)
Dan, T. T.; Chen, C. F.; Chiang, S. H.; Ogawa, S.
2016-06-01
Mangrove grows in tropical and subtropical regions and provides valuable ecosystem services for local people. Mangrove worldwide has been lost at a rapid rate, so monitoring its spatiotemporal distribution is critical for natural resource management. The objectives of this research were: (i) to map the current extent of mangrove in West and Central Africa and in the Sundarbans delta, and (ii) to identify change in mangrove cover using Landsat data. The data were processed through four main steps: (1) data pre-processing, including atmospheric correction and image normalization, (2) image classification using a supervised classification approach, (3) accuracy assessment of the classification results, and (4) change detection analysis. Validation was made by comparing the classification results with ground reference data, which yielded satisfactory agreement, with an overall accuracy of 84.1% and a Kappa coefficient of 0.74 in West and Central Africa, and 83.0% and 0.73 in the Sundarbans, respectively. The results show that mangrove areas have changed significantly. In West and Central Africa, mangrove loss from 1988 to 2014 was approximately 16.9%, and only 2.5% was recovered or newly planted over the same period, while the overall extent of mangrove in the Sundarbans increased by approximately 900 km2. Mangrove declined due to deforestation and natural catastrophes, while rehabilitation programs account for the recovered areas. The overall efforts in this study demonstrate the effectiveness of the proposed method for investigating spatiotemporal changes of mangrove, and the results could provide planners with invaluable quantitative information for sustainable management of mangrove ecosystems in these regions.
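Overall accuracy and the Kappa coefficient quoted above are standard confusion-matrix statistics; a minimal sketch (the 2-class matrix below is invented, not the study's data):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: mapped)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total                       # agreement
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    return (observed - expected) / (1 - expected)

# Illustrative mangrove vs. non-mangrove matrix.
cm = [[80, 10],
      [ 6, 54]]
overall_accuracy = np.trace(np.asarray(cm)) / np.sum(cm)
k = kappa(cm)
```

Kappa discounts the agreement expected by chance from the marginal class frequencies, which is why it is routinely reported alongside overall accuracy in land-cover mapping.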
An electronic nose for reliable measurement and correct classification of beverages.
Mamat, Mazlina; Samad, Salina Abdul; Hannan, Mahammad A
2011-01-01
This paper reports the design of an electronic nose (E-nose) prototype for reliable measurement and correct classification of beverages. The prototype was developed and fabricated in the laboratory using commercially available metal oxide gas sensors and a temperature sensor. The repeatability, reproducibility and discriminative ability of the developed E-nose prototype were tested on odors emanating from different beverages such as blackcurrant juice, mango juice and orange juice, respectively. Repeated measurements of three beverages showed very high correlation (r > 0.97) between the same beverages to verify the repeatability. The prototype also produced highly correlated patterns (r > 0.97) in the measurement of beverages using different sensor batches to verify its reproducibility. The E-nose prototype also possessed good discriminative ability whereby it was able to produce different patterns for different beverages, different milk heat treatments (ultra high temperature, pasteurization) and fresh and spoiled milks. The discriminative ability of the E-nose was evaluated using Principal Component Analysis and a Multi Layer Perception Neural Network, with both methods showing good classification results.
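The r > 0.97 repeatability criterion above is a plain Pearson correlation between repeated sensor-array response patterns. A minimal sketch with invented sensor values (one value per gas sensor):

```python
import numpy as np

# Two repeated response patterns for the same beverage (illustrative values).
measurement_1 = np.array([0.82, 0.44, 0.91, 0.30, 0.67, 0.55])
measurement_2 = np.array([0.80, 0.47, 0.89, 0.33, 0.64, 0.58])

# Pearson correlation between the two patterns.
r = np.corrcoef(measurement_1, measurement_2)[0, 1]
repeatable = r > 0.97  # the paper's repeatability criterion
```

The same computation, applied across sensor batches rather than repeated runs, gives the reproducibility check reported in the abstract.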
An Electronic Nose for Reliable Measurement and Correct Classification of Beverages
Mamat, Mazlina; Samad, Salina Abdul; Hannan, Mahammad A.
2011-01-01
This paper reports the design of an electronic nose (E-nose) prototype for reliable measurement and correct classification of beverages. The prototype was developed and fabricated in the laboratory using commercially available metal oxide gas sensors and a temperature sensor. The repeatability, reproducibility and discriminative ability of the developed E-nose prototype were tested on odors emanating from different beverages such as blackcurrant juice, mango juice and orange juice, respectively. Repeated measurements of three beverages showed very high correlation (r > 0.97) between the same beverages to verify the repeatability. The prototype also produced highly correlated patterns (r > 0.97) in the measurement of beverages using different sensor batches to verify its reproducibility. The E-nose prototype also possessed good discriminative ability whereby it was able to produce different patterns for different beverages, different milk heat treatments (ultra high temperature, pasteurization) and fresh and spoiled milks. The discriminative ability of the E-nose was evaluated using Principal Component Analysis and a Multi Layer Perception Neural Network, with both methods showing good classification results. PMID:22163964
Document image improvement for OCR as a classification problem
NASA Astrophysics Data System (ADS)
Summers, Kristen M.
2003-01-01
In support of the goal of automatically selecting image enhancement methods that improve the accuracy of OCR, we treat the decision of whether to apply each of a set of methods as a supervised classification problem for machine learning. We characterize each image according to a combination of two sets of measures: a set intended to reflect the degree of particular types of noise present in documents in a single font of Roman or similar script, and a more general set based on connected component statistics. We consider several potential methods of image improvement, each of which constitutes its own 2-class classification problem, according to whether transforming the image with this method improves the accuracy of OCR. In our experiments, the results varied for the different image transformation methods, but the system made the correct choice in 77% of the cases in which the decision affected the OCR score (in the range [0,1]) by at least 0.01, and it made the correct choice 64% of the time overall.
NASA Astrophysics Data System (ADS)
Uríčková, Veronika; Sádecká, Jana
2015-09-01
The identification of the geographical origin of beverages is one of the most important issues in food chemistry. Spectroscopic methods provide a relatively rapid and low cost alternative to traditional chemical composition or sensory analyses. This paper reviews the current state of development of ultraviolet (UV), visible (Vis), near infrared (NIR) and mid infrared (MIR) spectroscopic techniques combined with pattern recognition methods for determining the geographical origin of both wines and distilled drinks. UV, Vis, and NIR spectra contain broad bands with weak spectral features, limiting their discrimination ability. Despite this expected shortcoming, each of the three spectroscopic ranges (NIR, Vis/NIR and UV/Vis/NIR) provides average correct classification higher than 82%. Although average correct classification is similar for the NIR and MIR regions, in some instances MIR data processing improves prediction. An advantage of using MIR is that MIR peaks are better defined and more easily assigned than NIR bands. In general, success in a classification depends on both the spectral range and the pattern recognition methods. The main remaining problem is the construction of the databanks needed by all of these methods.
The Immune System as a Model for Pattern Recognition and Classification
Carter, Jerome H.
2000-01-01
Objective: To design a pattern recognition engine based on concepts derived from mammalian immune systems. Design: A supervised learning system (Immunos-81) was created using software abstractions of T cells, B cells, antibodies, and their interactions. Artificial T cells control the creation of B-cell populations (clones), which compete for recognition of “unknowns.” The B-cell clone with the “simple highest avidity” (SHA) or “relative highest avidity” (RHA) is considered to have successfully classified the unknown. Measurement: Two standard machine learning data sets, consisting of eight nominal and six continuous variables, were used to test the recognition capabilities of Immunos-81. The first set (Cleveland), consisting of 303 cases of patients with suspected coronary artery disease, was used to perform a ten-way cross-validation. After completing the validation runs, the Cleveland data set was used as a training set prior to presentation of the second data set, consisting of 200 unknown cases. Results: For cross-validation runs, correct recognition using SHA ranged from a high of 96 percent to a low of 63.2 percent. The average correct classification for all runs was 83.2 percent. Using the RHA metric, 11.2 percent were labeled “too close to determine” and no further attempt was made to classify them. Of the remaining cases, 85.5 percent were correctly classified. When the second data set was presented, correct classification occurred in 73.5 percent of cases when SHA was used and in 80.3 percent of cases when RHA was used. Conclusions: The immune system offers a viable paradigm for the design of pattern recognition systems. Additional research is required to fully exploit the nuances of immune computation. PMID:10641961
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
Topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% of central and 14.1% of border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border group, which even exceeds the number of correctly classified neurons in that group (37.8%), showing an obvious failure of the network to classify neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of border neurons are misclassified, far more than the 2.7% correctly classified in that group, again confirming the network's failure to classify neurons correctly. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can thus be classified into four neuron types according to their quantitative histomorphological properties. These types comprise two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are most probably equally distributed throughout the dentate nucleus, as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tamboer, P; Vorst, H C M; Ghebreab, S; Scholte, H S
2016-01-01
Meta-analytic studies suggest that dyslexia is characterized by subtle and spatially distributed variations in brain anatomy, although many variations fail to remain significant after correction for multiple comparisons. To circumvent the significance issues characteristic of conventional analysis techniques, and to provide predictive value, we applied a machine learning technique--support vector machine--to differentiate between subjects with and without dyslexia. In a sample of 22 students with dyslexia (20 women) and 27 students without dyslexia (25 women), aged 18-21 years, a classification performance of 80% (p < 0.001; d-prime = 1.67) was achieved on the basis of differences in gray matter (sensitivity 82%, specificity 78%). The voxels most reliable for classification were found in the left occipital fusiform gyrus (LOFG), the right occipital fusiform gyrus (ROFG), and the left inferior parietal lobule (LIPL). Additionally, we found that classification certainty (i.e., the percentage of times a subject was correctly classified) correlated with severity of dyslexia (r = 0.47). Furthermore, various significant correlations were found between the three anatomical regions and behavioural measures of spelling, phonology and whole-word reading. No correlations were found with behavioural measures of short-term memory and visual/attentional confusion. These data indicate that the LOFG, ROFG and LIPL are neuro-endophenotypes and potentially biomarkers for types of dyslexia related to reading, spelling and phonology. In a second and independent sample of 876 young adults from a general population, the classifier trained on the first sample was tested, resulting in a classification performance of 59% (p = 0.07; d-prime = 0.65). This decline in classification performance resulted from a large percentage of false alarms. This study provides support for the use of machine learning in anatomical brain imaging.
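The reported d-prime can be checked from the sensitivity and specificity figures with the standard signal-detection formula d' = z(hit rate) − z(false-alarm rate); a stdlib-only sketch (the small discrepancy from the reported 1.67 comes from rounding of the published rates):

```python
from statistics import NormalDist

def d_prime(sensitivity: float, specificity: float) -> float:
    """Signal-detection sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = sensitivity                 # readers with dyslexia correctly flagged
    false_alarm_rate = 1.0 - specificity   # controls incorrectly flagged
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(0.82, 0.78), 2))
```

With the rounded rates this lands near the paper's 1.67, illustrating how the index combines the two error types into one number.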
Exploiting ensemble learning for automatic cataract detection and grading.
Yang, Ji-Jiang; Li, Jianqiang; Shen, Ruifang; Zeng, Yang; He, Jian; Bi, Jing; Li, Yong; Zhang, Qinyan; Peng, Lihui; Wang, Qing
2016-02-01
Cataract is defined as a lenticular opacity, usually presenting with poor visual acuity. It is one of the most common causes of visual impairment worldwide. Early diagnosis demands the expertise of trained healthcare professionals, which may present a barrier to early intervention due to the underlying costs. To date, studies reported in the literature utilize a single learning model for retinal image classification in grading cataract severity. We present an ensemble learning based approach as a means of improving diagnostic accuracy. Three independent feature sets, i.e., wavelet-, sketch-, and texture-based features, are extracted from each fundus image. For each feature set, two base learning models, i.e., a Support Vector Machine and a Back Propagation Neural Network, are built. Then, two ensemble methods, majority voting and stacking, are investigated to combine the multiple base learning models for final fundus image classification. Empirical experiments are conducted for the cataract detection (two-class, i.e., cataractous or non-cataractous) and cataract grading (four-class, i.e., non-cataractous, mild, moderate or severe) tasks. The best performance of the ensemble classifier is 93.2% and 84.5% in terms of the correct classification rate for the detection and grading tasks, respectively. The results demonstrate that the ensemble classifier significantly outperforms the single learning models, illustrating the effectiveness of the proposed approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
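Majority voting over the six base learners (two models × three feature sets) can be sketched as follows; the labels and vote pattern are invented for illustration, not taken from the paper's data:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one image by majority vote.

    predictions: one predicted label from each base learner.
    Ties are broken in favour of the label encountered first.
    """
    return Counter(predictions).most_common(1)[0][0]

# Six hypothetical base learners (2 models x 3 feature sets) voting on one image:
votes = ["moderate", "mild", "moderate", "moderate", "severe", "moderate"]
print(majority_vote(votes))  # -> moderate
```

Stacking differs only in that the base predictions become features for a second-level learner instead of being counted directly.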
Brain tumor classification of microscopy images using deep residual learning
NASA Astrophysics Data System (ADS)
Ishikawa, Yota; Washiya, Kiyotada; Aoki, Kota; Nagahashi, Hiroshi
2016-12-01
The incidence of brain tumors is about 1.4 per 10,000. In general, cytotechnologists take charge of cytologic diagnosis; however, the number of cytotechnologists who can diagnose brain tumors is insufficient because of the highly specialized skill required. Computer-aided diagnosis by computational image analysis may alleviate the shortage of experts and support objective pathological examinations. Our purpose is to support diagnosis from a microscopy image of brain cortex and to identify brain tumors by medical image processing. In this study, we analyze astrocytes, a type of glial cell of the central nervous system. It is not easy even for an expert to discriminate brain tumors correctly, since the difference between astrocytes and low-grade astrocytoma (tumors formed from astrocytes) is very slight. We present a novel method to segment cell regions robustly using BING objectness estimation and to classify brain tumors using deep convolutional neural networks (CNNs) constructed by deep residual learning. BING is a fast object detection method, and we use a pretrained BING model to detect brain cells. After that, we apply a sequence of post-processing steps, including Voronoi diagrams, binarization, and the watershed transform, to obtain a fine segmentation. For classification using CNNs, the usual data augmentation techniques are applied to the brain cell database. Experimental results showed 98.5% classification accuracy and 98.2% segmentation accuracy.
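The core idea of deep residual learning is the identity skip connection, which lets a layer learn a correction to its input rather than a whole new mapping. A minimal numpy sketch of one residual unit's forward pass (random placeholder weights, not the paper's trained network):

```python
import numpy as np

def residual_block(x, W1, W2):
    """Forward pass of a basic residual unit: y = relu(x + W2 @ relu(W1 @ x)).

    The skip connection (the bare "x +" term) is what makes very deep
    networks trainable: with W1 = W2 = 0 the block is simply the identity
    followed by relu, so depth cannot hurt the starting point.
    """
    relu = lambda v: np.maximum(v, 0.0)
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)
```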
[Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].
Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang
2013-10-01
In order to improve the accuracy and efficiency of weed identification, differences in spectral reflectance were employed to distinguish between crops and weeds. First, different combinations of Savitzky-Golay (SG) convolutional derivation and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then, clustering analysis of the various plant types was carried out using principal component analysis (PCA), and the feature wavelengths sensitive for classifying the plant types were extracted from the loading plots of the optimal principal components in the PCA results. Finally, with the feature wavelengths as input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various plant types. In the experiments on classifying cabbages and weeds, the optimal pretreatment was the combined application of MSC and SG convolutional derivation with the SG parameters set to a 1st-order derivative, a 3rd-degree polynomial and 51 smoothing points; on this basis, 23 feature wavelengths were extracted according to the top three principal components of the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths as input variables, the classification rates of the modeling set and the prediction set reached 98.6% and 100%, respectively.
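Selecting feature wavelengths from PCA loading plots can be approximated programmatically: rank wavelengths by their largest absolute loading on the leading components. This numpy sketch uses synthetic spectra and invented sizes, so it only illustrates the selection principle, not the paper's data:

```python
import numpy as np

def top_loading_wavelengths(X, n_components=3, n_features=23):
    """Rank wavelengths by their largest absolute PCA loading across the
    first n_components principal components (SVD on mean-centred spectra),
    mimicking manual selection from loading plots."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = np.abs(Vt[:n_components])   # components x wavelengths
    score = loadings.max(axis=0)           # best loading per wavelength
    return np.argsort(score)[::-1][:n_features]

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 200))         # 40 synthetic spectra, 200 wavelengths
X[:, 50] += np.linspace(0, 30, 40)         # wavelength 50 carries a strong signal
idx = top_loading_wavelengths(X, n_components=3, n_features=23)
print(50 in idx)
```

The wavelength carrying systematic variation dominates a leading component and is picked up, while pure-noise wavelengths fill out the remainder of the list.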
Phaeochromocytoma [corrected] crisis.
Whitelaw, B C; Prague, J K; Mustafa, O G; Schulte, K-M; Hopkins, P A; Gilbert, J A; McGregor, A M; Aylwin, S J B
2014-01-01
Phaeochromocytoma [corrected] crisis is an endocrine emergency associated with significant mortality. There is little published guidance on the management of phaeochromocytoma [corrected] crisis. This clinical practice update summarizes the relevant published literature, including a detailed review of cases published in the past 5 years, and a proposed classification system. We review the recommended management of phaeochromocytoma [corrected] crisis including the use of alpha-blockade, which is strongly associated with survival of a crisis. Mechanical circulatory supportive therapy (including intra-aortic balloon pump or extra-corporeal membrane oxygenation) is strongly recommended for patients with sustained hypotension. Surgical intervention should be deferred until medical stabilization is achieved. © 2013 John Wiley & Sons Ltd.
Deep learning application: rubbish classification with aid of an android device
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Zhan, Jie
2017-06-01
Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know the correct category a piece of rubbish belongs to, and based on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed so that users can capture images of unclassified rubbish, upload them to the web server for analysis in the background and retrieve the feedback, conveniently obtaining classification guidance on an Android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape can be used to judge its classification reliably, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide on its classification. Nevertheless, the system shows promise as an auxiliary tool for rubbish classification if the network training strategy is optimized further.
An experiment in multispectral, multitemporal crop classification using relaxation techniques
NASA Technical Reports Server (NTRS)
Davis, L. S.; Wang, C.-Y.; Xie, H.-C
1983-01-01
The paper describes the result of an experimental study concerning the use of probabilistic relaxation for improving pixel classification rates. Two LACIE sites were used in the study and in both cases, relaxation resulted in a marked improvement in classification rates.
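A generic probabilistic relaxation update can be sketched as below. Real relaxation labeling uses per-pixel spatial neighbourhoods and compatibilities estimated from the data; this sketch substitutes the global mean belief for the neighbourhood and an invented compatibility matrix, so it shows only the re-weight-and-renormalise mechanics:

```python
import numpy as np

def relaxation_step(P, C):
    """One probabilistic relaxation update of pixel class probabilities.

    P : (n_pixels, n_classes) current label probabilities
    C : (n_classes, n_classes) compatibility between neighbouring labels
    Each pixel's probabilities are re-weighted by the support from the
    mean neighbour belief and then renormalised to sum to one.
    """
    support = P.mean(axis=0) @ C.T           # support for each class
    Q = P * support                          # re-weight
    return Q / Q.sum(axis=1, keepdims=True)  # renormalise each pixel

P = np.array([[0.6, 0.4], [0.55, 0.45], [0.5, 0.5]])
C = np.array([[1.0, 0.2], [0.2, 1.0]])       # like labels support each other
P1 = relaxation_step(P, C)
print(P1)
```

Because like labels support each other, the majority class is reinforced at every pixel, which is the mechanism behind the improved pixel classification rates.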
Authorship Discovery in Blogs Using Bayesian Classification with Corrective Scaling
2008-06-01
Wilhelm Fucks discriminated between authors using the average number of syllables per word and the average distance between equal-syllabled words [8]; Fucks concluded that such a study reveals a "possibility of a quantitative classification" of authorship. The report also presents a diagram of n-syllable word frequencies after Fucks and a confusion matrix for all test documents.
Human-Centered Planning for Effective Task Autonomy
2012-05-01
For the occupant-query action a_ask, the informative observations satisfy Σ_{o ≠ o_null} p(o | s, a_ask) = α_s (3.1); when the occupant is not available, the query results in the null observation o_null. Many classification and inference algorithms give a measure of uncertainty, the probability of a label, and prior systems let users provide corrective feedback for handwriting recognition, email classification, and other domains (e.g., Mankoff, Abowd, and Hudson (2000); Scaffidi (2009)).
Magallón-Neri, Ernesto; González, Esther; Canalda, Gloria; Forns, Maria; De La Fuente, J Eugenio; Martínez, Estebán; García, Raquel; Lara, Anais; Vallès, Antoni; Castro-Fornieles, Josefina
2014-05-01
The objective of this study is to explore and compare the prevalence of categorical and dimensional personality disorders (PDs) and their severity in Spanish adolescents with eating disorders (EDs). The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition and International Classification of Diseases, Tenth Revision modules of the International Personality Disorder Examination were administered to a sample of 100 female adolescents with EDs (mean age=15.8 years, SD=0.9). Thirty-three per cent of the sample had at least one PD, in most cases a simple PD. The rate of PDs was 64-76% in bulimia patients, 22-28% in anorexia and 25% in EDs not otherwise specified. The highest dimensional scores were observed in bulimia, mainly in borderline and histrionic PDs, with higher scores for anankastic PD in anorexia than in the other ED diagnoses. Overall, purging-type EDs had higher cluster B personality pathology scores than restrictive-type EDs. Adolescent female patients with ED have a risk of presenting a comorbid PD, especially patients with bulimia and purging-type EDs. Copyright © 2013 John Wiley & Sons, Ltd and Eating Disorders Association.
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with approximately a 60% overall success rate for correctly recognized residues in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, the apparent average thickness of the surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of the prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
Classification of cardiac patient states using artificial neural networks
Kannathal, N; Acharya, U Rajendra; Lim, Choo Min; Sadasivan, PK; Krishnan, SM
2003-01-01
Electrocardiogram (ECG) is a nonstationary signal; therefore, the disease indicators may occur at random in the time scale. This may require the patient be kept under observation for long intervals in the intensive care unit of hospitals for accurate diagnosis. The present study examined the classification of the states of patients with certain diseases in the intensive care unit using their ECG and an Artificial Neural Networks (ANN) classification system. The states were classified into normal, abnormal and life threatening. Seven significant features extracted from the ECG were fed as input parameters to the ANN for classification. Three neural network techniques, namely, back propagation, self-organizing maps and radial basis functions, were used for classification of the patient states. The ANN classifier in this case was observed to be correct in approximately 99% of the test cases. This result was further improved by taking 13 features of the ECG as input for the ANN classifier. PMID:19649222
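The trained weights of the study's networks are of course not given in the abstract; a minimal numpy sketch of the forward pass of a small network mapping seven ECG features to the three patient states (placeholder random weights, illustrative only) might look like:

```python
import numpy as np

def classify_state(features, W1, b1, W2, b2):
    """Forward pass of a small feed-forward network mapping 7 ECG features
    to 3 patient states. Weights here are placeholders; in practice they
    would come from training (e.g., back propagation)."""
    h = np.tanh(features @ W1 + b1)          # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())        # stable softmax
    p /= p.sum()
    states = ["normal", "abnormal", "life-threatening"]
    return states[int(np.argmax(p))], p

rng = np.random.default_rng(7)
W1, b1 = rng.standard_normal((7, 5)), np.zeros(5)
W2, b2 = rng.standard_normal((5, 3)), np.zeros(3)
state, p = classify_state(rng.standard_normal(7), W1, b1, W2, b2)
print(state, p)
```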
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to the terabytes of data typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps capture the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
7 CFR 400.304 - Nonstandard Classification determinations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... changes are necessary in assigned yields or premium rates under the conditions set forth in § 400.304(f... Classification determinations. (a) Nonstandard Classification determinations can affect a change in assigned yields, premium rates, or both from those otherwise prescribed by the insurance actuarial tables. (b...
Improving crop classification through attention to the timing of airborne radar acquisitions
NASA Technical Reports Server (NTRS)
Brisco, B.; Ulaby, F. T.; Protz, R.
1984-01-01
Radar remote sensors may provide valuable input to crop classification procedures because of (1) their independence of weather conditions and solar illumination, and (2) their ability to respond to differences in crop type. Manual classification of multidate synthetic aperture radar (SAR) imagery resulted in an overall accuracy of 83 percent for corn, forest, grain, and 'other' cover types. Forests and corn fields were identified with accuracies approaching or exceeding 90 percent. Grain fields and 'other' fields were often confused with each other, resulting in classification accuracies of 51 and 66 percent, respectively. The 83 percent correct classification represents a 10 percent improvement when compared to similar SAR data for the same area collected at alternate time periods in 1978. These results demonstrate that improvements in crop classification accuracy can be achieved with SAR data by synchronizing data collection times with crop growth stages in order to maximize differences in the geometric and dielectric properties of the cover types of interest.
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding-window approach; an analysis of the classification produced with this technique identified forested areas. Additional information regarding only the forested areas was then extracted by employing a pixel-by-pixel signature development program, which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
Selih, Vid S; Sala, Martin; Drgan, Viktor
2014-06-15
Inductively coupled plasma mass spectrometry and optical emission spectrometry were used to determine the multi-element composition of 272 bottled Slovenian wines. To achieve geographical classification of the wines by their elemental composition, principal component analysis (PCA) and counter-propagation artificial neural networks (CPANN) were used. Of the 49 elements measured, 19 were used to build the final classification models. CPANN was used for the final predictions because of its superior results; the best model gave 82% correct predictions for an external set of white wine samples. Taking into account the small size of the Slovenian wine-growing regions as a whole, we consider the classification results very good. For the red wines, which mostly came from one region, even sub-region classification was possible with high precision. From the level maps of the CPANN model, some of the elements most important for classification were identified. Copyright © 2013 Elsevier Ltd. All rights reserved.
Paudel, M R; Mackenzie, M; Fallone, B G; Rathee, S
2013-08-01
To evaluate the metal artifacts in kilovoltage computed tomography (kVCT) images that are corrected using a normalized metal artifact reduction (NMAR) method with megavoltage CT (MVCT) prior images. Tissue characterization phantoms containing bilateral steel inserts are used in all experiments. Two MVCT images, one without any metal artifact corrections and the other corrected using a modified iterative maximum likelihood polychromatic algorithm for CT (IMPACT) are translated to pseudo-kVCT images. These are then used as prior images without tissue classification in an NMAR technique for correcting the experimental kVCT image. The IMPACT method in MVCT included an additional model for the pair∕triplet production process and the energy dependent response of the MVCT detectors. An experimental kVCT image, without the metal inserts and reconstructed using the filtered back projection (FBP) method, is artificially patched with the known steel inserts to get a reference image. The regular NMAR image containing the steel inserts that uses tissue classified kVCT prior and the NMAR images reconstructed using MVCT priors are compared with the reference image for metal artifact reduction. The Eclipse treatment planning system is used to calculate radiotherapy dose distributions on the corrected images and on the reference image using the Anisotropic Analytical Algorithm with 6 MV parallel opposed 5×10 cm2 fields passing through the bilateral steel inserts, and the results are compared. Gafchromic film is used to measure the actual dose delivered in a plane perpendicular to the beams at the isocenter. The streaking and shading in the NMAR image using tissue classifications are significantly reduced. However, the structures, including metal, are deformed. Some uniform regions appear to have eroded from one side. There is a large variation of attenuation values inside the metal inserts. Similar results are seen in commercially corrected image. 
Use of MVCT prior images without tissue classification in NMAR significantly reduces these problems. The radiation dose calculated on the reference image is close to the dose measured using the film. Compared to the reference image, the calculated dose difference at the isocenter in the conventional NMAR image, and in the images corrected using the uncorrected MVCT image and the IMPACT-corrected MVCT image as priors, is ∼15.5%, ∼5%, and ∼2.7%, respectively. The deformation and erosion of structures present in regular NMAR-corrected images can be largely reduced by using MVCT priors without tissue segmentation. Because the attenuation value of the metal is incorrect, large dose differences relative to the true value can result when using the conventional NMAR image; this difference can be significantly reduced if MVCT images are used as priors. Reduced tissue deformation, better tissue visualization, and correct information about the electron density of tissues and metals in the artifact-corrected images could help delineate structures better and calculate radiation dose more correctly, thus enhancing the quality of radiotherapy treatment planning.
Blind identification of image manipulation type using mixed statistical moments
NASA Astrophysics Data System (ADS)
Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu
2015-01-01
We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
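Moments of a histogram's characteristic function, one of the three feature types, can be sketched as follows; the exact moment definition and normalisation here are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def cf_moments(hist, n_moments=3):
    """Statistical moments of the characteristic function of a histogram.

    The characteristic function is taken as the DFT of the histogram; the
    n-th moment weights the one-sided magnitudes |H(f)| by f^n, capturing
    how a manipulation (blurring, sharpening, ...) reshapes the frequency
    content of the image statistics.
    """
    H = np.abs(np.fft.fft(hist))[: len(hist) // 2]  # one-sided magnitude
    f = np.arange(len(H)) / len(H)                  # normalised frequency in [0, 1)
    return np.array([(f**n * H).sum() / H.sum() for n in range(1, n_moments + 1)])

hist, _ = np.histogram(np.random.default_rng(2).standard_normal(4096), bins=64)
m = cf_moments(hist, 3)
print(m)
```

A feature vector for the classifier would concatenate such moments across wavelet subbands and the spatial image.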
NASA Astrophysics Data System (ADS)
Warren, Sean N.; Kallu, Raj R.; Barnard, Chase K.
2016-11-01
Underground gold mines in Nevada are exploiting increasingly deep ore bodies comprised of weak to very weak rock masses. The Rock Mass Rating (RMR) classification system is widely used at underground gold mines in Nevada and is applicable in fair- to good-quality rock masses, but it is difficult to apply and loses reliability in very weak rock mass to soil-like material. Because very weak rock masses are transition materials that border engineering rock mass and soil classification systems, soil classification may sometimes be easier and more appropriate for providing insight into material behavior and properties. The Unified Soil Classification System (USCS) is the most likely choice for the classification of very weak rock mass to soil-like material because of its accepted use in tunnel engineering projects and its ability to predict soil-like material behavior underground. A correlation between the RMR and USCS systems was developed by comparing underground geotechnical RMR mapping to laboratory testing of bulk samples from the same locations, thereby assigning a numeric RMR value to the USCS classification that can be used in spreadsheet calculations and geostatistical analyses. The geotechnical classification system presented in this paper, including the USCS-RMR correlation, RMR rating equations, and the Geo-Pick Strike Index, is collectively introduced as the Weak Rock Mass Rating System (W-RMR). It is the authors' hope that this system will aid in the classification of weak rock masses and in the development of more usable design tools based on the RMR system. More broadly, the RMR-USCS correlation and the W-RMR system help define the transition between engineering soil and rock mass classification systems and may provide insight for geotechnical design in very weak rock masses.
77 FR 39747 - Changes in Postal Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... with the Commission of a proposal characterized as a minor classification change under 39 CFR parts 3090 and 3091, along with a conforming revision to the Mail Classification Schedule (MCS).\\1\\ The... Flat Rate Envelope options. \\1\\ Notice of United States Postal Service of Classification Changes, June...
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-05-08
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
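A temporal high-pass correction with a gated offset update can be sketched as below. The simple amplitude gate here stands in for the paper's fuzzy scene classification, and the time constant and threshold are invented, so this only illustrates the mechanism of removing fixed-pattern (ripple) noise without absorbing scene detail:

```python
import numpy as np

def thp_nuc(frames, time_constant=16.0, threshold=20.0):
    """Temporal high-pass nonuniformity correction with a gated update.

    A per-pixel recursive low-pass estimates the fixed-pattern offset;
    the gate freezes the estimate wherever a frame departs strongly from
    its own mean, so genuine scene structure is not learned as offset.
    """
    offset = np.zeros_like(frames[0], dtype=float)
    out = []
    for frame in frames:
        residual = frame - offset
        update = np.abs(residual - residual.mean()) < threshold   # crude gate
        offset = np.where(update, offset + residual / time_constant, offset)
        out.append(residual)
    return np.array(out)

rng = np.random.default_rng(3)
fpn = rng.normal(0, 5, (32, 32))              # fixed-pattern (ripple) noise
frames = [100.0 + fpn for _ in range(200)]    # flat sky scene plus FPN
corrected = thp_nuc(frames)
print(corrected[0].std(), corrected[-1].std())
```

On this flat synthetic scene the residual nonuniformity decays toward zero as the recursive estimate converges.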
Moran, Lara; Andres, Sonia; Allen, Paul; Moloney, Aidan P
2018-08-01
Visible-near infrared spectroscopy (Vis-NIRS) has been suggested to have potential for the authentication of food products. The aim of the present preliminary study was to assess whether this technology can be used to authenticate the ageing time (3, 7, 14 and 21 days post mortem) of beef steaks from three different muscles (M. Longissimus thoracis, M. Gluteus medius and M. Semitendinosus). Various mathematical pre-treatments were applied to the spectra to correct scattering and overlapping effects, and partial least squares-discriminant analysis (PLS-DA) procedures were then applied. The best models were specific to each muscle, and the ability to predict ageing time was validated using full (leave-one-out) cross-validation, whereas authentication performance was evaluated using sensitivity, specificity and overall correct classification. The results indicate that overall correct classification ranging from 94.2 to 100% was achieved, depending on the muscle. In conclusion, Vis-NIRS technology seems a valid tool for the authentication of the ageing time of beef steaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
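Full (leave-one-out) cross-validation is straightforward to sketch. Here a nearest-centroid rule stands in for PLS-DA, on synthetic "spectra"; both the data and the classifier are illustrative, not the study's:

```python
import numpy as np

def loo_correct_rate(X, y, classify):
    """Leave-one-out cross-validation: hold out each spectrum in turn,
    fit on the rest, and report the overall correct classification rate."""
    hits = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        hits += classify(X[keep], y[keep], X[i]) == y[i]
    return hits / len(X)

def nearest_centroid(Xtr, ytr, x):
    # stand-in for PLS-DA: assign to the class with the closest mean spectrum
    classes = sorted(set(ytr))
    d = [np.linalg.norm(x - Xtr[ytr == c].mean(axis=0)) for c in classes]
    return classes[int(np.argmin(d))]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(1.5, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)          # two "ageing time" classes
print(loo_correct_rate(X, y, nearest_centroid))
```

Because each held-out sample never influences the model that classifies it, the resulting rate is an honest estimate for small sample sizes.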
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
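Inverse class frequency weighting, one of the additively helpful techniques, can be sketched in a few lines; the class names and counts are invented for illustration:

```python
from collections import Counter

def inverse_class_frequency_weights(labels):
    """Weight each class inversely to its frequency so that rare smoking
    statuses are not swamped by the majority class during training."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / n for c, n in counts.items()}

labels = ["non-smoker"] * 70 + ["smoker"] * 20 + ["unknown"] * 10
w = inverse_class_frequency_weights(labels)
print(w)
```

The weights would then scale per-example losses (or class priors) in the downstream classifier.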
NASA Astrophysics Data System (ADS)
Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven
2008-03-01
Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor with which to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch between the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs did. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes, at the cost of a small decrease in specificity.
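The PVE scaling factor can be illustrated directly: blur a unit-intensity lesion of the ROI's shape with the scanner PSF and take the mean value remaining inside the ROI as a recovery coefficient. The Gaussian PSF, image sizes and sigma below are invented, so this is a sketch of the idea rather than the authors' implementation:

```python
import numpy as np

def recovery_coefficient(roi_mask, psf_sigma):
    """Estimate the recovery coefficient for a homogeneous lesion of the
    ROI's shape: blur a unit-intensity lesion with a Gaussian PSF and take
    the mean value remaining inside the ROI."""
    # separable Gaussian blur implemented directly to stay dependency-free
    radius = int(3 * psf_sigma)
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / psf_sigma) ** 2)
    k /= k.sum()
    img = roi_mask.astype(float)
    for axis in (0, 1):
        img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, img)
    return img[roi_mask].mean()

roi = np.zeros((64, 64), dtype=bool)
roi[28:36, 28:36] = True                  # small lesion outlined on CT
rc = recovery_coefficient(roi, psf_sigma=3.0)
measured_suv = 4.0
corrected_suv = measured_suv / rc         # scale up to undo the partial volume loss
print(rc, corrected_suv)
```

The smaller the lesion relative to the PSF, the smaller the recovery coefficient and the larger the upward SUV correction.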
An Analysis and Classification of Dying AGB Stars Transitioning to Pre-Planetary Nebulae
NASA Technical Reports Server (NTRS)
Blake, Adam C.
2011-01-01
The principal objective of the project is to understand part of the life and death process of a star. Toward the end of its life, a star expels its mass at a very rapid rate. We want to understand how these Asymptotic Giant Branch (AGB) stars begin forming asymmetric structures as they start evolving towards the planetary nebula phase, and why planetary nebulae show a very large variety of non-round geometrical shapes. To do this, we analyzed images of just-forming pre-planetary nebulae from Hubble surveys. These images were run through various image correction processes, such as saturation correction and cosmic ray removal, using in-house software to bring out the circumstellar structure. We classified the visible structure based on qualitative features such as lobes, waists, halos, and other structures. Radial and azimuthal intensity cuts were extracted from the images to quantitatively examine the circumstellar structure and measure departures from the smooth spherical outflow expected during most of the AGB mass-loss phase. By understanding the asymmetrical structure, we hope to understand the mechanisms that drive this stellar evolution.
Stefano, A; Gallivanone, F; Messa, C; Gilardi, M C; Gastiglioni, I
2014-12-01
The aim of this work is to evaluate the metabolic impact of Partial Volume Correction (PVC) on the measurement of the Standard Uptake Value (SUV) from [18F]FDG PET-CT oncological studies for treatment monitoring purposes. Twenty-nine breast cancer patients with bone lesions (42 lesions in total) underwent [18F]FDG PET-CT studies after surgical resection of the primary breast tumor, before (PET-I) and after (PET-II) chemotherapy and hormone treatment. PVC of bone lesion uptake was performed on the two [18F]FDG PET-CT studies, using a method based on Recovery Coefficients (RC) and on an automatic measurement of lesion metabolic volume. Body-weight average SUV was calculated for each lesion, with and without PVC. The accuracy, reproducibility, clinical feasibility, and metabolic impact on treatment response of the considered PVC method were evaluated. The PVC method was found clinically feasible in bone lesions, with an accuracy of 93% for lesions with a sphere-equivalent diameter >1 cm. Applying PVC, average SUV values increased from 7% up to 154% across the PET-I and PET-II studies, demonstrating the need for the correction. As the main finding, PVC modified the therapy response classification in 6 cases according to the EORTC 1999 classification and in 5 cases according to the PERCIST 1.0 classification. PVC has an important metabolic impact on the assessment of tumor response to treatment in [18F]FDG PET-CT oncological studies.
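Why PVC can change a response call can be illustrated with a simplified EORTC-1999-style rule. The sketch below uses only the +/-25% SUV-change threshold (the full criteria also consider new lesions and the number of completed treatment cycles), and the SUVs and recovery coefficients are invented for illustration:

```python
def eortc_like_response(suv_pre, suv_post, threshold_pct=25.0):
    # Simplified EORTC-1999-style classification from the percent
    # change in SUV between the pre- and post-treatment scans.
    change = 100.0 * (suv_post - suv_pre) / suv_pre
    if change <= -threshold_pct:
        return "partial metabolic response"
    if change >= threshold_pct:
        return "progressive metabolic disease"
    return "stable metabolic disease"

# Uncorrected SUVs 4.0 -> 3.2 give a -20% change (stable disease).
# If the pre-treatment lesion is small (recovery coefficient 0.6) and
# the post-treatment lesion larger (0.9), the PVC-corrected SUVs
# 4.0/0.6 -> 3.2/0.9 show a ~47% drop, i.e. a partial response.
```

Because the recovery coefficient depends on lesion size, correcting the two scans by different factors can move the percent change across a response threshold, which is exactly how PVC reclassified several lesions in the study above.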
Does ASA classification impact success rates of endovascular aneurysm repairs?
Conners, Michael S; Tonnessen, Britt H; Sternbergh, W Charles; Carter, Glen; Yoselevitz, Moises; Money, Samuel R
2002-09-01
The purpose of this study was to evaluate the technical success, clinical success, postoperative complication rate, need for a secondary procedure, and mortality rate with endovascular aneurysm repair (EAR), based on the physical status classification scheme advocated by the American Society of Anesthesiologists (ASA). At a single institution 167 patients underwent attempted EAR. Query of a prospectively maintained database supplemented with a retrospective review of medical records was used to gather statistics pertaining to patient demographics and outcome. In patients selected for EAR on the basis of acceptable anatomy, technical and clinical success rates were not significantly different among the different ASA classifications. Importantly, postoperative complication and 30-day mortality rates do not appear to significantly differ among the different ASA classifications in this patient population.
Fractures of the cervical spine
Marcon, Raphael Martus; Cristante, Alexandre Fogaça; Teixeira, William Jacobsen; Narasaki, Douglas Kenji; Oliveira, Reginaldo Perilo; de Barros Filho, Tarcísio Eloy Pessoa
2013-01-01
OBJECTIVES: The aim of this study was to review the literature on cervical spine fractures. METHODS: The literature on the diagnosis, classification, and treatment of lower and upper cervical fractures and dislocations was reviewed. RESULTS: Fractures of the cervical spine may be present in polytraumatized patients and should be suspected in patients complaining of neck pain. These fractures are more common in men approximately 30 years of age and are most often caused by automobile accidents. The cervical spine is divided into the upper cervical spine (occiput-C2) and the lower cervical spine (C3-C7), according to anatomical differences. Fractures in the upper cervical spine include fractures of the occipital condyle and the atlas, atlanto-axial dislocations, fractures of the odontoid process, and hangman's fractures in the C2 segment. These fractures are characterized based on specific classifications. In the lower cervical spine, fractures follow the same pattern as in other segments of the spine; currently, the most widely used classification is the SLIC (Subaxial Injury Classification), which predicts the prognosis of an injury based on morphology, the integrity of the disc-ligamentous complex, and the patient's neurological status. It is important to correctly classify the fracture to ensure appropriate treatment. Nerve or spinal cord injuries, pseudarthrosis or malunion, and postoperative infection are the main complications of cervical spine fractures. CONCLUSIONS: Fractures of the cervical spine are potentially serious and devastating if not properly treated. Achieving the correct diagnosis and classification of a lesion is the first step toward identifying the most appropriate treatment, which can be either surgical or conservative. PMID:24270959
[Differentiation between moisture lesions and pressure ulcers using photographs in a critical area].
Valls-Matarín, Josefa; Del Cotillo-Fuente, Mercedes; Pujol-Vila, María; Ribal-Prior, Rosa; Sandalinas-Mulero, Inmaculada
2016-01-01
To identify the difficulties nurses have in differentiating between moisture lesions and pressure ulcers, to assess whether pressure ulcers are correctly classified according to the system of the Grupo Nacional para el Estudio y Asesoramiento de Úlceras por Presión y Heridas Crónicas (GNEAUPP), and to determine the degree of agreement in the correct assessment by type and category of injury. Cross-sectional study in a critical care area during 2014. All nurses who agreed to participate were included. They completed a questionnaire with 14 expert-validated photographs of moisture lesions or pressure ulcers in the sacral area and buttocks, with 6 possible answers: pressure ulcer category I, II, III, IV, moisture lesion, and unknown. Demographic data and knowledge of the pressure ulcer classification system according to the GNEAUPP were collected. The study involved 98% of the population (n=56); 98.2% knew the GNEAUPP classification system; 35.2% of moisture lesions were classified as pressure ulcers, most of them as category II (18.9%). Of the pressure ulcer photographs, 14.8% were identified as moisture lesions and 16.1% were assigned to another category. Agreement between nurses yielded a global kappa index of .38 (95% CI: .29-.57). There are difficulties in differentiating between pressure ulcers and moisture lesions, especially in the initial categories. Nurses perceive that they know the pressure ulcer classification, but they do not apply it correctly. The degree of agreement in the diagnosis of skin lesions was low. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Paradella, W. R.; Vitorello, I.
1982-01-01
Several aspects of computer-assisted analysis techniques for image enhancement and thematic classification, by which LANDSAT MSS imagery may be treated quantitatively, are explained. In geological applications, computer processing of digital data arguably allows the fullest use of LANDSAT data, by displaying enhanced and corrected data for visual analysis and by evaluating and assigning each spectral pixel to a given class.
Effect of the atmosphere on the classification of LANDSAT data. [Identifying sugar canes in Brazil]
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Morimoto, T.; Kumar, R.; Molion, L. C. B.
1979-01-01
The author has identified the following significant results. In conjunction with Turner's model for the correction of satellite data for atmospheric interference, the LOWTRAN-3 computer program was used to calculate the atmospheric interference. Use of the program improved the contrast between different natural targets in the MSS LANDSAT data of Brasilia, Brazil. The classification accuracy of sugar cane was improved by about 9% in the multispectral data of Ribeirao Preto, Sao Paulo.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
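The spatial dependence matrices underlying the single-image texture procedure can be computed as follows (a minimal sketch for one offset; the paper's cross-band variant works on grey-tone N-tuples across bands rather than single-band pairs):

```python
def cooccurrence_matrix(image, levels, dx=1, dy=0):
    # Count how often grey tone j occurs at offset (dx, dy) from grey
    # tone i; accumulate symmetrically, so nearest-neighbour pairs are
    # counted in both directions.
    glcm = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                i, j = image[r][c], image[r2][c2]
                glcm[i][j] += 1
                glcm[j][i] += 1
    return glcm

def angular_second_moment(glcm):
    # A Haralick-style texture feature: close to 1 for uniform
    # textures, small for busy ones; it depends only on relative
    # co-occurrence frequencies.
    total = sum(sum(row) for row in glcm)
    return sum((v / total) ** 2 for row in glcm for v in row)
```

Feeding such features into a classifier per image cell is the usual route from co-occurrence statistics to the per-class accuracy figures quoted above.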
Reproducibility of neuroimaging analyses across operating systems
Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.
2015-01-01
Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
Compensatory neurofuzzy model for discrete data classification in biomedical
NASA Astrophysics Data System (ADS)
Ceylan, Rahime
2015-03-01
Biomedical data fall into two main categories, signals and discrete data, so studies in this area address either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG, or EEG signals, and likewise the literature contains many models for classifying discrete data, such as sample values obtained from blood analysis or biopsy. No single algorithm has achieved a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in biomedical pattern recognition. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The classifier was evaluated on two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experiments show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate on the breast cancer dataset and a 69.08% accuracy rate on the diabetes dataset with only 10 iterations.
[Classification in medicine. An introductory reflection on its aim and object].
Giere, W
2007-07-01
Human beings are born with the ability to recognize Gestalt and to classify. However, all classifications depend on their circumstances and intentions. There is no ultimate classification, and there is no single correct classification in medicine either. Examples of classifications of diagnoses, symptoms, and procedures are discussed. The path to gaining knowledge and the basic difference between collecting data (patient file) and sorting data (register) are illustrated using the BAIK information model. The model additionally shows how the doctor can profit from an active electronic patient file that automatically offers other relevant information for the current decision and saves time. "Without classification no new knowledge, no new knowledge through classification": this paradox is eventually resolved, since a change of paradigms requires overcoming the currently valid classification system in medicine as well. Finally, more precise recommendations are given on how doctors can be freed from the burden of the need to classify, and how the whole health system can obtain much more valid data through the coordinated use of IT, without limiting doctors' freedom and creativity, all while saving money.
Autonomous target recognition using remotely sensed surface vibration measurements
NASA Astrophysics Data System (ADS)
Geurts, James; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.; Barr, Dallas N.
1993-09-01
The remotely measured surface vibration signatures of tactical military ground vehicles are investigated for use in target classification and identification friend or foe (IFF) systems. The use of remote surface vibration sensing by a laser radar reduces the effects of partial occlusion, concealment, and camouflage experienced by automatic target recognition systems using traditional imagery in a tactical battlefield environment. Linear Predictive Coding (LPC) efficiently represents the vibration signatures, and nearest neighbor classifiers exploit the LPC feature set using a variety of distortion metrics. Nearest neighbor classifiers achieve an 88 percent classification rate in an eight-class problem, representing a classification performance increase of thirty percent over previous efforts. A novel confidence figure of merit is implemented to attain a 100 percent classification rate with less than 60 percent rejection. The high classification rates are achieved on a target set which would pose significant problems to traditional image-based recognition systems. The targets are presented to the sensor in a variety of aspects and engine speeds at a range of 1 kilometer. The classification rates achieved demonstrate the benefits of using remote vibration measurement in a ground IFF system. The signature modeling and classification system can also be used to identify rotary and fixed-wing targets.
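A nearest-neighbor classifier with a rejection option of the kind described above can be sketched as follows. The abstract does not specify the confidence figure of merit, so the ratio rule below (nearest distance over nearest other-class distance) is an assumed illustration:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nn_classify_with_rejection(feature_vec, library, ratio_threshold=0.8):
    # Nearest-neighbour classification over LPC-style feature vectors.
    # Confidence figure of merit (assumed here): ratio of the nearest
    # distance to the nearest distance from any other class; near 1
    # means the call is ambiguous and should be rejected.
    dists = sorted((euclidean(feature_vec, f), label) for f, label in library)
    best_d, best_label = dists[0]
    rival_d = next((d for d, lab in dists if lab != best_label), None)
    if rival_d is None:
        return best_label                 # library contains only one class
    confidence = best_d / rival_d if rival_d > 0 else 1.0
    return best_label if confidence <= ratio_threshold else None  # None = reject
```

Trading rejection rate against accuracy this way is how a system can report a 100 percent classification rate on the accepted subset while rejecting the ambiguous signatures.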
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images, which mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform-based block classification approaches for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound image into fixed-size, non-overlapping blocks. A frequency transform, either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is then applied to each block. The mean and standard deviation are computed for each 8 × 8 block and used as the feature set to classify blocks as text/graphics or picture/background. The classification accuracy of these block-classification-based segmentation techniques is measured with evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of an increase in block classification time, for both smooth and complex background images.
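The blockwise mean/standard-deviation feature extraction described above can be sketched as follows. For brevity the statistics are taken over raw pixels rather than DCT/DWT coefficients, and the decision threshold is an assumed, tunable value:

```python
def block_features(image, block=8):
    # Mean and standard deviation of each non-overlapping block.
    # (In the paper these statistics are computed on transform
    # coefficients; raw pixel values are used here for brevity.)
    feats = {}
    rows, cols = len(image), len(image[0])
    for r0 in range(0, rows - block + 1, block):
        for c0 in range(0, cols - block + 1, block):
            px = [image[r][c]
                  for r in range(r0, r0 + block)
                  for c in range(c0, c0 + block)]
            mean = sum(px) / len(px)
            std = (sum((p - mean) ** 2 for p in px) / len(px)) ** 0.5
            feats[(r0, c0)] = (mean, std)
    return feats

def classify_block(mean, std, std_threshold=40.0):
    # Sharp black-on-white strokes give high local contrast, so a high
    # standard deviation suggests a text/graphics block; low contrast
    # suggests picture/background.
    return "text/graphics" if std > std_threshold else "picture/background"
```

Precision and recall for the segmentation are then computed by comparing these per-block labels against a ground-truth labelling of the same blocks.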
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
Delineation of marsh types of the Texas coast from Corpus Christi Bay to the Sabine River in 2010
Enwright, Nicholas M.; Hartley, Stephen B.; Brasher, Michael G.; Visser, Jenneke M.; Mitchell, Michael K.; Ballard, Bart M.; Parr, Mark W.; Couvillion, Brady R.; Wilson, Barry C.
2014-01-01
Coastal zone managers and researchers often require detailed information regarding emergent marsh vegetation types for modeling habitat capacities and needs of marsh-reliant wildlife (such as waterfowl and alligator). Detailed information on the extent and distribution of marsh vegetation zones throughout the Texas coast has been historically unavailable. In response, the U.S. Geological Survey, in cooperation and collaboration with the U.S. Fish and Wildlife Service via the Gulf Coast Joint Venture, Texas A&M University-Kingsville, the University of Louisiana-Lafayette, and Ducks Unlimited, Inc., has produced a classification of marsh vegetation types along the middle and upper Texas coast from Corpus Christi Bay to the Sabine River. This study incorporates approximately 1,000 ground reference locations collected via helicopter surveys in coastal marsh areas and about 2,000 supplemental locations from fresh marsh, water, and “other” (that is, nonmarsh) areas. About two-thirds of these data were used for training, and about one-third were used for assessing accuracy. Decision-tree analyses using Rulequest See5 were used to classify emergent marsh vegetation types by using these data, multitemporal satellite-based multispectral imagery from 2009 to 2011, a bare-earth digital elevation model (DEM) based on airborne light detection and ranging (lidar), alternative contemporary land cover classifications, and other spatially explicit variables believed to be important for delineating the extent and distribution of marsh vegetation communities. Image objects were generated from segmentation of high-resolution airborne imagery acquired in 2010 and were used to refine the classification. The classification is dated 2010 because the year is both the midpoint of the multitemporal satellite-based imagery (2009–11) classified and the date of the high-resolution airborne imagery that was used to develop image objects. 
Overall accuracy corrected for bias (accuracy estimate incorporates true marginal proportions) was 91 percent (95 percent confidence interval [CI]: 89.2–92.8), with a kappa statistic of 0.79 (95 percent CI: 0.77–0.81). The classification performed best for saline marsh (user’s accuracy 81.5 percent; producer’s accuracy corrected for bias 62.9 percent) but showed a lesser ability to discriminate intermediate marsh (user’s accuracy 47.7 percent; producer’s accuracy corrected for bias 49.5 percent). Because of confusion in intermediate and brackish marsh classes, an alternative classification containing only three marsh types was created in which intermediate and brackish marshes were combined into a single class. Image objects were reattributed by using this alternative three-marsh-type classification. Overall accuracy, corrected for bias, of this more general classification was 92.4 percent (95 percent CI: 90.7–94.2), and the kappa statistic was 0.83 (95 percent CI: 0.81–0.85). Mean user’s accuracy for marshes within the four-marsh-type and three-marsh-type classifications was 65.4 percent and 75.6 percent, respectively, whereas mean producer’s accuracy was 56.7 percent and 65.1 percent, respectively. This study provides a more objective and repeatable method for classifying marsh types of the middle and upper Texas coast at an extent and greater level of detail than previously available for the study area. The seamless classification produced through this work is now available to help State agencies (such as the Texas Parks and Wildlife Department) and landscape-scale conservation partnerships (such as the Gulf Coast Prairie Landscape Conservation Cooperative and the Gulf Coast Joint Venture) to develop and (or) refine conservation plans targeting priority natural resources. Moreover, these data may improve projections of landscape change and serve as a baseline for monitoring future changes resulting from chronic and episodic stressors.
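The kappa statistic reported above measures agreement between the classification and the reference data beyond what chance alone would produce. It can be computed directly from a confusion matrix (a standard re-implementation, not the authors' code):

```python
def cohens_kappa(confusion):
    # kappa = (observed agreement - chance agreement) / (1 - chance),
    # from a square confusion matrix (rows: reference class,
    # columns: mapped class).
    n = float(sum(sum(row) for row in confusion))
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    chance = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(k)
    )
    return (observed - chance) / (1.0 - chance)
```

Merging two easily confused classes, as was done here for intermediate and brackish marsh, shrinks the off-diagonal counts between them and so raises both overall accuracy and kappa.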
U.S. Fish and Wildlife Service 1979 wetland classification: a review
Cowardin, L.M.; Golet, F.C.
1995-01-01
In 1979 the US Fish and Wildlife Service published and adopted a classification of wetlands and deepwater habitats of the United States. The system was designed for use in a national inventory of wetlands. It was intended to be ecologically based, to furnish the mapping units needed for the inventory, and to provide national consistency in terminology and definition. We review the performance of the classification after 13 years of use. The definition of wetland is based on national lists of hydric soils and plants that occur in wetlands. Our experience suggests that wetland classifications must facilitate mapping and inventory because these data gathering functions are essential to management and preservation of the wetland resource, but the definitions and taxa must have ecological basis. The most serious problem faced in construction of the classification was lack of data for many of the diverse wetland types. Review of the performance of the classification suggests that, for the most part, it was successful in accomplishing its objectives, but that problem areas should be corrected and modification could strengthen its utility. The classification, at least in concept, could be applied outside the United States. Experience gained in use of the classification can furnish guidance as to pitfalls to be avoided in the wetland classification process.
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data in multi-band classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods, pixel-based and object-based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops.
The complex surface type involves almost all kinds of challenges, such as high-density built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using the pixel-based accuracy assessment, it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values range between 70% and 90% for the MLC and SVM, respectively. No substantial improvements were observed when the SVM and MLC classifications were developed with the addition of more variables instead of using only the four original bands. In the object-based accuracy assessment, it was demonstrated that while the MLC and SVM provide higher rates of correct detection, they also provide higher rates of false alarms.
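The two pixel-based metrics quoted above can be computed from the confusion counts of a building/non-building map. The formulas below are the definitions commonly used in building-extraction evaluation; the paper's exact formulas may differ:

```python
def detection_metrics(tp, fp, fn):
    # Percent building detection (PBD) ignores false alarms; quality
    # percent (QP) penalises them as well, so it drops when a
    # classifier over-detects.
    pbd = 100.0 * tp / (tp + fn)
    qp = 100.0 * tp / (tp + fp + fn)
    return pbd, qp
```

This pair of metrics captures the trade-off reported above: a classifier can hold a high detection rate while its quality collapses under false alarms.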
Jang, Cheng-Shin
2015-05-01
Accurately classifying the spatial features of the water temperatures and discharge rates of hot springs is crucial for environmental resource use and management. This study spatially characterized classifications of the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region of Northern Taiwan by using indicator kriging (IK). The water temperatures and discharge rates of the springs were first assigned to high, moderate, and low categories according to the two thresholds of the proposed spring classification criteria. IK was then used to model the occurrence probabilities of the water temperatures and discharge rates of the springs and probabilistically determine their categories. Finally, nine combinations were acquired from the probability-based classifications for the spatial features of the water temperatures and discharge rates of the springs. Moreover, various combinations of spring water features were examined according to seven subzones of spring use in the study region. The research results reveal that probability-based classifications using IK provide practicable insights related to propagating the uncertainty of classifications according to the spatial features of the water temperatures and discharge rates of the springs. The springs in the Beitou (BT), Xingyi Road (XYR), Zhongshanlou (ZSL), and Lengshuikeng (LSK) subzones are suitable for supplying tourism hotels with a sufficient quantity of spring water because they have high or moderate discharge rates. Furthermore, natural hot springs in riverbeds and valleys should be developed in the Dingbeitou (DBT), ZSL, Xiayoukeng (XYK), and Macao (MC) subzones because of low discharge rates and low or moderate water temperatures.
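The two-threshold, three-category scheme described above rests on indicator coding: each measurement is converted to 0/1 indicators at the class thresholds, and the kriged indicator values at unsampled locations act as probabilities of not exceeding each threshold. A minimal sketch of that coding and of the final category assignment (the kriging step itself is omitted):

```python
def indicator_code(value, low_thr, high_thr):
    # Indicator transform at the two class thresholds: these 0/1
    # codes are what indicator kriging interpolates spatially.
    return [1 if value <= low_thr else 0, 1 if value <= high_thr else 0]

def category_from_probabilities(p_le_low, p_le_high):
    # Turn kriged probabilities of not exceeding each threshold into
    # low/moderate/high category probabilities and pick the most
    # probable category.
    probs = {
        "low": p_le_low,
        "moderate": p_le_high - p_le_low,
        "high": 1.0 - p_le_high,
    }
    best = max(probs, key=probs.get)
    return best, probs
```

Doing this separately for temperature and discharge rate gives each location one of the nine temperature/discharge combinations used to characterize the subzones.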
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Morimoto, T.
1980-01-01
The author has identified the following significant results. Multispectral scanner data for Brasilia was corrected for atmospheric interference using the LOWTRAN-3 computer program and the analytical solution of the radiative transfer equation. This improved the contrast between two natural targets, and the corrected images from two different dates were more similar than the original ones. Corrected images of MSS data for Ribeirao Preto gave a classification accuracy for sugar cane about 10% higher than that of the original images.
Yaghoobi, Mohammad; Padol, Sara; Yuan, Yuhong; Hunt, Richard H
2010-05-01
The results of clinical trials with proton pump inhibitors (PPIs) are usually based on the Hetzel-Dent (HD), Savary-Miller (SM), or Los Angeles (LA) classifications to describe the severity and assess the healing of erosive oesophagitis. However, it is not known whether these classifications are comparable. The aim of this study was to review the literature systematically to compare the healing rates of erosive oesophagitis with PPIs in clinical trials assessed by the HD, SM, or LA classifications. A recursive, English-language literature search in PubMed and Cochrane databases to December 2006 was performed. Double-blind randomized controlled trials comparing a PPI with another PPI, an H2-RA, or placebo, using endoscopic assessment of the healing of oesophagitis by the HD, SM, or LA, or their modified classifications at 4 or 8 weeks, were included in the study. The healing rates on treatment with the same PPI(s) and the same endoscopic grade(s) were pooled and compared between different classifications using Fisher's exact test or the chi-squared test where appropriate. Forty-seven studies from 965 potential citations met the inclusion criteria. Seventy-eight PPI arms were identified, with 27 using HD, 29 using SM, and 22 using LA for five marketed PPIs. There were insufficient data for rabeprazole and esomeprazole (week 4 only) to compare because they were evaluated by only one classification. When data from all PPIs were pooled, regardless of baseline oesophagitis grades, the LA healing rate was significantly higher than SM and HD at both 4 and 8 weeks (74, 71, and 68% at 4 weeks and 89, 84, and 83% at 8 weeks, respectively). The distribution of different grades in the study population was available only for pantoprazole, where it was not significantly different between the LA and SM subgroups.
When analyzing data by PPI and dose, the LA classification showed a higher healing rate for omeprazole 20 mg/day and pantoprazole 40 mg/day (significant at 8 weeks), whereas healing by the SM classification was significantly higher for omeprazole 40 mg/day (no data for LA) and lansoprazole 30 mg/day at 4 and 8 weeks. The healing rate by individual oesophagitis grade was not always available or robust enough for meaningful analysis; however, a difference between classifications remained. There is a significant, but not always consistent, difference in oesophagitis healing rates with the same PPI(s) reported by the LA, SM, or HD classifications. The possible difference between grading classifications should be considered when interpreting or comparing healing rates for oesophagitis from different studies.
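The pooled comparison of healing rates can be sketched as a 2x2 contingency test; the counts below are illustrative stand-ins built from the abstract's 4-week rates of 74% (LA) and 71% (SM), with an assumed n = 1000 per arm that is not from the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

la = np.array([740, 260])   # healed, not healed under LA grading (illustrative)
sm = np.array([710, 290])   # healed, not healed under SM grading (illustrative)
table = np.vstack([la, sm])

# Chi-squared test of homogeneity between the two classifications.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test is the small-sample alternative named in the abstract.
odds_ratio, p_fisher = fisher_exact(table)
```

Whether the difference reaches significance depends on the real pooled sample sizes, which are much larger than the toy counts here.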
Hyperspectral analysis of seagrass in Redfish Bay, Texas
NASA Astrophysics Data System (ADS)
Wood, John S.
Multi- and hyperspectral remote sensing has long been used in resource management for a variety of purposes. In the studies that follow, hyperspectral imagery of Redfish Bay is used to discriminate between species of seagrasses found below the water surface. Water attenuates and reflects light and energy from the electromagnetic spectrum, and as a result, subsurface analysis can be more complex than that performed in the terrestrial world. In the following studies, an iterative process is developed using ENVI image processing software and ArcGIS software. Band selection was based on recommendations developed empirically in conjunction with ongoing research into depth corrections, which were applied to the imagery bands (a default depth of 65 cm was used). Polygons generated, classified, and aggregated within ENVI are reclassified in ArcGIS using field site data that were randomly selected for that purpose. After the first iteration, polygons that remain classified as 'Mixed' are subjected to another iteration of classification in ENVI, then brought into ArcGIS and reclassified. Finally, when that classification scheme is exhausted, a supervised classification is performed using a 'Maximum Likelihood' technique, which assigns the remaining polygons to the class most like the training polygons by digital number value. Producer's Accuracy by class ranged from 23.33% for the 'MixedMono' class to 66.67% for the 'Bare' class; User's Accuracy by class ranged from 22.58% for the 'MixedMono' class to 69.57% for the 'Bare' class. An overall accuracy of 37.93% was achieved. Producer's and User's Accuracies for Halodule were 29% and 39%, respectively; for Thalassia, they were 46% and 40%. Cohen's Kappa Coefficient was calculated at 0.2988.
We then returned to the field and collected spectral signatures of monotypic stands of seagrass at varying depths and at three sensor levels: above the water surface, just below the air/water interface, and at the canopy position, when it differed from the subsurface position. Analysis of plots of these spectral curves, after applying depth corrections and Multiplicative Scatter Correction, indicates that there are detectable spectral differences between Halodule and Thalassia species at all three positions. Further analysis indicated that only above-surface spectral signals could reliably be used to discriminate between species, because there was an overlap of the standard deviations in the other two positions. A recommendation for wavelengths that would produce increased accuracy in hyperspectral image analysis was made, based on areas where there is a significant amount of difference between the mean spectral signatures, and no overlap of the standard deviations in our samples. The original hyperspectral imagery was reprocessed, using the bands recommended from the research above (approximately 535, 600, 620, 638, and 656 nm). A depth raster was developed from various available sources, which was resampled and reclassified to reflect values for water absorption and water scattering, which were then applied to each band using the depth correction algorithm. Processing followed the iterative classification methods described above. Accuracy for this round of processing improved; overall accuracy increased from 38% to 57%. Improvements were noted in Producer's Accuracy, with the 'Bare' classification increasing from 67% to 73%, Halodule increasing from 29% to 63%, Thalassia increasing slightly, from 46% to 50%, and 'MixedMono' improving from 23% to 42%. User's Accuracy also improved, with the 'Bare' class increasing from 69% to 70%, Halodule increasing from 39% to 67%, Thalassia increasing from 40% to 7%, and 'MixedMono' increasing from 22.5% to 35%.
A very recent report shows the mean percent cover of seagrasses in Redfish Bay and Corpus Christi Bay combined for all species at 68.6%, and individually by species: Halodule 39.8%, Thalassia 23.7%, Syringodium 4%, Ruppia 1% and Halophila 0.1%. Our study classifies 15% as 'Bare', 23% Halodule, 18% Thalassia, and 2% Ruppia. In addition, we classify 5% as 'Mixed', 22% as 'MixedMono', 12% as 'Bare/Halodule Mix', and 3% 'Bare/Thalassia Mix'. Aggregating the 'Bare' and 'Bare/species' classes would equate to approximately 30%, very close to what this new study produces. Other classes are quite similar, when considering that their study includes no 'Mixed' classifications. This series of research studies illustrates the application and utility of hyperspectral imagery and associated processing to mapping shallow benthic habitats. It also demonstrates that the technology is rapidly changing and adapting, which will lead to even further increases in accuracy. Future studies with hyperspectral imaging should include extensive spectral field collection, and the application of a depth correction.
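The accuracy metrics quoted throughout these studies (overall, producer's, user's, and Cohen's kappa) all derive from a confusion matrix; a minimal sketch on an illustrative two-class matrix (not the study's data), with rows as the classified map and columns as the reference field data:

```python
import numpy as np

cm = np.array([[50, 10],    # illustrative confusion matrix:
               [20, 20]])   # rows = map classes, columns = reference classes
n = cm.sum()

overall = np.trace(cm) / n                 # overall accuracy
users = np.diag(cm) / cm.sum(axis=1)       # correct / map (row) totals
producers = np.diag(cm) / cm.sum(axis=0)   # correct / reference (column) totals

# Cohen's kappa: observed agreement corrected for chance agreement p_e.
p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - p_e) / (1 - p_e)
```

Producer's accuracy measures omission error (how well reference samples were found), while user's accuracy measures commission error (how reliable the map's labels are); the two can diverge sharply, as they do for the 'MixedMono' class above.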
Calès, P; Boursier, J; Lebigot, J; de Ledinghen, V; Aubé, C; Hubert, I; Oberti, F
2017-04-01
In chronic hepatitis C, the European Association for the Study of the Liver and the Asociacion Latinoamericana para el Estudio del Higado recommend performing transient elastography plus a blood test to diagnose significant fibrosis; test concordance confirms the diagnosis. To validate this rule and improve it by combining a blood test, FibroMeter (virus second generation, Echosens, Paris, France) and transient elastography (constitutive tests) into a single combined test, as suggested by the American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. A total of 1199 patients were included in an exploratory set (HCV, n = 679) or in two validation sets (HCV ± HIV, HBV, n = 520). Accuracy was mainly evaluated by correct diagnosis rate for severe fibrosis (pathological Metavir F ≥ 3, primary outcome) by classical test scores or a fibrosis classification, reflecting Metavir staging, as a function of test concordance. Score accuracy: there were no significant differences between the blood test (75.7%), elastography (79.1%) and the combined test (79.4%) (P = 0.066); the score accuracy of each test was significantly (P < 0.001) decreased in discordant vs. concordant tests. Classification accuracy: combined test accuracy (91.7%) was significantly (P < 0.001) increased vs. the blood test (84.1%) and elastography (88.2%); accuracy of each constitutive test was significantly (P < 0.001) decreased in discordant vs. concordant tests but not with combined test: 89.0 vs. 92.7% (P = 0.118). Multivariate analysis for accuracy showed an interaction between concordance and fibrosis level: in the 1% of patients with full classification discordance and severe fibrosis, non-invasive tests were unreliable. The advantage of combined test classification was confirmed in the validation sets. The concordance recommendation is validated. 
A combined test, expressed in classification instead of score, improves this rule and validates the recommendation of a combined test, avoiding 99% of biopsies, and offering precise staging. © 2017 John Wiley & Sons Ltd.
Ben Chaabane, Salim; Fnaiech, Farhat
2014-01-23
Color image segmentation has been applied in many areas, and many different techniques have recently been developed and proposed. In medical imaging, segmentation can assist physicians in following up a patient's disease from processed breast cancer images. The main objective of this work is to rebuild and enhance each cell from the three component images provided by an input image. Starting from an initial segmentation obtained using statistical features and histogram-threshold techniques, the resulting segmentation can accurately represent incomplete and merged cells and enhance them. This gives real help to doctors, as the cells become clear and easy to count. A novel method for color edge extraction based on statistical features and automatic thresholding is presented. The traditional edge detector, based on first- and second-order neighborhoods describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Color edges in an image are thus obtained by combining the statistical features and automatic threshold techniques. Finally, on the obtained color edges with a specific primitive color, a combination rule is used to integrate the edge results over the three color components. Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. A visual and a numerical assessment based on the probability of correct classification (PC), the probability of false classification (Pf), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in detecting points that truly belong to the cells, and it facilitates counting the number of processed cells.
Computer simulations show that the proposed method substantially enhances the segmented image, with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching 97.94%. The segmentation method may also be extended to other medical imaging modalities with similar properties.
Character recognition using a neural network model with fuzzy representation
NASA Technical Reports Server (NTRS)
Tavakoli, Nassrin; Seniw, David
1992-01-01
The degree to which digital images are recognized correctly by computerized algorithms is highly dependent upon the representation and classification processes, and fuzzy techniques play an important role in both. In this paper, the role of fuzzy representation and classification in the recognition of digital characters is investigated. An experimental neural network model for character recognition was developed, and through a set of experiments the effect of fuzzy representation on the recognition accuracy of this model is presented.
Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J
2009-01-01
Pressure ulcers are common problems for bedridden patients. Caregivers need to reposition the sleeping posture of a patient every two hours in order to reduce the risk of ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA), and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress, which can help caregivers make correct sleeping posture changes for the prevention of pressure ulcers.
Deployment and Performance of the NASA D3R During the GPM OLYMPEx Field Campaign
NASA Technical Reports Server (NTRS)
Chandrasekar, V.; Beauchamp, Robert M.; Chen, Haonan; Vega, Manuel; Schwaller, Mathew; Willie, Delbert; Dabrowski, Aaron; Kumar, Mohit; Petersen, Walter; Wolff, David
2016-01-01
The NASA D3R was successfully deployed and operated throughout the NASA OLYMPEx field campaign. A differential phase based attenuation correction technique has been implemented for D3R observations. Hydrometeor classification has been demonstrated for five distinct classes using Ku-band observations of both convection and stratiform rain. The stratiform rain hydrometeor classification is compared against LDR observations and shows good agreement in identification of mixed-phase hydrometeors in the melting layer.
NASA Astrophysics Data System (ADS)
Pérez Rosas, Osvaldo G.; Rivera Martínez, José L.; Maldonado Cano, Luis A.; López Rodríguez, Mario; Amaya Reyes, Laura M.; Cano Martínez, Elizabeth; García Vázquez, Mireya S.; Ramírez Acosta, Alejandro A.
2017-09-01
The automatic identification and classification of musical genres based on the sound similarities that form musical textures is a very active research area. In this context, genre recognition systems have been built from time-frequency feature extraction methods combined with classification methods, and the choice of these methods is important for a well-performing recognition system. In this article we propose Mel-frequency cepstral coefficients (MFCC) as the feature extractor and support vector machines (SVM) as the classifier for our system. The MFCC parameters established through our time-frequency analysis represent the range of Mexican musical genres covered in this article. For a musical genre classification system to be precise, the descriptors must represent the correct spectrum of each genre; achieving this requires a correct parameterization of the MFCC, such as the one we present here. With the developed system we obtain satisfactory detection results: the lowest genre identification rate was 66.67% and the highest was 100%.
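A compact sketch of MFCC extraction of the kind such a system relies on, written as a standard HTK-style recipe in NumPy; the frame size, hop, filter count, and coefficient count below are common illustrative choices, not the parameterization tuned in the article.

```python
import numpy as np

def hz_to_mel(f):   # HTK mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=22050, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # Frame the signal, window it, and take the power spectrum per frame.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hamming(n_fft), axis=1)) ** 2

    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_energy = np.log(power @ fbank.T + 1e-10)

    # DCT-II decorrelates the log filterbank energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T

feats = mfcc(np.sin(2 * np.pi * 440 * np.arange(22050) / 22050))  # 1 s test tone
```

Each row of `feats` is one frame's 13-coefficient descriptor; in a genre system these frame-level vectors (or their statistics) are what the SVM is trained on.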
Gait Phase Recognition for Lower-Limb Exoskeleton with Only Joint Angular Sensors
Liu, Du-Xin; Wu, Xinyu; Du, Wenbin; Wang, Can; Xu, Tiantian
2016-01-01
Gait phase is widely used for gait trajectory generation, gait control, and gait evaluation on lower-limb exoskeletons. So far, a variety of methods have been developed to identify the gait phase for lower-limb exoskeletons. Angular sensors on lower-limb exoskeletons are essential for closed-loop joint control; however, other types of sensors, such as plantar pressure, attitude, or inertial measurement units, are not indispensable. Therefore, to make full use of existing sensors, we propose a novel gait phase recognition method for lower-limb exoskeletons using only joint angular sensors. The method consists of two procedures. First, the gait deviation distances during walking are calculated and classified by Fisher's linear discriminant method, and one gait cycle is divided into eight gait phases. The validity of the classification results is also verified on large gait samples. Second, we build a gait phase recognition model based on a multilayer perceptron and train it with the phase-labeled gait data. The experimental result of cross-validation shows that the model has a 94.45% average correct rate of set (CRS) and an 87.22% average correct rate of phase (CRP) on the testing set, and it can predict the gait phase accurately. The novel method avoids installing additional sensors on the exoskeleton or human body and simplifies the sensory system of the lower-limb exoskeleton. PMID:27690023
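The Fisher's linear discriminant step can be sketched on toy two-class data; the synthetic arrays below stand in for gait-deviation features and are not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal([0.0, 0.0], 0.5, (100, 2))   # class 1 samples (synthetic)
B = rng.normal([2.0, 2.0], 0.5, (100, 2))   # class 2 samples (synthetic)

# Within-class scatter matrix, pooled over both classes.
Sw = np.cov(A.T) * (len(A) - 1) + np.cov(B.T) * (len(B) - 1)

# Fisher direction w = Sw^{-1} (m2 - m1), then a midpoint decision rule.
w = np.linalg.solve(Sw, B.mean(0) - A.mean(0))
threshold = w @ (A.mean(0) + B.mean(0)) / 2

accuracy = ((A @ w < threshold).sum() + (B @ w > threshold).sum()) / 200
```

Projecting onto `w` maximizes between-class separation relative to within-class spread, which is why a single threshold on the projection suffices for each pairwise phase split.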
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
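The ECOC framework underlying both reviewed approaches can be illustrated with a minimal sketch: an exhaustive 7-bit code for 4 classes, simple centroid dichotomizers standing in for the binary classifiers, and plain Hamming-distance decoding (the weighted and probabilistic decoding of the reviewed methods is not reproduced here).

```python
import numpy as np

# Synthetic 4-class data: well-separated Gaussian clusters.
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
X = np.vstack([c + rng.normal(0, 0.5, (40, 2)) for c in centers])
y = np.repeat(np.arange(4), 40)

# Exhaustive code matrix for 4 classes (rows = classes, cols = dichotomies).
code = np.array([[0, 0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1, 1],
                 [0, 1, 1, 0, 0, 1, 1],
                 [1, 0, 1, 0, 1, 0, 1]])

def fit_binary(X, b):   # toy dichotomizer: one centroid per relabeled side
    return X[b == 0].mean(0), X[b == 1].mean(0)

def predict_binary(model, X):
    m0, m1 = model
    return (np.linalg.norm(X - m1, axis=1) < np.linalg.norm(X - m0, axis=1)).astype(int)

# One binary classifier per code column; decode by minimum Hamming distance.
models = [fit_binary(X, code[y, j]) for j in range(code.shape[1])]
bits = np.stack([predict_binary(m, X) for m in models], axis=1)
pred = np.argmin([np.abs(bits - c).sum(1) for c in code], axis=0)
accuracy = (pred == y).mean()
```

The exhaustive code has a minimum Hamming distance of 4 between class codewords, so decoding tolerates one misfiring dichotomizer per sample, which is the error-correcting property the reviewed methods build on.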
Support Vector Machines for Hyperspectral Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony; Cromp, R. F.
1998-01-01
The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96%, and 87% correct for a 4 class problem, and a 16 class problem respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.
Low, Gary Kim-Kuan; Ogston, Simon A; Yong, Mun-Hin; Gan, Seng-Chiew; Chee, Hui-Yee
2018-06-01
Since the introduction of the 2009 WHO dengue case classification, no literature has examined its effect on dengue deaths. This study evaluated the effect of the 2009 WHO dengue case classification on the dengue case fatality rate. Various databases were searched for relevant articles published since 1995. Studies included were cohort and cross-sectional studies of patients with dengue infection that reported the number of deaths or the case fatality rate. The Joanna Briggs Institute appraisal checklist was used to evaluate the risk of bias of the full texts. The studies were grouped according to the classification adopted: WHO 1997 and WHO 2009. Meta-regression was employed using a logistic transformation (log-odds) of the case fatality rate, yielding adjusted case fatality rates and odds ratios on the explanatory variables. A total of 77 studies were included in the meta-regression analysis. The case fatality rate for all studies combined was 1.14%, with a 95% confidence interval (CI) of 0.82-1.58%. The combined (unadjusted) case fatality rate for the 69 studies that adopted the WHO 1997 dengue case classification was 1.09% (95% CI: 0.77-1.55%), and for the eight studies using WHO 2009 it was 1.62% (95% CI: 0.64-4.02%). The unadjusted and adjusted odds ratios of case fatality using the WHO 2009 dengue case classification were 1.49 (95% CI: 0.52, 4.24) and 0.83 (95% CI: 0.26, 2.63), respectively, compared with the WHO 1997 classification. There was an apparent increasing trend in the case fatality rate from 1992 to 2016. Neither result was statistically significant. The WHO 2009 dengue case classification may have no effect on the case fatality rate, although the adjusted results indicated a lower case fatality rate. Future studies are required to update the meta-regression analysis and confirm the findings. Copyright © 2018 Elsevier B.V. All rights reserved.
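The logistic (log-odds) transformation used in the meta-regression works as follows; a minimal sketch using the pooled 1.14% rate and the unadjusted odds ratio of 1.49 quoted in the abstract, applied here purely for illustration.

```python
import math

def logit(p):
    """Map a rate p in (0, 1) to the log-odds scale."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Back-transform log-odds to a rate."""
    return 1.0 / (1.0 + math.exp(-x))

cfr = 0.0114                       # pooled case fatality rate (1.14%)
log_odds = logit(cfr)              # rate on the scale the regression uses

# An effect estimated on the logit scale shifts the odds multiplicatively:
# adding log(OR) to the log-odds applies the odds ratio to the rate.
adjusted = inv_logit(log_odds + math.log(1.49))
```

Working on the logit scale keeps fitted rates inside (0, 1) and makes covariate effects interpretable as odds ratios, which is why the meta-regression reports both adjusted rates and ORs.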
Hentschel, Annett G; John Livesley, W
2013-05-01
Criteria to differentiate personality disorder from extremes of normal personality variation are important given growing interest in dimensional classification, because an extreme level of a personality dimension does not necessarily indicate disorder. The DSM-5 proposed classification of personality disorder offers a definition of general personality disorder based on chronic interpersonal and self/identity pathology. The ability of this approach to differentiate personality disorder from other mental disorders was evaluated using a self-report questionnaire, the General Assessment of Personality Disorder (GAPD). This measure was administered to a sample of psychiatric patients (N = 149) from different clinical sub-sites. Patients were divided into personality disordered and non-personality disordered groups on the basis of the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II). The results showed a hit rate of 82% correctly identified patients and good accuracy of the prediction model. There was substantial agreement between the SCID-II interview and GAPD personality disorder diagnoses. The GAPD appears to predict personality disorder in general, which supports the DSM-5 general diagnostic criteria for personality disorder. Copyright © 2012 John Wiley & Sons, Ltd.
Multiclass Classification of Cardiac Arrhythmia Using Improved Feature Selection and SVM Invariants.
Mustaqeem, Anam; Anwar, Syed Muhammad; Majid, Muahammad
2018-01-01
Arrhythmia is considered a life-threatening disease causing serious health issues in patients when left untreated. An early diagnosis of arrhythmias would be helpful in saving lives. This study classifies patients into one of sixteen subclasses, among which one class represents absence of disease and the other fifteen represent electrocardiogram records of various subtypes of arrhythmia. The research is carried out on the dataset taken from the University of California at Irvine Machine Learning Data Repository. The dataset contains a large number of feature dimensions, which are reduced using a wrapper-based feature selection technique. For multiclass classification, support vector machine (SVM) based approaches including one-against-one (OAO), one-against-all (OAA), and error-correcting code (ECC) are employed to detect the presence and absence of arrhythmias. The SVM results are compared with other standard machine learning classifiers using varying parameters, and the performance of the classifiers is evaluated using accuracy, kappa statistics, and root mean square error. The results show that the OAO method of SVM outperforms all other classifiers, achieving an accuracy rate of 81.11% with an 80/20 data split and 92.07% with a 90/10 data split.
2009-08-11
This final rule updates the payment rates used under the prospective payment system (PPS) for skilled nursing facilities (SNFs), for fiscal year (FY) 2010. In addition, it recalibrates the case-mix indexes so that they more accurately reflect parity in expenditures related to the implementation of case-mix refinements in January 2006. It also discusses the results of our ongoing analysis of nursing home staff time measurement data collected in the Staff Time and Resource Intensity Verification project, as well as a new Resource Utilization Groups, version 4 case-mix classification model for FY 2011 that will use the updated Minimum Data Set 3.0 resident assessment for case-mix classification. In addition, this final rule discusses the public comments that we have received on these and other issues, including a possible requirement for the quarterly reporting of nursing home staffing data, as well as on applying the quality monitoring mechanism in place for all other SNF PPS facilities to rural swing-bed hospitals. Finally, this final rule revises the regulations to incorporate certain technical corrections.
Macaluso, P J
2011-02-01
Digital photogrammetric methods were used to collect diameter, area, and perimeter data of the acetabulum for a twentieth-century skeletal sample from France (Georges Olivier Collection, Musée de l'Homme, Paris) consisting of 46 males and 36 females. The measurements were then subjected to both discriminant function and logistic regression analyses in order to develop osteometric standards for sex assessment. Univariate discriminant functions and logistic regression equations yielded overall correct classification accuracy rates for both the left and the right acetabula ranging from 84.1% to 89.6%. The multivariate models developed in this study did not provide increased accuracy over those using only a single variable. Classification sex bias ratios ranged between 1.1% and 7.3% for the majority of models. The results of this study, therefore, demonstrate that metric analysis of acetabular size provides a highly accurate, and easily replicable, method of discriminating sex in this documented skeletal collection. The results further suggest that the addition of area and perimeter data derived from digital images may provide a more effective method of sex assessment than that offered by traditional linear measurements alone. Copyright © 2010 Elsevier GmbH. All rights reserved.
Ishwaran, Hemant; Lu, Min
2018-06-04
Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
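The subsampling idea can be sketched generically: compute a statistic on many size-b subsamples drawn without replacement and rescale its variance by b/n. This toy uses the sample mean rather than forest VIMP, so only the estimator's structure, not the authors' forest-specific machinery, is shown.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)   # synthetic data standing in for VIMP inputs

def subsample_var(x, stat, b, n_sub=200, rng=rng):
    """Subsampling estimate of Var(stat) at full sample size n."""
    n = len(x)
    thetas = np.array([stat(rng.choice(x, size=b, replace=False))
                       for _ in range(n_sub)])
    # The statistic's variance on size-b subsamples scales roughly as 1/b,
    # so multiplying by b/n rescales it to the full sample size n.
    return (b / n) * thetas.var(ddof=1)

v = subsample_var(x, np.mean, b=100)
# For the mean of n = 1000 iid N(0, 1) draws, the target variance is ~1/1000.
```

The same recipe gives confidence intervals by treating the rescaled subsample distribution as an approximation to the statistic's sampling distribution, and it needs only repeated evaluations of the statistic, which is what makes it attractive for expensive quantities like VIMP.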
Polyphonic sonification of electrocardiography signals for diagnosis of cardiac pathologies
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Hermann, Thomas; Bukschat, Yannick; Kramer, Tilmann; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-03-01
Electrocardiography (ECG) data are multidimensional temporal data with ubiquitous applications in the clinic. Conventionally, these data are presented visually. It is presently unclear to what degree data sonification (auditory display), can enable the detection of clinically relevant cardiac pathologies in ECG data. In this study, we introduce a method for polyphonic sonification of ECG data, whereby different ECG channels are simultaneously represented by sound of different pitch. We retrospectively applied this method to 12 samples from a publicly available ECG database. We and colleagues from our professional environment then analyzed these data in a blinded way. Based on these analyses, we found that the sonification technique can be intuitively understood after a short training session. On average, the correct classification rate for observers trained in cardiology was 78%, compared to 68% and 50% for observers not trained in cardiology or not trained in medicine at all, respectively. These values compare to an expected random guessing performance of 25%. Strikingly, 27% of all observers had a classification accuracy over 90%, indicating that sonification can be very successfully used by talented individuals. These findings can serve as a baseline for potential clinical applications of ECG sonification.
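A toy version of polyphonic sonification, with each channel mapped to a carrier of its own pitch; the pitches, the envelope mapping, and the stand-in channel signals are illustrative assumptions, not the paper's mapping.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr                            # one second of audio
channels = np.stack([np.sin(2 * np.pi * 1.2 * t),  # stand-in "ECG" traces
                     np.cos(2 * np.pi * 1.2 * t)]) # (~72 bpm periodicity)
pitches = [440.0, 660.0]                           # one carrier pitch per channel

audio = np.zeros_like(t)
for ch, f in zip(channels, pitches):
    envelope = 0.5 + 0.5 * ch                      # map [-1, 1] -> [0, 1]
    audio += envelope * np.sin(2 * np.pi * f * t)  # amplitude-modulated carrier
audio /= len(pitches)                              # normalize the polyphonic mix
```

Because each channel owns a distinct pitch, a listener can attend to the channels separately or jointly, which is the property the study exploits for detecting pathologies by ear.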
Binning in Gaussian Kernel Regularization
2005-04-01
Using the OSU-SVM Matlab package, the SVM trained on 966 bins has a test classification rate (71.40% on 966 randomly sampled data) comparable to that of the SVM trained on all 27,179 samples, while substantially reducing the computational cost.
Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.
2012-01-01
Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and a brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study develops quantification tools, including MR-based AC, for combined MR/PET brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images; it was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered-sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET data with [11C]PIB were acquired on a high-resolution research tomograph (HRRT). MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR- and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
PMID:23039679
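The tissue-classification step above uses a modified fuzzy C-means scheme; as a rough illustration, a minimal sketch of the standard (unmodified) fuzzy C-means update on 1-D voxel intensities follows. The parameter choices (fuzzifier m, centroid initialization by quantiles) are illustrative, not the paper's:

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, n_iter=100, tol=1e-6):
    """Standard fuzzy C-means on 1-D intensities x.

    Returns the membership matrix u (len(x) x c) and the c cluster centers,
    e.g. one cluster each for gray matter, white matter, and CSF intensities.
    """
    # Spread initial centroids across the intensity distribution.
    centers = np.quantile(x, (np.arange(c) + 0.5) / c)
    for _ in range(n_iter):
        # Distances to each centroid (small floor avoids division by zero).
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
        # Centroid update: membership-weighted mean of the intensities.
        um = u ** m
        new_centers = (um.T @ x) / um.sum(axis=0)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return u, centers
```

Each voxel's hardened label is simply the argmax of its membership row; the paper then maps each tissue label to an attenuation coefficient.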
McLeod, Adam; Bochniewicz, Elaine M; Lum, Peter S; Holley, Rahsaan J; Emmer, Geoff; Dromerick, Alexander W
2016-02-01
Objective: To improve measurement of upper extremity (UE) use in the community by evaluating the feasibility of using body-worn sensor data and machine learning models to distinguish productive prehensile and bimanual UE activity from extraneous movements associated with walking. Design: Comparison of machine learning classification models with a criterion standard of manually scored videos of performance in UE prosthesis users. Setting: Rehabilitation hospital training apartment. Participants: Convenience sample of UE prosthesis users (n=5) and controls (n=13) similar in age and hand dominance (N=18). Methods: Participants were filmed executing a series of functional activities; a trained observer annotated each frame as indicating either UE movement directed at a functional activity or walking. Synchronized data from an inertial sensor attached to the dominant wrist were classified in the same way. These data were used to train 3 classification models to predict the functional versus walking state from the associated sensor information. Models were trained over 4 trials: on UE amputees and controls, both within subject and across subjects. Model performance was also examined with and without preprocessing (centering) in the across-subject trials. Main Outcome Measure: Percent correct classification. Results: With the exception of the amputee/across-subject trial, at least 1 model classified >95% of test data correctly for all trial types. The top performer in the amputee/across-subject trial classified 85% of test examples correctly. Conclusions: We have demonstrated that computationally lightweight classification models can use inertial data collected from wrist-worn sensors to reliably distinguish prosthetic UE movements during functional use from walking-associated movement. This approach holds promise for objectively measuring real-world UE use of prosthetic limbs and may be helpful in clinical trials and in measuring response to treatment of other UE pathologies.
Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
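The abstract does not specify the features or models used, but the pipeline it describes (windowed wrist-accelerometer features fed to a lightweight classifier) can be sketched as follows. The window length, feature set, and nearest-centroid model here are assumptions for illustration, not the paper's choices:

```python
import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """Per-window mean, standard deviation, and peak spectral magnitude for
    each accelerometer axis. acc has shape (n_samples, 3)."""
    n = int(fs * win_s)
    feats = []
    for start in range(0, len(acc) - n + 1, n):
        w = acc[start:start + n]
        # Spectrum of the de-meaned window: walking shows a strong gait peak.
        spec = np.abs(np.fft.rfft(w - w.mean(axis=0), axis=0))
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), spec.max(axis=0)]))
    return np.array(feats)

class NearestCentroid:
    """A deliberately lightweight classifier: one centroid per class."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.classes[np.argmin(d, axis=1)]
```

On synthetic data, periodic walking windows separate cleanly from aperiodic manipulation windows because of the spectral-peak and variance features.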
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paudel, M. R.; Mackenzie, M.; Rathee, S.
2013-08-15
Purpose: To evaluate the metal artifacts in kilovoltage computed tomography (kVCT) images that are corrected using a normalized metal artifact reduction (NMAR) method with megavoltage CT (MVCT) prior images. Methods: Tissue characterization phantoms containing bilateral steel inserts are used in all experiments. Two MVCT images, one without any metal artifact corrections and the other corrected using a modified iterative maximum likelihood polychromatic algorithm for CT (IMPACT), are translated to pseudo-kVCT images. These are then used as prior images, without tissue classification, in an NMAR technique for correcting the experimental kVCT image. The IMPACT method in MVCT included an additional model for the pair/triplet production process and the energy-dependent response of the MVCT detectors. An experimental kVCT image, without the metal inserts and reconstructed using the filtered back projection (FBP) method, is artificially patched with the known steel inserts to obtain a reference image. The regular NMAR image containing the steel inserts, which uses a tissue-classified kVCT prior, and the NMAR images reconstructed using MVCT priors are compared with the reference image for metal artifact reduction. The Eclipse treatment planning system is used to calculate radiotherapy dose distributions on the corrected images and on the reference image using the Anisotropic Analytical Algorithm with 6 MV parallel-opposed 5 × 10 cm² fields passing through the bilateral steel inserts, and the results are compared. Gafchromic film is used to measure the actual dose delivered in a plane perpendicular to the beams at the isocenter. Results: The streaking and shading in the NMAR image using tissue classification are significantly reduced. However, the structures, including metal, are deformed. Some uniform regions appear to have eroded from one side. There is a large variation of attenuation values inside the metal inserts.
Similar results are seen in the commercially corrected image. Use of MVCT prior images without tissue classification in NMAR significantly reduces these problems. The radiation dose calculated on the reference image is close to the dose measured using the film. Compared to the reference image, the calculated dose difference in the conventional NMAR image, the corrected image using the uncorrected MVCT image as prior, and the corrected image using the IMPACT-corrected MVCT image as prior is ∼15.5%, ∼5%, and ∼2.7%, respectively, at the isocenter. Conclusions: The deformation and erosion of structures present in regular NMAR-corrected images can be largely reduced by using MVCT priors without tissue segmentation. Because the attenuation value of the metal is incorrect in the conventional NMAR image, large dose differences relative to the true value can result. This difference can be significantly reduced if MVCT images are used as priors. Reduced tissue deformation, better tissue visualization, and correct information about the electron density of the tissues and metals in the artifact-corrected images could help delineate structures better and calculate radiation dose more correctly, thus enhancing the quality of radiotherapy treatment planning.
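The core NMAR operation can be sketched in the sinogram domain: normalize the measured sinogram by the forward projection of a prior image, interpolate across the metal trace where the normalized data are nearly flat, then denormalize. The sketch below assumes the sinograms and metal trace are already available (the forward/back-projection steps and the construction of the MVCT-derived prior are omitted):

```python
import numpy as np

def nmar_sinogram(sino, prior_sino, metal_trace, eps=1e-6):
    """Normalized metal artifact reduction in the sinogram domain.

    sino, prior_sino: (angles, detectors) arrays; metal_trace: boolean mask of
    detector bins shadowed by metal at each angle.
    """
    # Normalization flattens the sinogram wherever the prior is accurate.
    norm = sino / (prior_sino + eps)
    out = norm.copy()
    for i in range(sino.shape[0]):            # each projection angle
        bad = metal_trace[i]
        if bad.any() and not bad.all():
            idx = np.arange(sino.shape[1])
            # Linear interpolation across the (now nearly flat) metal trace.
            out[i, bad] = np.interp(idx[bad], idx[~bad], norm[i, ~bad])
    # Denormalize to restore the prior's structure outside the metal trace.
    return out * (prior_sino + eps)
```

When the prior matches the true object up to a smooth factor, interpolation in the normalized domain is nearly exact, which is why a good (here, MVCT-derived) prior matters so much.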
Shippentower, Gene E.; Schreck, Carl B.; Heppell, Scott A.
2011-01-01
We sought to determine whether a strontium chloride injection could be used to create a transgenerational otolith mark in steelhead Oncorhynchus mykiss. Two strontium injection trials and a survey of strontium:calcium (Sr:Ca) ratios in juvenile steelhead from various steelhead hatcheries were conducted to test the feasibility of the technique. In both trials, progeny of fish injected with strontium had significantly higher Sr:Ca ratios in the primordial region of their otoliths, as measured by a wavelength-dispersive electron microprobe. In trial 1, the 5,000-mg/L treatment level showed that 56.8% of the otoliths were correctly classified, with 12.2% misclassified as belonging to the 0-mg/L treatment. In trial 2, the 20,000-mg/L treatment level showed that 30.8% of the otoliths were correctly classified, with 13.5% misclassified as belonging to the 0-mg/L treatment. There were no differences in the fertilization rates of eggs or survival rates of fry between the treatment and control groups. The Sr:Ca ratios in otoliths collected from various hatchery populations of steelhead varied and were greater than those found in otoliths from control fish in both of our injection trials. This study suggests that the marking technique led to recognizable increases in Sr:Ca ratios in some otoliths collected from fry produced by injected females. Not all progeny showed such increases, however, suggesting that the method holds promise but requires further refinement to reduce variation. Overall, correct classification was about 40% across all treatments and trials; the variation in Sr:Ca ratios found among experimental trials and hatcheries indicates that care must be taken if the technique is employed where fish from more than one hatchery could be involved.
Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.
2008-01-01
Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © Copyright 2008 by the American Fisheries Society.
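A presence-absence classification tree of the kind described can be illustrated with a minimal CART implementation (Gini impurity, axis-aligned splits). The predictor names and the synthetic species rule in the usage test are hypothetical, not taken from the study:

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary presence-absence vector."""
    p = np.mean(y)
    return 2 * p * (1 - p)

def build_tree(X, y, depth=0, max_depth=3, min_leaf=5):
    """Recursively grow a binary classification tree on landscape predictors."""
    if depth == max_depth or len(y) < 2 * min_leaf or gini(y) == 0:
        return {"leaf": float(np.mean(y) >= 0.5)}
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:      # candidate split thresholds
            left = X[:, j] <= t
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            # Weighted impurity of the two child nodes.
            score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
            if best is None or score < best[0]:
                best = (score, j, t, left)
    if best is None:
        return {"leaf": float(np.mean(y) >= 0.5)}
    _, j, t, left = best
    return {"feat": j, "thr": t,
            "lo": build_tree(X[left], y[left], depth + 1, max_depth, min_leaf),
            "hi": build_tree(X[~left], y[~left], depth + 1, max_depth, min_leaf)}

def predict(tree, x):
    """Route one site's predictor vector down the tree to a presence/absence leaf."""
    while "leaf" not in tree:
        tree = tree["lo"] if x[tree["feat"]] <= tree["thr"] else tree["hi"]
    return tree["leaf"]
```

Validation on an independent data set, as in the study, just means scoring `predict` against held-out sites.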
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method resulted in an accuracy of 91.44% correctly classified vessel pixels as either artery or vein. The accuracy of correctly classified major vessel segments was 96.42%.
Analysis of thematic mapper simulator data collected over eastern North Dakota
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1982-01-01
The results of the analysis of aircraft-acquired thematic mapper simulator (TMS) data, collected to investigate the utility of thematic mapper data in crop area and land cover estimates, are discussed. Results of the analysis indicate that the seven-channel TMS data are capable of delineating the 13 crop types included in the study to an overall pixel classification accuracy of 80.97% correct, with relative efficiencies for the four crop types examined ranging from 1.62 to 26.61. Both supervised and unsupervised spectral signature development techniques were evaluated. The unsupervised methods proved to be inferior (based on analysis of variance) for the majority of crop types considered. Given the ground truth data set used for spectral signature development as well as for evaluation of performance, it is possible to demonstrate which signature development technique would produce the highest percent correct classification for each crop type.
Quantifying color variation: Improved formulas for calculating hue with segment classification.
Smith, Stacey D
2014-03-01
Differences in color form a major component of biological variation, and quantifying these differences is the first step to understanding their evolutionary and ecological importance. One common method for measuring color variation is segment classification, which uses three variables (chroma, hue, and brightness) to describe the height and shape of reflectance curves. This study provides new formulas for calculating hue (the variable that describes the "type" of color) to give correct values in all regions of color space. • Reflectance spectra were obtained from the literature, and chroma, hue, and brightness were computed for each spectrum using the original formulas as well as the new formulas. Only the new formulas result in correct values in the blue-green portion of color space. • Use of the new formulas for calculating hue will result in more accurate color quantification for a broad range of biological applications.
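The segment-classification variables described above can be sketched as follows: the visible spectrum is split into four equal segments, and hue is computed with atan2 so that every quadrant of segment color space (including the blue-green region the study addresses) yields a correct angle. The segment-difference convention below is one common form of Endler's method, not necessarily the paper's exact formulas:

```python
import numpy as np

def segment_color(wavelengths, reflectance):
    """Chroma and hue (degrees, 0-360) from segment classification.

    Splits the measured wavelength range into four equal segments
    (short, mid-short, mid-long, long) and compares their relative brightness.
    """
    edges = np.linspace(wavelengths.min(), wavelengths.max(), 5)
    # Assign each wavelength sample to one of the four segments.
    seg = np.minimum(np.searchsorted(edges, wavelengths, side="right") - 1, 3)
    q = np.array([reflectance[seg == k].sum() for k in range(4)])
    q = q / q.sum()                          # relative segment brightness Q1..Q4
    lm = q[3] - q[1]                         # long-minus-mid ("red minus green") score
    ms = q[2] - q[0]                         # mid-minus-short ("yellow minus blue") score
    chroma = np.hypot(lm, ms)                # distance from the achromatic center
    hue = np.degrees(np.arctan2(lm, ms)) % 360.0   # atan2 covers all four quadrants
    return chroma, hue
```

An arcsin- or arctan-based formula, by contrast, cannot distinguish all four quadrants, which is the failure the new formulas correct.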
Texture analysis based on the Hermite transform for image classification and segmentation
NASA Astrophysics Data System (ADS)
Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus
2012-06-01
Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes local basic texture features in terms of Gaussian derivatives. Multiresolution analysis combined with several analysis orders provides detection of the patterns that characterize every texture class. Analysis of the local maximum energy direction and steering of the transformation coefficients increase the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability, reduce the dimensionality of the feature vectors, and lower the computational cost of the classification stage. We exhaustively evaluated the correct classification rate on randomly selected training and testing subsets of real textures using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real-texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
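The Gaussian-derivative basis underlying the Hermite transform can be illustrated with a small separable filter-bank energy extractor. This is a simplified sketch (single scale, no steering or multiresolution, numerical rather than analytic derivatives), with illustrative parameter values:

```python
import numpy as np

def gaussian_derivative_kernels(sigma=2.0, max_order=2):
    """1-D Gaussian derivative kernels of order 0..max_order; 2-D Hermite-style
    analysis filters are separable products D_m(x) * D_n(y) with m + n <= max_order."""
    size = int(6 * sigma) | 1                 # odd support covering ~±3 sigma
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    kernels = [g]
    for _ in range(max_order):
        kernels.append(np.gradient(kernels[-1]))   # next derivative order
    return kernels

def sep_conv(img, kx, ky):
    """Separable 2-D filtering: kx along rows (x), then ky along columns (y)."""
    r = np.apply_along_axis(np.convolve, 1, img, kx, mode="same")
    return np.apply_along_axis(np.convolve, 0, r, ky, mode="same")

def texture_energy(img, sigma=2.0, max_order=2):
    """One mean-squared-response feature per (m, n) analysis-order pair."""
    ks = gaussian_derivative_kernels(sigma, max_order)
    feats = []
    for m in range(max_order + 1):
        for n in range(max_order + 1 - m):    # orders with m + n <= max_order
            feats.append(np.mean(sep_conv(img, ks[m], ks[n]) ** 2))
    return np.array(feats)
```

Oriented textures concentrate their energy in different order pairs, which is why the full method adds steering to make the features orientation-robust.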
Utilizing feedback in adaptive SAR ATR systems
NASA Astrophysics Data System (ADS)
Horsfield, Owen; Blacknell, David
2009-05-01
Existing SAR ATR systems are usually trained off-line with samples of target imagery or CAD models, prior to conducting a mission. If the training data are not representative of mission conditions, poor performance may result. In addition, it is difficult to acquire suitable training data for the many target types of interest. The Adaptive SAR ATR Problem Set (AdaptSAPS) program provides a MATLAB framework and image database for developing systems that adapt to mission conditions, meaning less reliance on accurate training data. A key function of an adaptive system is the ability to utilise truth feedback to improve performance, and it is this feature which AdaptSAPS is intended to exploit. This paper presents a new method for SAR ATR that uses no pre-mission training data; instead, it learns in a supervised manner from truth feedback received during the mission. This is achieved by using feature-based classification, and several new shadow features have been developed for this purpose. These features allow discrimination of vehicles from clutter, and classification of vehicles into two classes: targets, comprising military combat types, and non-targets, comprising bulldozers and trucks. The performance of the system is assessed using three baseline missions provided with AdaptSAPS, as well as three additional missions. All performance metrics indicate a distinct learning trend over the course of a mission, with most third- and fourth-quartile performance levels exceeding 85% correct classification. It has been demonstrated that these performance levels can be maintained even when truth feedback rates are reduced by up to 55% over the course of a mission.
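The feedback-driven adaptation loop described above can be sketched with a minimal online classifier: declare a class from the current model, then fold the truth feedback into running per-class statistics. The running-centroid model and two-feature setup are stand-ins for the paper's shadow-feature classifier, not its actual design:

```python
import numpy as np

class FeedbackClassifier:
    """Running-mean centroid classifier updated from truth feedback."""

    def __init__(self, n_features, n_classes=2):
        self.sums = np.zeros((n_classes, n_features))
        self.counts = np.zeros(n_classes)

    def predict(self, x):
        """Declare the class with the nearest centroid (default class 0 until
        feedback has been received for every class)."""
        if self.counts.min() == 0:
            return 0
        centroids = self.sums / self.counts[:, None]
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

    def feedback(self, x, label):
        """Incorporate one truth-feedback example into the class statistics."""
        self.sums[label] += x
        self.counts[label] += 1
```

Because every feedback example refines the centroids, classification accuracy rises over the course of a simulated mission, mirroring the learning trend the paper reports.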