Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.
2013-01-01
Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on classification accuracy in the context of automated celiac disease classification. For this purpose we compare several distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracies, we also investigate different interpolation methods and their impact on classification performance. In order to make solid statements about the benefit of distortion correction, we use a variety of feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease, mainly because any potential benefit of distortion correction depends strongly on the feature extraction method used for the classification. PMID:23981585
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-01-01
Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675
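The Dice overlap ratio used in the evaluation above has a simple closed form, 2|A∩B| / (|A| + |B|). A minimal sketch (a generic illustration, not the authors' code; the mask contents are invented):

```python
def dice_overlap(mask_a, mask_b):
    """Dice coefficient: 2|A n B| / (|A| + |B|) for two binary masks."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical flattened binary masks (1 = glandular tissue)
auto = [1, 1, 1, 0, 0, 1, 0, 0]
manual = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_overlap(auto, manual))  # 0.75 - 3 shared voxels, 4 + 4 labeled
```

A ratio of 0.916, as reported above for glandular tissue, would mean the automated and manual segmentations share about 92% of their labeled volume.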
Remembering Left–Right Orientation of Pictures
Bartlett, James C.; Gernsbacher, Morton Ann; Till, Robert E.
2015-01-01
In a study of recognition memory for pictures, we observed an asymmetry in classifying test items as “same” versus “different” in left–right orientation: Identical copies of previously viewed items were classified more accurately than left–right reversals of those items. Response bias could not explain this asymmetry, and, moreover, correct “same” and “different” classifications were independently manipulable: Whereas repetition of input pictures (one vs. two presentations) affected primarily correct “same” classifications, retention interval (3 hr vs. 1 week) affected primarily correct “different” classifications. In addition, repetition but not retention interval affected judgments that previously seen pictures (both identical and reversed) were “old”. These and additional findings supported a dual-process hypothesis that links “same” classifications to high familiarity, and “different” classifications to conscious sampling of images of previously viewed pictures. PMID:2949051
Stoeger, Angela S.; Zeppelzauer, Matthias; Baotic, Anton
2015-01-01
Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70 % correct classification into four age groups (infants, calves, juveniles, adults) and 95 % correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable results were obtained by fully automated classification of rumbles using high-dimensional features that represent the entire spectral envelope, such as MFCC (75 % correct classification) and GFCC (74 % correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially estimate, automatically, the demography of an acoustically monitored elephant group or population. PMID:25821348
Comparison of wheat classification accuracy using different classifiers of the Image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area containing the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
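The decision rule of a Gaussian maximum-likelihood classifier like the one compared above can be sketched for a single band: assign each pixel to the class whose fitted normal density is highest. This is an illustrative one-band version (the Image-100 classifier is multivariate), and the training statistics are invented:

```python
import math

def gaussian_log_likelihood(x, mean, var):
    """Log of the normal density - enough to compare classes."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x, class_stats):
    """Assign x to the class whose Gaussian gives the highest likelihood."""
    return max(class_stats, key=lambda c: gaussian_log_likelihood(x, *class_stats[c]))

# Invented training statistics (mean, variance) per cover type, one band
stats = {"wheat": (120.0, 25.0), "pasture": (90.0, 40.0), "soil": (160.0, 30.0)}
print(classify(118.0, stats))  # wheat
```

Deriving per-subclass statistics, as the last conclusion recommends, amounts to fitting separate (mean, variance) entries for each homogeneous spectral subclass of the crop.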
Classification of the Correct Quranic Letters Pronunciation of Male and Female Reciters
NASA Astrophysics Data System (ADS)
Khairuddin, Safiah; Ahmad, Salmiah; Embong, Abdul Halim; Nur Wahidah Nik Hashim, Nik; Altamas, Tareq M. K.; Nuratikah Syd Badaruddin, Syarifah; Shahbudin Hassan, Surul
2017-11-01
Recitation of the Holy Quran with correct Tajweed is essential for every Muslim. Islam has encouraged Quranic education from an early age, as reciting the Quran correctly conveys the correct meaning of the words of Allah. It is important to recite the Quranic verses according to their characteristics (sifaat) and from their points of articulation (makhraj). This paper presents the identification and classification analysis of Quranic letter pronunciation for both male and female reciters, to obtain the unique representation of each letter by male as compared to female expert reciters. Linear Discriminant Analysis (LDA) was used as the classifier, with formants and Power Spectral Density (PSD) as the acoustic features. The results show that a linear classifier of PSD with band 1 and band 2 power spectral combinations gives high classification accuracy for most of the Quranic letters. It is also shown that pronunciation by male reciters gives better results in the classification of the Quranic letters.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans; (2) incorporation of highly deformable registration; (3) use of an extended set of tissue definitions; and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale processing of highly variable data, with a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
NASA Astrophysics Data System (ADS)
Yang, He; Ma, Ben; Du, Qian; Yang, Chenghai
2010-08-01
In this paper, we propose approaches to improve pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. The spatial neighborhood relationship between classes is used to correct commonly confused class pairs, such as roof and trail, or road and roof. These classes may be difficult to separate because they have similar spectral signatures and their spatial features are not distinct enough to aid discrimination. In addition, misclassification caused by trivial within-class spectral variation can be corrected by using pixel connectivity information in a local window, so that spectrally homogeneous regions are well preserved. Our experimental results demonstrate the efficiency of the proposed approaches in improving classification accuracy. The overall performance is competitive with object-based SVM classification.
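One common way to exploit "pixel connectivity information in a local window" as described above is a majority filter over the class-label map. This is a generic sketch of that idea, not the authors' specific method; the label map is invented:

```python
from collections import Counter

def majority_filter(labels):
    """Relabel each pixel with the most common class in its 3x3 neighborhood,
    smoothing within-class 'salt-and-pepper' misclassifications."""
    rows, cols = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(rows):
        for c in range(cols):
            window = [labels[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(window).most_common(1)[0][0]
    return out

# Invented label map: a stray "roof" pixel inside a homogeneous "road" region
label_map = [["road"] * 3, ["road", "roof", "road"], ["road"] * 3]
print(majority_filter(label_map)[1][1])  # road
```

The filter preserves large homogeneous regions while removing isolated misclassified pixels, which matches the stated goal of correcting trivial within-class spectral variation.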
USDA-ARS's Scientific Manuscript database
Panax quinquefolius L (P. quinquefolius L) samples grown in the United States and China were analyzed with high performance liquid chromatography-mass spectrometry (HPLC—MS). Prior to classification, the two-way datasets were subjected to pretreatment including baseline correction and retention tim...
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis, and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning have been developed over the past decades, most of them, e.g. the Multi-Layer Perceptron (MLP), are designed for pixel-level spectral differentiation and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results and to partition the map into correctly and incorrectly classified regions. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree drawing on both the CNN and the MLP. The effectiveness of the proposed rough-set decision-tree-based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing the complementarity between CNN and MLP through the rough-set-based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way towards fully automatic and effective VHR image classification.
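The abstract above does not give the rough set formulation itself, but the partition it describes can be caricatured as splitting CNN outputs into a trusted "positive region" and an uncertain "boundary region" that a second classifier re-decides. The confidence threshold and per-pixel probabilities below are invented for illustration:

```python
def partition_by_confidence(cnn_probs, threshold=0.8):
    """Split pixels into a trusted set (max class probability >= threshold)
    and an uncertain set to be re-decided by a second classifier.
    The 0.8 threshold is illustrative, not from the paper."""
    trusted, uncertain = {}, []
    for pixel, probs in cnn_probs.items():
        best = max(probs, key=probs.get)
        if probs[best] >= threshold:
            trusted[pixel] = best
        else:
            uncertain.append(pixel)
    return trusted, uncertain

# Invented per-pixel CNN class probabilities
probs = {(0, 0): {"building": 0.95, "road": 0.05},
         (0, 1): {"building": 0.55, "road": 0.45}}
trusted, uncertain = partition_by_confidence(probs)
print(trusted, uncertain)  # {(0, 0): 'building'} [(0, 1)]
```

In the paper's scheme, the uncertain pixels would go to the decision tree that combines CNN and MLP evidence rather than to a fixed fallback.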
Novel high/low solubility classification methods for new molecular entities.
Dave, Rutwij A; Morris, Marilyn E
2016-09-10
This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities. Compounds (N=635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/ml units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM>-3.05 or MSol>0.30mg/mL will have ≥85% probability of being highly soluble and new molecular entities with MLogSM≤-3.05 or MSol≤0.30mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated solubility classification of an independent set of 108 orally administered drugs using MSol (0.3mg/mL) and our method correctly classified 81% and 95% of compounds into high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
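The decision rule reported above reduces to a simple threshold test. A sketch using the paper's stated cutoffs (MLogSM > -3.05 or MSol > 0.30 mg/mL implies high solubility); the example compounds are hypothetical:

```python
def solubility_class(mlogsm=None, msol=None):
    """Classify a compound as high or low solubility using the ROC-derived
    thresholds reported in the abstract:
    MLogSM > -3.05 or MSol > 0.30 mg/mL  =>  high solubility."""
    if mlogsm is not None:
        return "high" if mlogsm > -3.05 else "low"
    if msol is not None:
        return "high" if msol > 0.30 else "low"
    raise ValueError("need MLogSM or MSol")

# Hypothetical measurements
print(solubility_class(mlogsm=-2.1))  # high
print(solubility_class(msol=0.05))    # low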
Genome-Wide Comparative Gene Family Classification
Frech, Christian; Chen, Nansheng
2010-01-01
Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
Studer, S; Naef, R; Schärer, P
1997-12-01
Esthetically correct treatment of a localized alveolar ridge defect is a frequent prosthetic challenge. Such defects can be overcome not only by a variety of prosthetic means, but also by several periodontal surgical techniques, notably soft tissue augmentations. Preoperative classification of the localized alveolar ridge defect can be greatly useful in evaluating the prognosis and technical difficulties involved. A semiquantitative classification, dependent on the severity of vertical and horizontal dimensional loss, is proposed to supplement the recognized qualitative classification of a ridge defect. Various methods of soft tissue augmentation are evaluated, based on initial volumetric measurements. The roll flap technique is proposed when the problem is related to ridge quality (single-tooth defect with little horizontal and vertical loss). Larger defects in which a volumetric problem must be solved are corrected through the subepithelial connective tissue technique. Additional mucogingival problems (eg, insufficient gingival width, high frenum, gingival scarring, or tattoo) should not be corrected simultaneously with augmentation procedures. In these cases, the onlay transplant technique is favored.
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of pixel-level tree species classification from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.
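The green-vegetation filter evaluated above is based on NDVI. A minimal sketch of computing NDVI and masking low-vegetation pixels; the band values are invented and the 0.3 cutoff is a common rule of thumb, not the authors' value:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vegetation_mask(nir_band, red_band, cutoff=0.3):
    """True where a pixel looks like green vegetation. The 0.3 cutoff is a
    common rule of thumb, not taken from the paper."""
    return [ndvi(n, r) >= cutoff for n, r in zip(nir_band, red_band)]

nir = [0.50, 0.40, 0.20]   # invented near-infrared reflectances
red = [0.10, 0.35, 0.18]   # invented red reflectances
print(vegetation_mask(nir, red))  # [True, False, False]
```

The shade filter mentioned in the abstract works analogously, thresholding the near-infrared band directly rather than the NDVI ratio.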
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images has become significant for agricultural management and estimation. Because features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN with a MATLAB deep learning toolbox, the crop classification finally reached a correct rate of 99.66% after gradual optimization of the parameters during training. By improving the accuracy of image classification and image recognition, the application of CNN provides a reference for the use of remote sensing in PA.
Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib
2008-10-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium.
Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregularly shaped regions containing contrast medium was developed to widen the applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
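The piecewise calibration step described above, converting CT numbers to linear attenuation coefficients at 511 keV, is commonly implemented as a bilinear curve with a break at the CT number of water. The breakpoint and slopes below are illustrative textbook-style values, not the study's fitted curve:

```python
def hu_to_mu_511kev(hu):
    """Bilinear CT-number -> linear attenuation coefficient (cm^-1) at 511 keV.
    Water is ~0.096 cm^-1; the slope above 0 HU differs because bone-like
    material attenuates differently per HU than soft tissue.
    Coefficients are illustrative, not the study's calibration."""
    mu_water = 0.096
    if hu <= 0:                      # air .. water segment
        return max(0.0, mu_water * (1.0 + hu / 1000.0))
    return mu_water + hu * 6.4e-5    # water .. bone segment (illustrative slope)

print(round(hu_to_mu_511kev(0), 3))      # 0.096  (water)
print(round(hu_to_mu_511kev(-1000), 3))  # 0.0    (air)
```

The SCC substitution step happens before this conversion: contrast-medium pixels are first remapped to effective bone CT numbers so that the curve assigns them a physically plausible attenuation value.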
A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. With this model we can obtain topographically corrected surface reflectance from DN data; we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that, for HJ, the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction. The reflectance difference between slopes facing towards and away from the sun was also reduced after correction.
Hyperspectral analysis of Columbia spotted frog habitat
Shive, J.P.; Pilliod, D.S.; Peterson, C.R.
2010-01-01
Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated accuracy of high-resolution hyperspectral image classifications to identify wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96% (44 wetlands) correctly classified with the 2002 data and 89% (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74% (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72% (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.
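Producer's accuracy, as reported above, is the fraction of ground-reference sites of a class that the map labels correctly (the complement of omission error). A minimal sketch with invented reference data:

```python
def producers_accuracy(truth, predicted, cls):
    """Fraction of reference sites of class `cls` that the map got right
    (complement of omission error)."""
    relevant = [t == p for t, p in zip(truth, predicted) if t == cls]
    return sum(relevant) / len(relevant)

truth = ["wet", "wet", "wet", "dry", "wet"]   # invented ground survey labels
pred = ["wet", "wet", "dry", "dry", "wet"]    # invented map labels
print(producers_accuracy(truth, pred, "wet"))  # 0.75
```

This matches the abstract's usage: "72% (23 of 32)" is 23 reference egg-mass locations correctly identified out of 32 known.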
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum-likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth for the same area, and classified again. Comparison shows that the classification improves with atmospherically corrected images.
An Electronic Nose for Reliable Measurement and Correct Classification of Beverages
Mamat, Mazlina; Samad, Salina Abdul; Hannan, Mahammad A.
2011-01-01
This paper reports the design of an electronic nose (E-nose) prototype for reliable measurement and correct classification of beverages. The prototype was developed and fabricated in the laboratory using commercially available metal oxide gas sensors and a temperature sensor. The repeatability, reproducibility and discriminative ability of the developed E-nose prototype were tested on odors emanating from different beverages such as blackcurrant juice, mango juice and orange juice, respectively. Repeated measurements of three beverages showed very high correlation (r > 0.97) between the same beverages to verify the repeatability. The prototype also produced highly correlated patterns (r > 0.97) in the measurement of beverages using different sensor batches to verify its reproducibility. The E-nose prototype also possessed good discriminative ability whereby it was able to produce different patterns for different beverages, different milk heat treatments (ultra high temperature, pasteurization) and fresh and spoiled milks. The discriminative ability of the E-nose was evaluated using Principal Component Analysis and a Multilayer Perceptron neural network, with both methods showing good classification results. PMID:22163964
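The repeatability criterion above (r > 0.97 between repeated sensor patterns) is a Pearson correlation between two response vectors. A sketch with invented sensor readings:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two sensor-response patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented responses of a 5-sensor array to the same juice, measured twice
run1 = [1.2, 3.4, 2.2, 5.1, 0.8]
run2 = [1.3, 3.3, 2.4, 5.0, 0.9]
print(pearson_r(run1, run2) > 0.97)  # True
```

The same statistic, computed across sensor batches rather than repeated runs, gives the reproducibility check the abstract describes.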
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun
2011-01-01
Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects. The purpose of this study was to compare six different topographic normalization methods (cosine correction, Minnaert correction, C-correction, sun-canopy-sensor correction, two-stage topographic normalization, and the slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as a classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve the overall accuracy of the ETM+ image classification. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. In contrast, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was satisfactorily able to correct for the topographic effects in severely shadowed areas.
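Two of the corrections compared above have simple closed forms: the cosine (Lambertian) correction scales radiance by cos(solar zenith) / cos(local incidence angle), and the Minnaert correction raises that ratio to a band-specific exponent k. This sketch uses one common formulation (variants exist in the literature) with invented illumination angles:

```python
import math

def cosine_correction(radiance, sun_zenith_deg, incidence_deg):
    """Cosine (Lambertian) topographic correction: L * cos(theta_z) / cos(i)."""
    return (radiance * math.cos(math.radians(sun_zenith_deg))
            / math.cos(math.radians(incidence_deg)))

def minnaert_correction(radiance, sun_zenith_deg, incidence_deg, k):
    """Minnaert correction, L * (cos(theta_z) / cos(i)) ** k, where k is fitted
    per band; k = 1 reduces to the cosine correction. One common variant."""
    ratio = (math.cos(math.radians(sun_zenith_deg))
             / math.cos(math.radians(incidence_deg)))
    return radiance * ratio ** k

# Invented shaded-slope pixel: steep local incidence angle, moderate sun zenith
print(round(cosine_correction(40.0, 30.0, 70.0), 1))        # 101.3
print(round(minnaert_correction(40.0, 30.0, 70.0, 0.5), 1))  # 63.7
```

The contrast between the two outputs illustrates why the cosine correction tends to overcorrect weakly illuminated slopes, the problem the Minnaert exponent and the other four compared methods try to mitigate.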
Sexing adult black-legged kittiwakes by DNA, behavior, and morphology
Jodice, P.G.R.; Lanctot, Richard B.; Gill, V.A.; Roby, D.D.; Hatch, Shyla A.
2000-01-01
We sexed adult Black-legged Kittiwakes (Rissa tridactyla) using DNA-based genetic techniques, behavior, and morphology, and compared results across these techniques. Genetic and morphology data were collected on 605 breeding kittiwakes and sex-specific behaviors were recorded for a sub-sample of 285 of these individuals. We compared sex classification based on both genetic and behavioral techniques for this sub-sample to assess the accuracy of the genetic technique. DNA-based techniques correctly sexed 97.2% of this sub-sample, and sex-specific behaviors 96.5%. We used the corrected genetic classifications from this sub-sample and the genetic classifications for the remaining birds, under the assumption they were correct, to develop predictive morphometric discriminant function models for all 605 birds. These models accurately predicted the sex of 73-96% of individuals examined, depending on the sample of birds used and the characters included. The most accurate single measurement for determining sex was length of head plus bill, which correctly classified 88% of individuals tested. When both members of a pair were measured, classification levels improved and approached the accuracy of both behavioral observations and genetic analyses. Morphometric techniques were only slightly less accurate than genetic techniques but were easier to implement in the field and less costly. Behavioral observations, while highly accurate, required that birds be easily observable during the breeding season and that birds be identifiable. As such, sex-specific behaviors may best be applied as a confirmation of sex for previously marked birds. All three techniques thus have the potential to be highly accurate, and the selection of one or more will depend on the circumstances of any particular field study.
Kalegowda, Yogesh; Harmer, Sarah L
2013-01-08
Artificial neural network (ANN) and a hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
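The 95%-variance criterion used above to select PC scores for the PCA-ANN input amounts to a cumulative sum over sorted eigenvalues; a small illustrative helper (not the authors' code):

```python
def n_components_for_variance(eigenvalues, target=0.95):
    """Return how many leading principal components are needed to
    retain `target` fraction of the total variance, given the
    eigenvalues of the data covariance matrix."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cum += ev
        if cum / total >= target:
            return k
    return len(eigenvalues)
```

The retained scores then replace the raw spectral fragments as the ANN's (much lower-dimensional) input vector.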
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLCAD09000.L14300000.ES0000; CACA- 051457] Correction for Notice of Realty Action; Recreation and Public Purposes Act Classification; California AGENCY: Bureau of Land Management, Interior. ACTION: Correction SUMMARY: This notice corrects a Notice of Realty...
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
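The small-sample optimism documented above can be reproduced in miniature: score a nearest-class-mean rule on its own training data (apparent rate) versus on fresh draws from the same two overlapping populations. This is a deliberately simplified 1-D stand-in for discriminant analysis; all names and parameters are illustrative:

```python
import random

def apparent_vs_holdout(n_per_group, overlap_sd, trials=200, seed=1):
    """Monte Carlo illustration of optimism bias: a nearest-class-mean
    rule scored on its own training data (apparent rate) versus on
    fresh data from the same two 1-D normal populations (means 0, 1)."""
    rng = random.Random(seed)
    apparent = holdout = 0.0
    for _ in range(trials):
        a = [rng.gauss(0.0, overlap_sd) for _ in range(n_per_group)]
        b = [rng.gauss(1.0, overlap_sd) for _ in range(n_per_group)]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        rule = lambda x: 'a' if abs(x - ma) < abs(x - mb) else 'b'
        train = [(x, 'a') for x in a] + [(x, 'b') for x in b]
        test = ([(rng.gauss(0.0, overlap_sd), 'a') for _ in range(50)] +
                [(rng.gauss(1.0, overlap_sd), 'b') for _ in range(50)])
        apparent += sum(rule(x) == y for x, y in train) / len(train)
        holdout += sum(rule(x) == y for x, y in test) / len(test)
    return apparent / trials, holdout / trials
```

With small groups and high overlap, the apparent rate exceeds the holdout rate, which is the bias the study's 32,400 simulations quantify at full scale.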
Network-based high level data classification.
Silva, Thiago Christiano; Zhao, Liang
2012-06-01
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for instance through mixing among different classes, a larger portion of the high level term is required to obtain correct classification. This feature confirms that high level classification has special importance in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
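The combination of terms described above can be pictured as a convex mixture of a low level and a high level membership score per class, with the mixing weight playing the role of the "portion" given to the high level term. This is a schematic sketch, not the authors' exact formulation:

```python
def hybrid_score(low_level, high_level, lam):
    """Convex mixture of a low level (physical-feature) membership
    score and a high level (pattern-formation) score; lam in [0, 1]
    is the weight given to the high level term."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("lam must lie in [0, 1]")
    return (1.0 - lam) * low_level + lam * high_level

def classify(low_scores, high_scores, lam):
    """Assign the class whose combined membership score is largest."""
    combined = {c: hybrid_score(low_scores[c], high_scores[c], lam)
                for c in low_scores}
    return max(combined, key=combined.get)
```

As the abstract notes, heavily mixed class configurations call for a larger lam, shifting the decision toward the pattern-formation term.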
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suited to tackling the problem of sperm head classification.
We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
Abbatangelo, Marco; Núñez-Carmona, Estefanía; Sberveglieri, Veronica; Zappa, Dario; Comini, Elisabetta; Sberveglieri, Giorgio
2018-05-18
Parmigiano Reggiano cheese is one of the most appreciated and consumed foods worldwide, especially in Italy, for its high content of nutrients and taste. However, these characteristics make this product subject to counterfeiting in different forms. In this study, a novel method based on an electronic nose has been developed to investigate the potential of this tool to distinguish rind percentages in grated Parmigiano Reggiano packages, which should be lower than 18%. Different samples, in terms of percentage, seasoning and rind working process, were considered to tackle the problem from all angles. In parallel, the GC-MS technique was used to identify the compounds that characterize Parmigiano and to relate them to sensor responses. Data analysis consisted of two stages: multivariate analysis (PLS) and classification performed in a hierarchical way with PLS-DA and ANNs. Results were promising in terms of correct classification of the samples. The correct classification rate (%) was higher for ANNs than PLS-DA, with correct identification approaching 100 percent.
Schuld, Christian; Franz, Steffen; Brüggemann, Karin; Heutehaus, Laura; Weidner, Norbert; Kirshblum, Steven C; Rupp, Rüdiger
2016-09-01
Prospective cohort study. Comparison of the classification performance between the worksheet revisions of 2011 and 2013 of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). Ongoing ISNCSCI instructional courses of the European Multicenter Study on Human Spinal Cord Injury (EMSCI). For quality control, all participants were requested to classify five ISNCSCI cases directly before (pre-test) and after (post-test) the workshop. One hundred twenty-five clinicians working in 22 SCI centers attended the instructional course between November 2011 and March 2015. Seventy-two clinicians completed the post-test with the 2011 revision of the worksheet and 53 with the 2013 revision. Not applicable. The clinicians' classification performance assessed by the percentage of correctly determined motor levels (ML) and sensory levels, neurological levels of injury (NLI), ASIA Impairment Scales and zones of partial preservation. While no group differences were found in the pre-tests, the overall performance (rev2011: 92.2% ± 6.7%, rev2013: 94.3% ± 7.7%; P = 0.010), the percentage of correct MLs (83.2% ± 14.5% vs. 88.1% ± 15.3%; P = 0.046) and NLIs (86.1% ± 16.7% vs. 90.9% ± 18.6%; P = 0.043) improved significantly in the post-tests. Detailed ML analysis revealed the largest benefit of the 2013 revision (50.0% vs. 67.0%) in a case with a high cervical injury (NLI C2). The results from the EMSCI ISNCSCI post-tests show a significantly better classification performance using the revised 2013 worksheet, presumably due to the body-side based grouping of myotomes and dermatomes and their correct horizontal alignment. Even with these proven advantages of the new layout, the correct determination of MLs in the segments C2-C4 remains difficult.
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways are explored in which this information can be applied to cloud classifications as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas to describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
Taxman, Faye S; Kitsantas, Panagiota
2009-08-01
OBJECTIVE TO BE ADDRESSED: The purpose of this study was to investigate the structural and organizational factors that contribute to the availability and increased capacity for substance abuse treatment programs in correctional settings. We used classification and regression tree statistical procedures to identify how multi-level data can explain the variability in availability and capacity of substance abuse treatment programs in jails and probation/parole offices. The data for this study combined the National Criminal Justice Treatment Practices (NCJTP) Survey and the 2000 Census. The NCJTP survey was a nationally representative sample of correctional administrators for jails and probation/parole agencies. The sample size included 295 substance abuse treatment programs that were classified according to the intensity of their services: high, medium, and low. The independent variables included jurisdictional-level structural variables, attributes of the correctional administrators, and program and service delivery characteristics of the correctional agency. The two most important variables in predicting the availability of all three types of services were stronger working relationships with other organizations and the adoption of a standardized substance abuse screening tool by correctional agencies. For high and medium intensive programs, the capacity increased when an organizational learning strategy was used by administrators and the organization used a substance abuse screening tool. Implications on advancing treatment practices in correctional settings are discussed, including further work to test theories on how to better understand access to intensive treatment services. This study presents the first phase of understanding capacity-related issues regarding treatment programs offered in correctional settings.
ERIC Educational Resources Information Center
Furey, William M.; Marcotte, Amanda M.; Hintze, John M.; Shackett, Caroline M.
2016-01-01
The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect…
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that takes into account not the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
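The Quantile Mapping step can be sketched with empirical CDFs: look up the quantile of a satellite value in the SPP sample for the area, then read the gauge distribution at the same quantile. A minimal empirical version (the study's exact distribution-fitting choices are not reproduced here):

```python
from bisect import bisect_left

def quantile_map(value, sat_sample, gauge_sample):
    """Empirical quantile mapping: map `value` through the satellite
    sample's empirical CDF, then invert the gauge sample's CDF at the
    same non-exceedance probability."""
    sat = sorted(sat_sample)
    gauge = sorted(gauge_sample)
    # empirical non-exceedance probability of `value` in the satellite CDF
    q = bisect_left(sat, value) / len(sat)
    # read the gauge empirical CDF at the same probability
    idx = min(int(q * len(gauge)), len(gauge) - 1)
    return gauge[idx]
```

In the study's design, `sat_sample` and `gauge_sample` would be pooled within one hydroclimatic area, so pixels without a co-located gauge are still corrected.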
Austin, Peter C; Lee, Douglas S
2011-01-01
Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements to sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity, but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than gains in performance observed in the data mining literature. PMID:22254181
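The reweighting described above is the core of AdaBoost: misclassified subjects gain weight, correctly classified ones lose it, and each tree's vote is set by its weighted error. A one-iteration sketch (not the authors' code; the tree itself is abstracted into a boolean "correct" vector):

```python
import math

def adaboost_reweight(weights, correct, eps=1e-12):
    """One AdaBoost iteration: compute the current tree's weighted
    error, derive its vote weight alpha, up-weight misclassified
    samples, down-weight correct ones, and renormalise."""
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    err = min(max(err, eps), 1 - eps)            # keep alpha finite
    alpha = 0.5 * math.log((1 - err) / err)      # this tree's vote weight
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new], alpha
```

After reweighting, the just-fitted tree has exactly 50% weighted error on the new weights, forcing the next tree to focus on the hard cases; the final classification is the alpha-weighted majority vote over the sequence.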
Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven
2006-03-01
Microbial source tracking (MST) methods need to be rapid, inexpensive and accurate. Unfortunately, many MST methods provide a wealth of information that is difficult to interpret by the regulators who use this information to make decisions. This paper describes the use of classification tree analysis to interpret the results of a MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present results in a format readily interpretable by water quality managers. Raw sewage E. coli isolates and animal E. coli isolates from cow, dog, gull, and horse were isolated and their FAME profiles collected. Leave-one-out cross-validation yielded a low overall correct classification rate of 61%. A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared to the raw sewage isolates. Bootstrap aggregation, or adaptive resampling and combining, of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiate between different fecal sources, but classification tree analysis has enabled us to distinguish raw sewage from animal E. coli isolates, which previously had not been possible with other multivariate methods such as principal component analysis and cluster analysis.
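Bootstrap aggregation, which raised the correct classification rates here, resamples the training set with replacement and combines the resulting classifiers by majority vote. A generic sketch in which the `fit` argument stands in for growing one classification tree (all names are illustrative):

```python
import random
from collections import Counter

def bagged_predict(train, x, fit, n_models=25, seed=0):
    """Bootstrap aggregation: fit `fit` on resampled copies of the
    training set and combine the fitted predictors by majority vote.
    `fit` takes a dataset and returns a predictor function."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        boot = [train[rng.randrange(len(train))] for _ in train]
        votes[fit(boot)(x)] += 1
    return votes.most_common(1)[0][0]
```

Averaging over bootstrap replicates stabilises high-variance base learners such as classification trees, which is why it helped the FAME-profile trees above.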
Ensemble of classifiers for confidence-rated classification of NDE signal
NASA Astrophysics Data System (ADS)
Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish
2016-02-01
Ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensemble of classifiers generate self-rated confidence scores which estimate the reliability of each prediction, and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensemble of classifiers has been widely used in computational intelligence, the effect of all factors of unreliability on the confidence of classification is largely overlooked in existing works. With relevance to NDE, classification results are affected by inherent ambiguity of classification, non-discriminative features, inadequate training samples and noise due to measurement. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in classification performance of defect and non-defect indications.
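Confidence-rated combination can be sketched as a vote in which each classifier contributes its self-rated confidence score rather than a unit vote (a schematic sketch only; the paper's boosting formulation is richer):

```python
def weighted_majority_vote(predictions, confidences):
    """Combine classifier outputs: each classifier adds its confidence
    score to the tally of the class it predicts; the class with the
    largest tally wins."""
    tally = {}
    for label, conf in zip(predictions, confidences):
        tally[label] = tally.get(label, 0.0) + conf
    return max(tally, key=tally.get)
```

A single highly confident classifier can thus outvote several uncertain ones, which is the behaviour the confidence-rated extension is designed to exploit.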
Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J
2015-06-30
Kinship analyses are important pillars of ecological and conservation genetic studies with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism SNP, or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA so that more than 80% of first cousins can be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80 and 95% correct classifications differed between relatedness categories, mating systems, and the number of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and single-nucleotide polymorphism data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics, can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies. Copyright © 2015 Kopps et al.
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to overcome the decreased discrimination of features for classification. However, previous works on the adoption of contextual information are either too restrictive or operate only within a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile, incrementally propagates the classification cues from individual points to the object level, and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacent relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of classification of photogrammetric point clouds and DSM generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
Tactile surface classification for limbed robots using a pressure sensitive robot skin.
Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan
2015-02-02
This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. The initial experimental results for statically obtained images show that the approach yields classification accuracies [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies [Formula: see text]. In addition, the accuracies are independent with respect to changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to compensate for failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by over 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
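The abstract does not specify the correcting filter's form, so the following is an assumption-labeled sketch of one plausible repair: replace a failed taxel's reading with the median of its working 8-neighbours before features are extracted (the function name, the `dead` set interface, and the median choice are all illustrative, not from the paper):

```python
import statistics

def correct_dead_taxels(image, dead):
    """Illustrative correcting-filter sketch: for each known-failed
    taxel (row, col) in `dead`, substitute the median reading of its
    working 8-neighbours; other taxels pass through unchanged."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r, c in dead:
        neigh = [image[rr][cc]
                 for rr in range(max(0, r - 1), min(rows, r + 2))
                 for cc in range(max(0, c - 1), min(cols, c + 2))
                 if (rr, cc) != (r, c) and (rr, cc) not in dead]
        if neigh:
            out[r][c] = statistics.median(neigh)
    return out
```

Any such in-painting keeps a damaged array producing usable pressure images, which is the role the paper's filter plays in extending the array's operational lifespan.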
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for the observer, is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. This algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementation versions are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion of automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
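The retouching stage is described only as desaturation, darkening and blending; a per-pixel sketch under that description follows. The 0.8 darkening factor and default blend strength are illustrative assumptions, not the paper's tuned values:

```python
def correct_red_pixel(r, g, b, strength=0.8):
    """Retouching sketch for one pixel inside a detected red eye
    region: desaturate the red channel toward the green/blue average,
    darken it, and blend the result with the original red value."""
    gray = (g + b) / 2.0            # desaturated replacement for red
    darkened = 0.8 * gray           # darkening factor (illustrative)
    corrected_r = (1 - strength) * r + strength * darkened
    return int(corrected_r + 0.5), g, b
```

Blending with the initial image, rather than replacing it outright, keeps catchlights and iris texture looking natural after correction.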
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-01-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms having open ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910
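The MMM rule stores only the per-feature minimum, maximum and mean of each training class; a sample falling in no class's [min, max] box is rejected as an extraneous odor instead of being forced into a class. A minimal sketch of that described behaviour (interfaces are illustrative):

```python
def mmm_train(samples_by_class):
    """Store per-feature (min, max, mean) for each training class."""
    model = {}
    for label, rows in samples_by_class.items():
        cols = list(zip(*rows))
        model[label] = [(min(c), max(c), sum(c) / len(c)) for c in cols]
    return model

def mmm_classify(model, x):
    """Assign x to a class whose per-feature [min, max] box contains
    it, breaking ties by squared distance to the class means; return
    None (reject as an extraneous odor) if no box contains it."""
    candidates = []
    for label, stats in model.items():
        if all(lo <= v <= hi for v, (lo, hi, _) in zip(x, stats)):
            dist = sum((v - mean) ** 2
                       for v, (_, _, mean) in zip(x, stats))
            candidates.append((dist, label))
    return min(candidates)[1] if candidates else None
```

The closed boundary is what distinguishes this from k-NN, SVM or MLPNN above: an odor far from every training class triggers a rejection rather than a false alarm.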
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large [Formula: see text] small [Formula: see text]) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry
2017-08-01
This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction, and ANMBP for classification. Feature extraction yields feature vectors; feature reduction was tested with 100 energy values per feature and with 10 energy values per feature. Brain images are classified as normal, Alzheimer, glioma, or carcinoma. Simulation results show that 10 energy values per feature suffice to classify brain cancer correctly, with a correct classification rate of 95% for the proposed system. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP for classification of brain cancer.
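The wavelet-energy feature extraction described above can be sketched as follows (a minimal illustration using a one-level Haar transform per decomposition stage; the paper's wavelet family and exact feature layout are not given here, so treat those details as assumptions). Each stage splits the image into an approximation band and three detail bands, and the energy of each band becomes one feature:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform.

    Assumes even dimensions. Returns the approximation (LL) and
    detail (LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def energy_features(img, levels=3):
    """Subband energies collected into a compact feature vector,
    which a classifier (ANMBP in the paper) would consume."""
    feats = []
    ll = np.asarray(img, dtype=float)
    for _ in range(levels):
        ll, lh, hl, hh = haar2d(ll)
        feats += [float(np.sum(b ** 2)) for b in (lh, hl, hh)]
    feats.append(float(np.sum(ll ** 2)))
    return feats
```

A flat image has zero detail energy at every level, while textured tissue regions produce distinctive detail-band energies, which is what makes these values usable as classification features.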
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; Worker Classification Survey; Correction ACTION: Notice; correction... titled, ``Worker Classification Survey,'' to the Office of Management and Budget for review and approval...
[Physiologic and hygienic characteristics of college teachers work].
Ryzhov, A Ia; Komin, S V; Kopkareva, O O
2005-01-01
The first series of studies analyzed lectures, registering the number of words and the movements accompanying them. The second series characterized the occupational activity of college teachers, according to the contemporary hygienic classification, as highly intensive work requiring physiologic and managerial correction.
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
12 CFR 1777.20 - Capital classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the effective...
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three TCM experts were chosen to identify selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format after ICC profile correction. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than the SVM in classification. The SMOTE algorithm can increase classification accuracy by resolving the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in TCM is feasible. PMID:28050555
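The SMOTE step mentioned in the results can be sketched as follows (a minimal numpy illustration of the interpolation idea only; in practice one would use an established implementation such as imbalanced-learn's SMOTE, and the neighbor count here is an arbitrary assumption): synthetic minority-class samples are generated by interpolating between a minority sample and one of its nearest minority-class neighbors.

```python
import numpy as np

def smote(X, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between each chosen sample and one of its k nearest neighbors
    within the same (minority) class."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)   # distances to all samples
        neighbors = np.argsort(d)[1:k + 1]     # skip the sample itself
        j = rng.choice(neighbors)
        gap = rng.random()                     # interpolation factor in [0, 1)
        out.append(X[i] + gap * (X[j] - X[i]))
    return np.array(out)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its original region of feature space rather than being duplicated verbatim, which is what lets a classifier such as a random forest learn a less biased boundary.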
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs on the number of positive slides) that account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, so that regions are correctly classified as moderate rather than lower prevalence and thus receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, the pooled method is more efficient and more accurate for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
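The pooled decision rule can be sketched as follows (illustrative cut-offs and pool size only; the paper derives its cut-offs from the De Vlas worm and egg model rather than the arbitrary values used here). A pool tests positive if any slide in it is positive, and the prevalence class is read directly off the count of positive pools:

```python
def pool_results(slide_results, pool_size):
    """A pool is positive (True) if any slide in it is positive."""
    return [any(slide_results[i:i + pool_size])
            for i in range(0, len(slide_results), pool_size)]

def classify_prevalence(n_positive_pools, cutoffs=(2, 8)):
    """Classify community prevalence from the number of positive pools.

    cutoffs = (low, high) are hypothetical thresholds: fewer than `low`
    positives -> 'lower', at least `high` -> 'high', otherwise
    'moderate' (the class for which treatment is indicated)."""
    low, high = cutoffs
    if n_positive_pools < low:
        return "lower"
    if n_positive_pools >= high:
        return "high"
    return "moderate"
```

Going straight from positive-pool counts to a class is what lets the method skip the intermediate prevalence estimate, and reading pooled slides is where the reduction from 50 to 25 slide reads comes from.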
Character recognition using a neural network model with fuzzy representation
NASA Technical Reports Server (NTRS)
Tavakoli, Nassrin; Seniw, David
1992-01-01
The degree to which digital images are recognized correctly by computerized algorithms is highly dependent upon the representation and the classification processes. Fuzzy techniques play an important role in both processes. In this paper, the role of fuzzy representation and classification on the recognition of digital characters is investigated. An experimental Neural Network model with application to character recognition was developed. Through a set of experiments, the effect of fuzzy representation on the recognition accuracy of this model is presented.
Queering the Catalog: Queer Theory and the Politics of Correction
ERIC Educational Resources Information Center
Drabinski, Emily
2013-01-01
Critiques of hegemonic library classification structures and controlled vocabularies have a rich history in information studies. This scholarship has pointed out the trouble with classification and cataloging decisions that are framed as objective and neutral but are always ideological, and has worked to correct bias in library structures. Viewing…
Twarog, Nathaniel R.; Low, Jonathan A.; Currier, Duane G.; Miller, Greg; Chen, Taosheng; Shelat, Anang A.
2016-01-01
Phenotypic screening through high-content automated microscopy is a powerful tool for evaluating the mechanism of action of candidate therapeutics. Despite more than a decade of development, however, high content assays have yielded mixed results, identifying robust phenotypes in only a small subset of compound classes. This has led to a combinatorial explosion of assay techniques, analyzing cellular phenotypes across dozens of assays with hundreds of measurements. Here, using a minimalist three-stain assay and only 23 basic cellular measurements, we developed an analytical approach that leverages informative dimensions extracted by linear discriminant analysis to evaluate similarity between the phenotypic trajectories of different compounds in response to a range of doses. This method enabled us to visualize biologically-interpretable phenotypic tracks populated by compounds of similar mechanism of action, cluster compounds according to phenotypic similarity, and classify novel compounds by comparing them to phenotypically active exemplars. Hierarchical clustering applied to 154 compounds from over a dozen different mechanistic classes demonstrated tight agreement with published compound mechanism classification. Using 11 phenotypically active mechanism classes, classification was performed on all 154 compounds: 78% were correctly identified as belonging to one of the 11 exemplar classes or to a different unspecified class, with accuracy increasing to 89% when less phenotypically active compounds were excluded. Importantly, several apparent clustering and classification failures, including rigosertib and 5-fluoro-2’-deoxycytidine, instead revealed more complex mechanisms or off-target effects verified by more recent publications. 
These results show that a simple, easily replicated, minimalist high-content assay can reveal subtle variations in the cellular phenotype induced by compounds and can correctly predict mechanism of action, as long as the appropriate analytical tools are used. PMID:26886014
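The exemplar-based classification step can be sketched as follows (a minimal illustration with hypothetical mechanism-class names and an arbitrary distance threshold, standing in for the paper's comparison of phenotypic trajectories in LDA space): a compound is assigned to the nearest phenotypically active exemplar class, or to an unspecified class when it is not close to any exemplar.

```python
import numpy as np

def classify_by_exemplar(x, exemplars, threshold=2.0):
    """Assign a sample to the nearest exemplar class in a reduced
    (e.g., LDA) feature space, or to 'unspecified' if no exemplar
    lies within `threshold` (both names/values are illustrative)."""
    best, best_d = "unspecified", threshold
    for label, center in exemplars.items():
        d = float(np.linalg.norm(np.asarray(x) - np.asarray(center)))
        if d < best_d:
            best, best_d = label, d
    return best
```

The explicit "unspecified" outcome mirrors the paper's setup, where 78% of compounds were correctly placed either in one of the 11 exemplar classes or in a different, unspecified class.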
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is a promising, up-to-date technology widely applied for accurate thematic mapping. The large number of narrow survey channels allows us to exploit subtle differences in the spectral characteristics of objects and to produce a more detailed classification than is possible with standard multispectral data. The difficulties encountered in processing hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised base classification algorithms of various complexity, whose accuracy can differ significantly depending on the classification task. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of an error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm, so the benefit of boosting an ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to be compared with ground-based forest inventory data.
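The error-correcting output code idea can be sketched as follows (a minimal illustration with a made-up codebook and species names, not the study's configuration): each class gets a binary codeword, one binary classifier is trained per bit, and a test sample is decoded to the class whose codeword is nearest in Hamming distance, so a single binary-classifier error can be absorbed.

```python
import numpy as np

def ecoc_decode(codeword, codebook):
    """Decode a vector of binary-classifier outputs to the class whose
    code has minimum Hamming distance; ties resolve to the first class."""
    dists = {label: int(np.sum(np.asarray(code) != np.asarray(codeword)))
             for label, code in codebook.items()}
    return min(dists, key=dists.get)

# Hypothetical 5-bit codebook for three forest species.
codebook = {
    "pine":   [0, 0, 1, 1, 0],
    "birch":  [1, 0, 0, 1, 1],
    "spruce": [0, 1, 0, 0, 1],
}
# One binary learner errs (last bit flipped from pine's code),
# yet decoding still recovers "pine".
ecoc_decode([0, 0, 1, 1, 1], codebook)
```

In the study each bit would come from a trained base learner (e.g., a Gaussian-kernel SVM); the redundancy among codewords is what provides the error correction.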
NASA Astrophysics Data System (ADS)
Aboutalebi, M.; Torres-Rua, A. F.; McKee, M.; Kustas, W. P.; Nieto, H.
2017-12-01
Shadows are an unavoidable component of high-resolution imagery. Although shadows can be a useful source of information about terrestrial features, they are a hindrance for image processing, leading to misclassification errors and increased uncertainty in defining surface reflectance properties. In precision agriculture activities, shadows may affect the performance of vegetation indices at pixel and plant scales. It therefore becomes necessary to evaluate existing shadow detection and restoration methods, especially for applications that make direct use of pixel information to estimate vegetation biomass, leaf area index (LAI), plant water use and stress, and chlorophyll content, to name a few. In this study, four high-resolution image sets captured by the Utah State University AggieAir Unmanned Aerial Vehicle (UAV) system, flown in 2014, 2015, and 2016 over a commercial vineyard located in California for the USDA-Agricultural Research Service Grape Remote sensing Atmospheric Profile and Evapotranspiration Experiment (GRAPEX) Program, are used for shadow detection and restoration. Four shadow detection methods are compared: (1) unsupervised classification, (2) supervised classification, (3) an index-based method, and (4) a physically-based method. Two shadow restoration methods are also evaluated: (1) linear correlation correction, and (2) gamma correction. The models' performance is evaluated on two vegetation indices, the normalized difference vegetation index (NDVI) and LAI, for both sunlit and shadowed pixels. Histograms and analysis of variance (ANOVA) are used as performance indicators. Results indicate that the supervised classification and the index-based method perform better than the other methods. In addition, there is a statistically significant difference between the average NDVI and LAI of sunlit and shadowed pixels.
Among the shadow restoration methods, gamma correction visually works better than linear correlation correction. Moreover, the statistical difference between sunlit and shadowed NDVI and LAI decreases after application of the gamma restoration method. Potential effects of shadows on modeling surface energy balance and evapotranspiration using very high resolution UAV imagery over the GRAPEX vineyard will be discussed.
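The gamma restoration step can be sketched as follows (a minimal illustration, assuming an image already scaled to [0, 1] and a precomputed boolean shadow mask; the study's actual gamma value and scaling are not given here): shadowed pixels are brightened with a power-law curve (gamma < 1 brightens), while sunlit pixels are left untouched.

```python
import numpy as np

def restore_shadows(image, shadow_mask, gamma=0.5):
    """Brighten shadowed pixels with a gamma curve.

    image: float array scaled to [0, 1]; shadow_mask: boolean array
    of the same shape (e.g., from a supervised shadow classifier)."""
    out = image.astype(float).copy()
    out[shadow_mask] = np.clip(out[shadow_mask], 0.0, 1.0) ** gamma
    return out
```

Applying such a correction band-by-band before computing NDVI or LAI is what narrows the gap between sunlit and shadowed index values, which matches the reported decrease in the sunlit/shadowed statistical difference after gamma restoration.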
Characteristics of Forests in Western Sayani Mountains, Siberia from SAR Data
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Sun, Guoqing; Kharuk, V. I.; Kovacs, Katalin
1998-01-01
This paper investigated the possibility of using spaceborne radar data to map forest types and logging in the mountainous Western Sayani area in Siberia. L and C band HH, HV, and VV polarized images from the Shuttle Imaging Radar-C instrument were used in the study. Techniques to reduce topographic effects in the radar images were investigated. These included radiometric correction using illumination angle inferred from a digital elevation model, and reducing apparent effects of topography through band ratios. Forest classification was performed after terrain correction utilizing typical supervised techniques and principal component analyses. An ancillary data set of local elevations was also used to improve the forest classification. Map accuracy for each technique was estimated for training sites based on Russian forestry maps, satellite imagery and field measurements. The results indicate that it is necessary to correct for topography when attempting to classify forests in mountainous terrain. Radiometric correction based on a DEM (Digital Elevation Model) improved classification results but required reducing the SAR (Synthetic Aperture Radar) resolution to match the DEM. Using ratios of SAR channels that include cross-polarization improved classification and
An analysis of USSPACECOM's space surveillance network sensor tasking methodology
NASA Astrophysics Data System (ADS)
Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.
1992-12-01
This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes, sequential estimation, algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results show that this model does demonstrate the differences in steady state position error based on varying observation rate and correction interval.
Acoustic target detection and classification using neural networks
NASA Technical Reports Server (NTRS)
Robertson, James A.; Conlon, Mark
1993-01-01
A neural network approach to the classification of acoustic emissions of ground vehicles and helicopters is demonstrated. Data collected during the Joint Acoustic Propagation Experiment, conducted in July 1991 at White Sands Missile Range, New Mexico, were used to train a classifier to distinguish among the spectra of a UH-1, M60, M1, and M114. An output node was also included to recognize background (i.e., no-target) data. Analysis revealed specific hidden nodes responding to the features input to the classifier. Initial results using the neural network were encouraging, with high correct identification rates accompanied by high levels of confidence.
Phylogenetic Analysis and Classification of the Fungal bHLH Domain
Sailsbery, Joshua K.; Atchley, William R.; Dean, Ralph A.
2012-01-01
The basic Helix-Loop-Helix (bHLH) domain is an essential highly conserved DNA-binding domain found in many transcription factors in all eukaryotic organisms. The bHLH domain has been well studied in the Animal and Plant Kingdoms but has yet to be characterized within Fungi. Herein, we obtained and evaluated the phylogenetic relationship of 490 fungal-specific bHLH containing proteins from 55 whole genome projects composed of 49 Ascomycota and 6 Basidiomycota organisms. We identified 12 major groupings within Fungi (F1–F12); identifying conserved motifs and functions specific to each group. Several classification models were built to distinguish the 12 groups and elucidate the most discerning sites in the domain. Performance testing on these models, for correct group classification, resulted in a maximum sensitivity and specificity of 98.5% and 99.8%, respectively. We identified 12 highly discerning sites and incorporated those into a set of rules (simplified model) to classify sequences into the correct group. Conservation of amino acid sites and phylogenetic analyses established that like plant bHLH proteins, fungal bHLH–containing proteins are most closely related to animal Group B. The models used in these analyses were incorporated into a software package, the source code for which is available at www.fungalgenomics.ncsu.edu. PMID:22114358
Nketiah, Gabriel; Selnaes, Kirsten M; Sandsmark, Elise; Teruel, Jose R; Krüger-Stokke, Brage; Bertilsson, Helena; Bathen, Tone F; Elschot, Mattijs
2018-05-01
To evaluate the effect of correction for B0 inhomogeneity-induced geometric distortion in echo-planar diffusion-weighted imaging on quantitative apparent diffusion coefficient (ADC) analysis in multiparametric prostate MRI. Geometric distortion correction was performed in echo-planar diffusion-weighted images (b = 0, 50, 400, 800 s/mm²) of 28 patients, using two b0 scans with opposing phase-encoding polarities. Histology-matched tumor and healthy tissue volumes of interest delineated on T2-weighted images were mapped to the nondistortion-corrected and distortion-corrected data sets by resampling with and without spatial coregistration. The ADC values were calculated at the volume and voxel levels. The effect of distortion correction on ADC quantification and tissue classification was evaluated using linear mixed models and logistic regression, respectively. Without coregistration, the absolute differences in tumor ADC (range: 0.0002-0.189 ×10⁻³ mm²/s (volume level); 0.014-0.493 ×10⁻³ mm²/s (voxel level)) between the nondistortion-corrected and distortion-corrected data sets were significantly associated (P < 0.05) with distortion distance (mean: 1.4 ± 1.3 mm; range: 0.3-5.3 mm). No significant associations were found upon coregistration; however, in patients with high rectal gas residue, distortion correction resulted in improved spatial representation and significantly better classification of healthy versus tumor voxels (P < 0.05). Geometric distortion correction in DWI could improve quantitative ADC analysis in multiparametric prostate MRI. Magn Reson Med 79:2524-2532, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
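The ADC computation underlying this analysis can be sketched as follows (a standard mono-exponential fit shown as an illustration, not the authors' exact pipeline): the signal model S(b) = S0·exp(-b·ADC) becomes linear in log space, so ADC is the negated slope of a least-squares line through (b, log S).

```python
import numpy as np

def fit_adc(b_values, signals):
    """Estimate the apparent diffusion coefficient from the
    mono-exponential model S(b) = S0 * exp(-b * ADC) via a
    log-linear least-squares fit over all b-values."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(b, log_s, 1)
    return -slope  # ADC, in reciprocal units of b (e.g., mm^2/s)
```

Because the fit is done per voxel, any geometric distortion that shifts voxels relative to the T2-weighted delineation changes which signals enter the fit, which is why distortion correction (or coregistration) affects the resulting ADC statistics.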
5 CFR 511.703 - Retroactive effective date.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CLASSIFICATION UNDER THE GENERAL SCHEDULE Effective Dates of Position Classification Actions or Decisions § 511... if the employee is wrongfully demoted. (b) Downgrading. (1) The effective date of a classification appellate certificate or agency appellate decision can be retroactive only if it corrects a classification...
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J
2017-09-01
The Letournel classification of acetabular fractures shows poor reproducibility among inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, for which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. The hypothesis was that semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and six used reconstructions obtained by semi-automatic segmentation with the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups on the chi-square test. Assessment was repeated 2 weeks later to determine intra-observer reproducibility. Correct classification rates were significantly higher in the segmentation group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple fractures (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3min [range, 21-35min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88) and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct classification of acetabular fractures on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training.
Level of evidence: III; prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
...] RIN 1615-AB76 Commonwealth of the Northern Mariana Islands Transitional Worker Classification... Transitional Worker Classification. In that rule, we had sought to modify the title of a paragraph, but... the final rule Commonwealth of the Northern Mariana Islands Transitional Worker Classification...
Classification of Birds and Bats Using Flight Tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.
Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.
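The jackknife (leave-one-out) cross-validation used above to estimate track-classification accuracy can be sketched as follows. This is a minimal illustration with a nearest-centroid stand-in and synthetic two-feature "tracks"; it does not reproduce the paper's discriminant model or data.

```python
import numpy as np

def jackknife_accuracy(X, y):
    """Leave-one-out (jackknife) accuracy of a nearest-centroid classifier."""
    n = len(y)
    classes = np.unique(y)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i                     # hold out track i
        centroids = {c: X[keep & (y == c)].mean(axis=0) for c in classes}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / n

# Two well-separated synthetic "flight path" feature clusters (e.g. speed, tortuosity)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
acc = jackknife_accuracy(X, y)
```

Each track is held out once, the classifier is refit on the remaining tracks, and the held-out prediction is scored; the fraction correct is the jackknife accuracy.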
A Model Assessment and Classification System for Men and Women in Correctional Institutions.
ERIC Educational Resources Information Center
Hellervik, Lowell W.; And Others
The report describes a manpower assessment and classification system for criminal offenders directed towards making practical training and job classification decisions. The model is not concerned with custody classifications except as they affect occupational/training possibilities. The model combines traditional procedures of vocational…
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 702.101 - Measures and effective date of net worth classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... classification. 702.101 Section 702.101 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.101 Measures and effective date of net worth classification. (a) Net worth measures. For purposes of this part, a credit union...
12 CFR 1229.3 - Criteria for a Bank's capital classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Criteria for a Bank's capital classification... CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.3 Criteria for a Bank's capital classification. (a) Adequately capitalized. Except where the Director has exercised authority to reclassify a...
Support Vector Machines for Hyperspectral Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony; Cromp, R. F.
1998-01-01
The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain 96% and 87% correct classification for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.
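A key point above is that an SVM can work directly in a high-dimensional spectral space without feature selection. Below is a minimal sketch of a linear SVM trained by Pegasos-style subgradient descent on hypothetical 200-channel "spectra" (binary case only; the paper's actual classifier, kernel, and data are not reproduced here).

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear SVM; labels in {-1, +1}.

    A bias term is handled by appending a constant-1 feature to X beforehand.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= (1.0 - eta * lam)                # weight decay from the regularizer
            if margin < 1:                        # hinge-loss subgradient step
                w += eta * y[i] * X[i]
    return w

# Hypothetical 200-channel "spectra": two classes offset slightly in every channel,
# so no single channel separates them -- the full dimensionality is used.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 200)), rng.normal(0.5, 1.0, (30, 200))])
y = np.array([-1] * 30 + [+1] * 30)
Xa = np.hstack([X, np.ones((len(X), 1))])         # append bias feature
w = train_linear_svm(Xa, y)
train_acc = float(np.mean(np.sign(Xa @ w) == y))
```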
Comparison of seven protocols to identify fecal contamination sources using Escherichia coli
Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.
2004-01-01
Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-05-08
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. Ripple RNU seriously affects imaging quality, especially for small-target detection, and is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
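For context, the classic temporal high-pass correction that the paper modifies can be sketched as follows; the fuzzy scene-classification threshold that is the paper's contribution is omitted, and the sequence is synthetic.

```python
import numpy as np

def temporal_highpass_nuc(frames, M=16):
    """Classic temporal high-pass nonuniformity correction.

    A per-pixel recursive mean tracks the slowly varying (fixed-pattern)
    component, which is subtracted from each frame; the spatial mean of the
    low-pass image is added back so overall scene intensity is preserved.
    """
    lowpass = frames[0].astype(float)
    corrected = []
    for f in frames:
        lowpass += (f - lowpass) / M          # recursive temporal low-pass
        corrected.append(f - lowpass + lowpass.mean())
    return np.array(corrected)

# Synthetic sequence: flat sky scene plus a fixed column "ripple" pattern
rng = np.random.default_rng(2)
ripple = np.tile(np.sin(np.arange(32)), (32, 1))  # fixed-pattern ripple offset
frames = [100.0 + ripple + rng.normal(0.0, 0.1, (32, 32)) for _ in range(200)]
corrected = temporal_highpass_nuc(frames, M=16)
residual = corrected[-1].std()   # fixed pattern largely removed
raw = frames[-1].std()           # dominated by the ripple pattern
```

The recursive mean converges with a time constant of roughly M frames, which is why plain temporal high-pass filtering can smear moving scene detail, the failure mode the paper's fuzzy threshold is designed to avoid.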
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
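One of the techniques listed above, error-correcting output codes, decomposes a multiclass problem into several binary problems and decodes by nearest codeword. A minimal sketch with ridge least-squares binary scorers on synthetic numeric data follows (the paper's text features and classifiers are not reproduced; the code matrix and data are illustrative).

```python
import numpy as np

def ridge_binary(X, y, lam=1e-2):
    """Least-squares (ridge) binary scorer; y in {-1, +1}. Returns weights."""
    Xa = np.hstack([X, np.ones((len(X), 1))])                  # bias column
    return np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ y)

def ecoc_fit_predict(X, y, Xtest, code):
    """Train one binary learner per code-matrix column; decode by nearest codeword."""
    ws = [ridge_binary(X, np.where(code[y, j] == 1, 1.0, -1.0))
          for j in range(code.shape[1])]
    Xa = np.hstack([Xtest, np.ones((len(Xtest), 1))])
    outputs = np.column_stack([np.sign(Xa @ w) for w in ws])   # hard bit decisions
    targets = np.where(code == 1, 1.0, -1.0)                   # codewords in +-1 form
    dists = ((outputs[:, None, :] - targets[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)                                # nearest codeword

# Three classes encoded with 3-bit codewords (every pair differs in 2 bits)
code = np.array([[1, 1, 0],
                 [1, 0, 1],
                 [0, 1, 1]])
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 0.4, (25, 2)) for c in centers])
y = np.repeat(np.arange(3), 25)
pred = ecoc_fit_predict(X, y, X, code)
acc = float(np.mean(pred == y))
```

Longer codewords with larger minimum Hamming distance let the decoder absorb individual binary-learner mistakes, which is the source of the additive gain reported above.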
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
Theoretical Interpretation of the Fluorescence Spectra of Toluene and P- Cresol
1994-07-01
[Garbled report-documentation page; recoverable table-of-contents entries: computed and experimental ground-state frequencies of toluene and p-cresol; correction factors for computed ground-state vibrational frequencies; computed and corrected excited-state frequencies of toluene.]
Chance-corrected classification for use in discriminant analysis: Ecological applications
Titus, K.; Mosher, J.A.; Williams, B.K.
1984-01-01
A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
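The chance-corrected statistic described here, kappa, compares the observed agreement on the diagonal of the classification table with the agreement expected by chance from the table's marginals. A minimal sketch on a hypothetical two-group discriminant table:

```python
import numpy as np

def cohen_kappa(confusion):
    """Chance-corrected agreement (kappa) from a square classification table."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                                       # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical two-group table: rows = true group, cols = predicted group
table = [[45, 5],
         [10, 40]]
kappa = cohen_kappa(table)   # po = 0.85, pe = 0.50, so kappa = 0.70
```

Unlike raw percent correct, kappa is 0 for a classifier no better than chance, which is why it is preferred when group sample sizes are dissimilar.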
Superiority of artificial neural networks for a genetic classification procedure.
Sant'Anna, I C; Tomaz, R S; Silva, G N; Nascimento, M; Bhering, L L; Cruz, C D
2015-08-19
The correct classification of individuals is extremely important for the preservation of genetic variability and for maximization of yield in breeding programs using phenotypic traits and genetic markers. The Fisher and Anderson discriminant functions are commonly used multivariate statistical techniques for these situations, which allow for the allocation of an initially unknown individual to predefined groups. However, for higher levels of similarity, such as those found in backcrossed populations, these methods have proven to be inefficient. Recently, much research has been devoted to developing a new paradigm of computing known as artificial neural networks (ANNs), which can be used to solve many statistical problems, including classification problems. The aim of this study was to evaluate the feasibility of ANNs as an evaluation technique of genetic diversity by comparing their performance with that of traditional methods. The discriminant functions were equally ineffective in discriminating the populations, with error rates of 23-82%, thereby preventing the correct discrimination of individuals between populations. The ANN was effective in classifying populations with low and high differentiation, such as those derived from a genetic design established from backcrosses, even in cases of low differentiation of the data sets. The ANN appears to be a promising technique to solve classification problems, since the number of individuals classified incorrectly by the ANN was always lower than that of the discriminant functions. We envisage the potential relevant application of this improved procedure in the genomic classification of markers to distinguish between breeds and accessions.
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a similarity-dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to identify the class with which the classifier will confuse misclassified data points. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method resulted in an accuracy of 91.44% correctly classified vessel pixels as either artery or vein. The accuracy of correctly classified major vessel segments was 96.42%.
Individual Patient Diagnosis of AD and FTD via High-Dimensional Pattern Classification of MRI
Davatzikos, C.; Resnick, S. M.; Wu, X.; Parmpi, P.; Clark, C. M.
2008-01-01
The purpose of this study is to determine the diagnostic accuracy of MRI-based high-dimensional pattern classification in differentiating between patients with Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), and healthy controls, on an individual patient basis. MRI scans of 37 patients with AD and 37 age-matched cognitively normal elderly individuals, as well as 12 patients with FTD and 12 age-matched cognitively normal elderly individuals, were analyzed using voxel-based analysis and high-dimensional pattern classification. Diagnostic sensitivity and specificity of spatial patterns of regional brain atrophy found to be characteristic of AD and FTD were determined via cross-validation and via split-sample methods. Complex spatial patterns of relatively reduced brain volumes were identified, including temporal, orbitofrontal, parietal and cingulate regions, which were predominantly characteristic of either AD or FTD. These patterns provided 100% diagnostic accuracy, when used to separate AD or FTD from healthy controls. The ability to correctly distinguish AD from FTD averaged 84.3%. All estimates of diagnostic accuracy were determined via cross-validation. In conclusion, AD- and FTD-specific patterns of brain atrophy can be detected with high accuracy using high-dimensional pattern classification of MRI scans obtained in a typical clinical setting. PMID:18474436
Tapper, Elliot B; Hunink, M G Myriam; Afdhal, Nezam H; Lai, Michelle; Sengupta, Neil
2016-01-01
The complications of Nonalcoholic Fatty Liver Disease (NAFLD) are dependent on the presence of advanced fibrosis. Given the high prevalence of NAFLD in the US, the optimal evaluation of NAFLD likely involves triage by a primary care physician (PCP), with advanced disease managed by gastroenterologists. We compared the cost-effectiveness of fibrosis risk-assessment strategies in a cohort of 10,000 simulated American patients with NAFLD, performed in either PCP or referral clinics, using a decision-analytical microsimulation state-transition model. The strategies included use of vibration-controlled transient elastography (VCTE), the NAFLD fibrosis score (NFS), combination testing with NFS and VCTE, and liver biopsy (usual care by a specialist only). NFS and VCTE performance was obtained from a prospective cohort of 164 patients with NAFLD. Outcomes included cost per quality-adjusted life year (QALY) and correct classification of fibrosis. Risk-stratification by the PCP using the NFS alone costs $5,985 per QALY, while usual care costs $7,229/QALY. In the microsimulation, at a willingness-to-pay threshold of $100,000, the NFS alone in the PCP clinic was the most cost-effective strategy in 94.2% of samples, followed by combination NFS/VCTE in the PCP clinic (5.6%) and usual care (0.2%). The NFS-based strategies yield the best biopsy-correct classification ratios (3.5), while the NFS/VCTE and usual care strategies yield more correct classifications of advanced fibrosis at the cost of 3 and 37 additional biopsies per classification, respectively. Risk-stratification of patients with NAFLD in the primary care clinic is a cost-effective strategy that should be formally explored in clinical practice.
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Yamamoto, Tomonori
2017-10-01
Bathymetry in shallow water, especially water shallower than 15 m, is important for environmental monitoring and national defense. Because the depth of shallow water changes with sediment deposition and ocean waves, periodic monitoring of the shore area is needed. Satellite imagery is well suited to wide-area, repeated monitoring at sea. Sea-bottom terrain models based on remote sensing data have been developed; these methods rest on the radiative transfer model of solar irradiance, which is affected by the atmosphere, the water column, and the sea bottom. We applied this general depth-extraction method to WorldView-2 satellite imagery, which has very fine spatial resolution (50 cm/pixel) and eight bands spanning visible to near-infrared wavelengths. From high-spatial-resolution satellite images it is possible to derive detailed terrain models of coral reefs and rocky areas, which offer important information for amphibious landing. In addition, the WorldView-2 sensor has a band near ultraviolet wavelengths that is transmitted through water. On the other hand, a previous study showed that the depth-estimation error from satellite imagery was related to sea-bottom materials such as sand, coral reef, sea algae, and rocks. Therefore, in this study, we focused on sea-bottom materials and tried to improve the depth-estimation accuracy. First, we classified the sea-bottom materials by the SVM method, using depth data acquired by multi-beam sonar as supervised data. Then correction values in the depth-estimation equation were calculated by applying the classification results. As a result, the classification accuracy of sea-bottom materials was 93%, and the depth-estimation error after correction by the classification result was within 1.2 m.
NASA Astrophysics Data System (ADS)
Raziff, Abdul Rafiez Abdul; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran
2017-10-01
Gait recognition is widely used in many applications. In person identification from gait in particular, the number of classes (people) is large and may exceed 20. With so many classes, direct single-classifier mapping may not be suitable, as most existing algorithms are designed for binary classification. Furthermore, having many classes in a dataset increases the likelihood of a high degree of class-boundary overlap. This paper discusses the application of multiclass classifier mappings such as one-vs-all (OvA), one-vs-one (OvO) and random correction code (RCC) to handheld smartphone gait signals for person identification. The results are then compared with a single J48 decision tree as a benchmark. From the results, multiclass classification mapping partially improved the overall accuracy, especially for OvO and for RCC with a width factor greater than 4. For OvA, the accuracy was worse than that of a single J48 due to the high number of classes.
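A one-vs-all mapping of the kind compared above trains one binary scorer per class and predicts the class with the highest score. A minimal sketch with ridge least-squares scorers and synthetic "gait features" (the paper's J48 base learner, its data, and the feature set are not reproduced):

```python
import numpy as np

def ridge_binary(X, y, lam=1e-2):
    """Least-squares (ridge) binary scorer; y in {-1, +1}. Returns weights."""
    Xa = np.hstack([X, np.ones((len(X), 1))])        # bias column
    return np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ y)

def one_vs_all(X, y, Xtest):
    """Train one binary scorer per class; predict the highest-scoring class."""
    classes = np.unique(y)
    Xa = np.hstack([Xtest, np.ones((len(Xtest), 1))])
    scores = np.column_stack(
        [Xa @ ridge_binary(X, np.where(y == c, 1.0, -1.0)) for c in classes])
    return classes[scores.argmax(axis=1)]

# Hypothetical 2-D gait-feature clusters for four "people" (classes)
rng = np.random.default_rng(4)
centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
X = np.vstack([rng.normal(c, 0.5, (15, 2)) for c in centers])
y = np.repeat(np.arange(4), 15)
pred = one_vs_all(X, y, X)
acc = float(np.mean(pred == y))
```

One-vs-one would instead train a scorer per class pair and predict by majority vote; the class imbalance in each OvA subproblem (one class against all the rest) is one reason OvA can degrade as the number of classes grows, consistent with the result reported above.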
High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods
NASA Astrophysics Data System (ADS)
Yoon, Yeo-Sun; Amin, Moeness G.
2008-04-01
Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data is collected at the test facility in the Radar Imaging Laboratory, Villanova University.
Sousa, Mara E B C; Dias, Luís G; Veloso, Ana C A; Estevinho, Letícia; Peres, António M; Machado, Adélio A S C
2014-10-01
Colour and floral origin are key parameters that may influence the honey market. Light monofloral honeys are in greater demand from consumers, mainly due to their flavour, and are more valuable for producers due to their higher price compared with darker honeys. The latter usually have a high antioxidant content that increases their health potential. This work showed that it is possible to correctly classify monofloral honeys with high variability in floral origin using a potentiometric electronic tongue, after a preliminary selection of honeys according to their colour: white, amber and dark. The results showed that the device had very satisfactory sensitivity towards floral origin (Castanea sp., Echium sp., Erica sp., Lavandula sp., Prunus sp. and Rubus sp.), allowing a leave-one-out cross-validation correct classification of 100%. Therefore, the E-tongue shows potential to be used at the analytical laboratory level for classifying honey samples according to market and quality parameters, as a practical tool for ensuring monofloral honey authenticity. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas at different water-depth levels. The water-depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method and the RWDI of Stumpf's method. The research was conducted at Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for the whole coastal area regardless of depth variation. The second is the proposed approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The classification maps of the two areas were merged into a single benthic habitat map, and a confusion matrix was applied to evaluate the mapping accuracy of the final map.
The results show that the proposed mapping approach can map all benthic objects across all depth ranges and gives better accuracy than a classification map produced using the DII alone.
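Lyzenga's depth-invariant index used in this approach combines the log radiances of a band pair so that the depth term cancels; the attenuation ratio ki/kj is estimated from the variances and covariance of the log radiances. A minimal sketch on synthetic radiances following the exponential attenuation model (deep-water radiance assumed already subtracted):

```python
import numpy as np

def depth_invariant_index(Li, Lj):
    """Lyzenga depth-invariant index for one band pair.

    Assumes deep-water radiance has already been subtracted, so each band
    follows L = A * exp(-2 k z). The attenuation ratio ki/kj is estimated
    from the log-radiance variances/covariance over the scene (ideally a
    homogeneous bottom seen at varying depth).
    """
    xi, xj = np.log(Li), np.log(Lj)
    cov = np.cov(xi, xj)
    a = (cov[0, 0] - cov[1, 1]) / (2.0 * cov[0, 1])
    ratio = a + np.sqrt(a * a + 1.0)      # ki/kj (Lyzenga 1981)
    return xi - ratio * xj

# Synthetic check: one bottom type seen at varying depth z gives a constant index
rng = np.random.default_rng(5)
z = rng.uniform(1.0, 10.0, 500)           # depths in metres
ki, kj = 0.30, 0.15                       # band attenuation coefficients
Li = 50.0 * np.exp(-2.0 * ki * z)         # band i radiance
Lj = 80.0 * np.exp(-2.0 * kj * z)         # band j radiance
dii = depth_invariant_index(Li, Lj)
spread = float(dii.std())                 # ~0: the depth dependence cancels
```

Because the depth term cancels only when radiances stay well above the deep-water (and sensor noise) floor, dark targets such as seagrass break the assumption, which motivates the depth-stratified classification proposed above.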
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; ...
2017-04-03
Here, the feasibility of visible and near-infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored. Three types of Miscanthus plants, namely M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN) methods. Principal component analysis (PCA) gave a rough classification with overlapping samples, while the Line_LSSVR, RBF_LSSVR and RBF_NN models gave almost the same calibration and validation results. Because Line_LSSVR is faster than RBF_LSSVR and RBF_NN, we selected the Line_LSSVR model as a representative. In our study, the Line_LSSVR model showed higher accuracy than the LDA and PLS models. Total correct classification rates of 87.79% and 96.51% were observed for the LDA and PLS models in the testing set, respectively, while Line_LSSVR showed a total correct classification rate of 99.42%. In the testing set, the Line_LSSVR model showed correct classification rates of 100%, 100% and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively, assigning 99.42% of samples to the right groups, with the exception of one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang
Here, the feasibility of visible and near infrared (NIR) spectroscopy as tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. fIoridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using line discriminated analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost same calibration and validationmore » results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rate of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. fIoridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. fIoridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.« less
Application of visible and near-infrared spectroscopy to classification of Miscanthus species.
Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J; Peng, Junhua
2017-01-01
The feasibility of visible and near-infrared (NIR) spectroscopy as a tool for classifying Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely M. sinensis, M. sacchariflorus, and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectral data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) kernels, and neural networks (NN). Principal component analysis (PCA) produced only a rough classification with overlapping samples, while the Lin_LSSVR, RBF_LSSVR, and RBF_NN models produced almost the same calibration and validation results. Because Lin_LSSVR is faster than RBF_LSSVR and RBF_NN, we selected the Lin_LSSVR model as a representative. In our study, the Lin_LSSVR model showed higher accuracy than the LDA and PLS models. In the testing set, the LDA and PLS models achieved total correct classification rates of 87.79% and 96.51%, respectively, while Lin_LSSVR reached 99.42%. Meanwhile, the Lin_LSSVR model showed correct classification rates of 100%, 100%, and 96.77% for M. sinensis, M. sacchariflorus, and M. floridulus, respectively. The Lin_LSSVR model assigned 99.42% of samples to the correct groups, misclassifying only one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
Application of visible and near-infrared spectroscopy to classification of Miscanthus species
Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J.; Peng, Junhua
2017-01-01
The feasibility of visible and near-infrared (NIR) spectroscopy as a tool for classifying Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely M. sinensis, M. sacchariflorus, and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectral data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) kernels, and neural networks (NN). Principal component analysis (PCA) produced only a rough classification with overlapping samples, while the Lin_LSSVR, RBF_LSSVR, and RBF_NN models produced almost the same calibration and validation results. Because Lin_LSSVR is faster than RBF_LSSVR and RBF_NN, we selected the Lin_LSSVR model as a representative. In our study, the Lin_LSSVR model showed higher accuracy than the LDA and PLS models. In the testing set, the LDA and PLS models achieved total correct classification rates of 87.79% and 96.51%, respectively, while Lin_LSSVR reached 99.42%. Meanwhile, the Lin_LSSVR model showed correct classification rates of 100%, 100%, and 96.77% for M. sinensis, M. sacchariflorus, and M. floridulus, respectively. The Lin_LSSVR model assigned 99.42% of samples to the correct groups, misclassifying only one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species. PMID:28369059
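Several of the models compared above are linear classifiers. A minimal sketch of that comparison follows; scikit-learn does not provide LSSVR, so a linear-kernel SVM stands in for Lin_LSSVR and is compared with LDA on synthetic "spectra". All data and parameters here are illustrative assumptions, not the study's models.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_bands = 60, 50
X_parts, y = [], []
for cls in range(3):  # three Miscanthus-like classes
    centre = rng.normal(cls, 0.5, n_bands)  # class-specific mean "spectrum"
    X_parts.append(centre + rng.normal(0, 0.3, (n_per_class, n_bands)))
    y += [cls] * n_per_class
X, y = np.vstack(X_parts), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
results = {}
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("linear SVM", SVC(kernel="linear"))]:
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
print(results)
```

With well-separated synthetic classes both models classify nearly all test samples correctly; the interesting differences in the study appear only on real overlapping spectra.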
Classification bias in commercial business lists for retail food stores in the U.S.
Han, Euna; Powell, Lisa M; Zenk, Shannon N; Rimkus, Leah; Ohri-Vachaspati, Punam; Chaloupka, Frank J
2012-04-18
Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.
Classification bias in commercial business lists for retail food stores in the U.S.
2012-01-01
Background Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. Methods We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. Results D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Conclusion Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition. PMID:22512874
The Immune System as a Model for Pattern Recognition and Classification
Carter, Jerome H.
2000-01-01
Objective: To design a pattern recognition engine based on concepts derived from mammalian immune systems. Design: A supervised learning system (Immunos-81) was created using software abstractions of T cells, B cells, antibodies, and their interactions. Artificial T cells control the creation of B-cell populations (clones), which compete for recognition of “unknowns.” The B-cell clone with the “simple highest avidity” (SHA) or “relative highest avidity” (RHA) is considered to have successfully classified the unknown. Measurement: Two standard machine learning data sets, consisting of eight nominal and six continuous variables, were used to test the recognition capabilities of Immunos-81. The first set (Cleveland), consisting of 303 cases of patients with suspected coronary artery disease, was used to perform a ten-way cross-validation. After completing the validation runs, the Cleveland data set was used as a training set prior to presentation of the second data set, consisting of 200 unknown cases. Results: For cross-validation runs, correct recognition using SHA ranged from a high of 96 percent to a low of 63.2 percent. The average correct classification for all runs was 83.2 percent. Using the RHA metric, 11.2 percent were labeled “too close to determine” and no further attempt was made to classify them. Of the remaining cases, 85.5 percent were correctly classified. When the second data set was presented, correct classification occurred in 73.5 percent of cases when SHA was used and in 80.3 percent of cases when RHA was used. Conclusions: The immune system offers a viable paradigm for the design of pattern recognition systems. Additional research is required to fully exploit the nuances of immune computation. PMID:10641961
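The core classification rule above, assigning an unknown to the B-cell clone with the highest avidity, can be illustrated with a toy sketch. This is not Immunos-81 itself; the class name, the data, and the avidity measure (negative mean distance to a clone's stored exemplars) are all illustrative assumptions.

```python
import numpy as np

class AviditySketch:
    """Toy 'highest avidity' classifier: one clone of exemplars per class."""

    def fit(self, X, y):
        self.clones = {c: X[y == c] for c in np.unique(y)}
        return self

    def predict(self, X):
        def avidity(x, cells):
            # closer exemplars -> higher avidity
            return -np.mean(np.linalg.norm(cells - x, axis=1))
        return np.array([max(self.clones, key=lambda c: avidity(x, self.clones[c]))
                         for x in X])

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
pred = AviditySketch().fit(X, y).predict(np.array([[0.1, 0.0], [0.9, 1.0]]))
print(pred)
```

The real system's "relative highest avidity" variant additionally abstains when the top two avidities are too close, which is easy to add by comparing the best and second-best scores.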
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and the correct classification probabilities when the classification probabilities vary according to a normal distribution on the logit scale or according to a beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
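The problem the paper addresses can be shown with a short simulation: when each sampling unit's probability of correct classification is a logit-normal random variable, the raw observed category frequencies are biased toward uniformity relative to the true multinomial probabilities. The probabilities and distribution parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.5, 0.3, 0.2])
n = 100_000

true_cat = rng.choice(3, size=n, p=true_p)
# unit-level P(correct classification), logit-normal across units
theta = 1.0 / (1.0 + np.exp(-rng.normal(1.5, 1.0, size=n)))
correct = rng.random(n) < theta

obs_cat = true_cat.copy()
wrong = ~correct
# misclassified units are assigned uniformly to one of the other two categories
obs_cat[wrong] = (true_cat[wrong] + rng.integers(1, 3, size=wrong.sum())) % 3

obs_freq = np.bincount(obs_cat, minlength=3) / n
print(np.round(obs_freq, 3))  # pulled toward uniform relative to true_p
```

The dominant category is observed less often than its true probability and the rare category more often, which is exactly the bias the elaborated Royle-Link model corrects.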
Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2016-04-01
The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter determined by land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach for acquiring land use change information. However, topographic effects are commonly present in remotely sensed imagery and degrade land use classification. In this research, summer and winter Landsat-5 TM scenes from 2008 were selected to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized into four groups (forest, grassland, agriculture, and river) using K-means classification. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from the remotely sensed imagery decreased from 48.69 to 45.35, while the average CN estimated from the national LULC map was 44.11. Therefore, applying a topographic correction to normalize topographic effects in satellite remote sensing data is recommended before estimating the CN.
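The SCS-CN runoff computation the abstract relies on follows the standard formulation (depths in inches): S = 1000/CN - 10, Ia = 0.2 S, and Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0. A minimal sketch, using the abstract's before/after CN values to show how the topographic correction changes estimated runoff:

```python
def scs_runoff(p_in: float, cn: float) -> float:
    """Direct runoff depth (inches) for rainfall depth p_in and curve number cn."""
    s = 1000.0 / cn - 10.0   # potential maximum retention after runoff begins
    ia = 0.2 * s             # initial abstraction
    return (p_in - ia) ** 2 / (p_in - ia + s) if p_in > ia else 0.0

# A lower average CN (48.69 before vs 45.35 after topographic correction)
# yields less estimated runoff for the same 4-inch storm:
print(round(scs_runoff(4.0, 48.69), 3), round(scs_runoff(4.0, 45.35), 3))
```

So the roughly 3-point CN reduction reported above translates directly into a lower direct-runoff estimate, which is why normalizing the topographic effect before estimating CN matters.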
The effect of finite field size on classification and atmospheric correction
NASA Technical Reports Server (NTRS)
Kaufman, Y. J.; Fraser, R. S.
1981-01-01
The atmospheric effect on the upward radiance of sunlight scattered from the Earth-atmosphere system is strongly influenced by the contrasts between fields and by their sizes. For a given atmospheric turbidity, the atmospheric effect on classification of surface features is much stronger for nonuniform surfaces than for uniform surfaces. Therefore, the classification accuracy of agricultural fields and urban areas depends not only on the optical characteristics of the atmosphere but also on the sizes of the fields. Atmospheric corrections that do not account for the nonuniformity of the surface in some cases have only a slight effect on the classification accuracy; in other cases the classification accuracy decreases. The radiances above finite fields were computed to simulate radiances measured by a satellite. A simulation case including 11 agricultural fields and four natural fields (water, soil, savanna, and forest) was used to test the effect of the field size, the background reflectance, and the optical thickness of the atmosphere on classification accuracy. It is concluded that new atmospheric correction methods, which take into account the finite size of the fields, have to be developed to significantly improve the classification accuracy.
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John
2017-02-01
The study aimed to determine whether computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs of 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective for highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
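Choosing the binary cutoff with the Youden index, as described above, means picking the score threshold that maximizes J = sensitivity + specificity - 1 (equivalently, TPR - FPR). A minimal sketch with made-up scores:

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing J = TPR - FPR over observed scores."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = np.mean(pred[labels == 1])   # sensitivity
        fpr = np.mean(pred[labels == 0])   # 1 - specificity
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t, best_j

scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0,   0,   0,    1,   0,   1,   1,   1])
t, j = youden_cutoff(scores, labels)
print(t, j)  # threshold 0.4: TPR = 1.0, FPR = 0.25, J = 0.75
```

In the study's setting the scores would be the network's frontal-vs-lateral outputs on the validation set, and the chosen threshold is then frozen and applied to the test sets.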
ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.
Rosenfield, George H.; Fitzpatrick-Lins, Katherine
1984-01-01
Summary form only given. A classification error matrix typically contains the tabulated results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability level. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
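The map-wide coefficient of agreement discussed above is conventionally Cohen's kappa, computed directly from the error matrix as (po - pe) / (1 - pe), where po is the observed agreement (trace over total) and pe is the agreement expected by chance from the marginals. A sketch with an illustrative matrix (not the study's data):

```python
import numpy as np

def kappa(matrix):
    """Cohen's kappa for a square error (contingency) matrix."""
    m = np.asarray(matrix, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                 # observed agreement
    pe = (m.sum(axis=0) @ m.sum(axis=1)) / n**2  # chance agreement from marginals
    return (po - pe) / (1.0 - pe)

confusion = [[45,  4,  1],
             [ 6, 40,  4],
             [ 2,  3, 45]]
print(round(kappa(confusion), 3))
```

Unlike total percent correct, kappa uses the off-diagonal structure through the marginals, which is exactly the information the abstract notes is usually neglected.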
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...
Delavarian, Mona; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Dibajnia, Parvin
2011-07-12
Automatic classification of different behavioral disorders with many similarities (e.g., in symptoms) will help psychiatrists concentrate on the correct disorder and its treatment as soon as possible, avoid wasting time on diagnosis, and increase diagnostic accuracy. In this study, we tried to differentiate and classify (diagnose) 306 children with many similar symptoms and different behavioral disorders, such as ADHD, depression, anxiety, comorbid depression and anxiety, and conduct disorder, with high accuracy. Classification was based on the symptoms and their severity. After examining 16 different classifiers available in PRTools, we propose the nearest mean classifier as the most accurate, with 96.92% accuracy in this research. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
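The nearest mean classifier selected above assigns each case to the class whose mean feature vector is closest. scikit-learn's `NearestCentroid` implements the same rule; the symptom-severity data below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
# three hypothetical disorder classes, 8 symptom-severity features each
centres = rng.uniform(0, 3, size=(3, 8))
X = np.vstack([c + rng.normal(0, 0.4, (40, 8)) for c in centres])
y = np.repeat([0, 1, 2], 40)

clf = NearestCentroid().fit(X, y)   # stores one mean vector per class
print(f"training accuracy: {clf.score(X, y):.2%}")
```

Its appeal in a clinical screening setting is transparency: each decision reduces to "which disorder's typical symptom profile is this child closest to?"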
A software tool for automatic classification and segmentation of 2D/3D medical images
NASA Astrophysics Data System (ADS)
Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur
2013-02-01
Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of its metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of partial volume effect in PET images, acquired with PET/MR scanners. This article presents briefly a MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, P.; Beaudet, P.
1980-01-01
The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimal 0.79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.
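The comparison reported above, a sequential decision tree versus a full-dimensional Bayes classifier, can be sketched on synthetic Gaussian class data. This is a stand-in, not the paper's canonical-transform design: quadratic discriminant analysis plays the role of the Bayes classifier (it is Bayes-optimal for Gaussian classes), and the class means are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
means = [[0, 0, 0, 0], [1.5, 0, 1, 0], [0, 1.5, 0, 1]]  # three spectral classes
X = np.vstack([rng.normal(m, 1.0, (200, 4)) for m in means])
y = np.repeat([0, 1, 2], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

tree_acc = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    X_tr, y_tr).score(X_te, y_te)
bayes_acc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
print(f"tree: {tree_acc:.2f}  Bayes (QDA): {bayes_acc:.2f}")
```

As in the paper, the tree typically gives up a few points of accuracy relative to the full Bayes rule in exchange for cheap, sequential per-node decisions.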
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2,000 lines/s. This opens up the possibility to maintain high production speed and still measure with good resolution.
Technical and investigative support for high density digital satellite recording systems
NASA Technical Reports Server (NTRS)
Schultz, R. A.
1982-01-01
Dropout and defect classification are discussed, with emphasis on how surface defects responsible for electronic dropouts were identified, what effect various defects could have on the application of tapes to satellite tape recorders (STR), and what types of defects might be field-correctable after production of the tape but prior to installation in the STR.
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-01-01
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects imaging quality, especially for small target detection. It is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has obvious advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320
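The temporal high-pass idea behind the algorithm can be sketched in a few lines: each pixel's slowly varying temporal mean (which contains the fixed ripple pattern) is tracked recursively and subtracted, leaving the scene detail. The paper's contribution, gating the update with a fuzzy scene classification so details are not absorbed into the mean, is omitted here; the time constant and data are illustrative assumptions.

```python
import numpy as np

def temporal_highpass_nuc(frames, time_const=16):
    """frames: (T, H, W) sequence; returns high-pass-corrected frames."""
    lowpass = frames[0].astype(float)
    out = np.empty_like(frames, dtype=float)
    for t, f in enumerate(frames):
        lowpass += (f - lowpass) / time_const  # recursive per-pixel temporal mean
        out[t] = f - lowpass                   # high-pass = frame minus mean
    return out

rng = np.random.default_rng(0)
fixed_pattern = rng.normal(0, 5, (32, 32))                # static "ripple" offsets
frames = fixed_pattern + rng.normal(0, 1, (200, 32, 32))  # noisy sky scene
corrected = temporal_highpass_nuc(frames)
print(corrected[-1].std() < frames[-1].std())  # pattern largely removed
```

On a static sky scene this converges quickly; on real scenes the unguarded version smears moving detail, which is the failure mode the fuzzy classification threshold is designed to prevent.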
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.L.
A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (attitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeris corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
[Therapeutic strategy for different types of epicanthus].
Gaofeng, Li; Jun, Tan; Zihan, Wu; Wei, Ding; Huawei, Ouyang; Fan, Zhang; Mingcan, Luo
2015-11-01
To explore a reasonable therapeutic strategy for different types of epicanthus. Patients with epicanthus were classified according to shape, extent, and inner canthal distance, and treated appropriately with different methods. A modified asymmetric Z-plasty with a two-curve method was used for lower eyelid type epicanthus, inner canthus type epicanthus, and severe upper eyelid type epicanthus. Moderate upper epicanthus underwent the '-' shape method. Mild upper epicanthus in two conditions, namely those undergoing nasal augmentation or double eyelid formation with normal inner canthal distance, needed no correction surgery. The other mild epicanthus cases underwent the '-' shape method. A total of 66 cases underwent classification and the appropriate treatment. All wounds healed well. During the 3- to 12-month follow-up period, all epicanthus were corrected completely, with natural contours and inconspicuous scars. All patients were satisfied with the results. Classification of epicanthus based on shape, extent, and inner canthal distance, and correction with appropriate methods, is a reasonable therapeutic strategy.
Delineation of marsh types of the Texas coast from Corpus Christi Bay to the Sabine River in 2010
Enwright, Nicholas M.; Hartley, Stephen B.; Brasher, Michael G.; Visser, Jenneke M.; Mitchell, Michael K.; Ballard, Bart M.; Parr, Mark W.; Couvillion, Brady R.; Wilson, Barry C.
2014-01-01
Coastal zone managers and researchers often require detailed information regarding emergent marsh vegetation types for modeling habitat capacities and needs of marsh-reliant wildlife (such as waterfowl and alligator). Detailed information on the extent and distribution of marsh vegetation zones throughout the Texas coast has been historically unavailable. In response, the U.S. Geological Survey, in cooperation and collaboration with the U.S. Fish and Wildlife Service via the Gulf Coast Joint Venture, Texas A&M University-Kingsville, the University of Louisiana-Lafayette, and Ducks Unlimited, Inc., has produced a classification of marsh vegetation types along the middle and upper Texas coast from Corpus Christi Bay to the Sabine River. This study incorporates approximately 1,000 ground reference locations collected via helicopter surveys in coastal marsh areas and about 2,000 supplemental locations from fresh marsh, water, and “other” (that is, nonmarsh) areas. About two-thirds of these data were used for training, and about one-third were used for assessing accuracy. Decision-tree analyses using Rulequest See5 were used to classify emergent marsh vegetation types by using these data, multitemporal satellite-based multispectral imagery from 2009 to 2011, a bare-earth digital elevation model (DEM) based on airborne light detection and ranging (lidar), alternative contemporary land cover classifications, and other spatially explicit variables believed to be important for delineating the extent and distribution of marsh vegetation communities. Image objects were generated from segmentation of high-resolution airborne imagery acquired in 2010 and were used to refine the classification. The classification is dated 2010 because the year is both the midpoint of the multitemporal satellite-based imagery (2009–11) classified and the date of the high-resolution airborne imagery that was used to develop image objects. 
Overall accuracy corrected for bias (accuracy estimate incorporates true marginal proportions) was 91 percent (95 percent confidence interval [CI]: 89.2–92.8), with a kappa statistic of 0.79 (95 percent CI: 0.77–0.81). The classification performed best for saline marsh (user’s accuracy 81.5 percent; producer’s accuracy corrected for bias 62.9 percent) but showed a lesser ability to discriminate intermediate marsh (user’s accuracy 47.7 percent; producer’s accuracy corrected for bias 49.5 percent). Because of confusion in intermediate and brackish marsh classes, an alternative classification containing only three marsh types was created in which intermediate and brackish marshes were combined into a single class. Image objects were reattributed by using this alternative three-marsh-type classification. Overall accuracy, corrected for bias, of this more general classification was 92.4 percent (95 percent CI: 90.7–94.2), and the kappa statistic was 0.83 (95 percent CI: 0.81–0.85). Mean user’s accuracy for marshes within the four-marsh-type and three-marsh-type classifications was 65.4 percent and 75.6 percent, respectively, whereas mean producer’s accuracy was 56.7 percent and 65.1 percent, respectively. This study provides a more objective and repeatable method for classifying marsh types of the middle and upper Texas coast at an extent and greater level of detail than previously available for the study area. The seamless classification produced through this work is now available to help State agencies (such as the Texas Parks and Wildlife Department) and landscape-scale conservation partnerships (such as the Gulf Coast Prairie Landscape Conservation Cooperative and the Gulf Coast Joint Venture) to develop and (or) refine conservation plans targeting priority natural resources. Moreover, these data may improve projections of landscape change and serve as a baseline for monitoring future changes resulting from chronic and episodic stressors.
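The user's and producer's accuracies reported above come straight from the error matrix: with mapped classes in rows and reference classes in columns, user's accuracy is the diagonal divided by the row totals (commission error) and producer's accuracy is the diagonal divided by the column totals (omission error). A sketch with an illustrative matrix, not the study's data:

```python
import numpy as np

def users_producers(matrix):
    """User's and producer's accuracy per class.

    Rows = mapped (classified) class, columns = reference class.
    """
    m = np.asarray(matrix, dtype=float)
    users = np.diag(m) / m.sum(axis=1)      # row-wise: 1 - commission error
    producers = np.diag(m) / m.sum(axis=0)  # column-wise: 1 - omission error
    return users, producers

m = [[50, 10,  5],
     [ 8, 40, 12],
     [ 2,  5, 38]]
u, p = users_producers(m)
print(np.round(u, 3), np.round(p, 3))
```

The gap the study reports between user's and producer's accuracy for the marsh classes (e.g., 81.5% vs 62.9% for saline marsh) is exactly the difference between these two normalizations of the same matrix.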
Classification of brain tumours using short echo time 1H MR spectra
NASA Astrophysics Data System (ADS)
Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.
2004-09-01
The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas (n = 87), meningiomas (n = 57), metastases (n = 39), and astrocytomas grade II (n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance was tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water-normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
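The evaluation protocol above (repeated stratified random splits, AUC averaged over splits) can be sketched on synthetic data. This is an illustrative stand-in only: a nearest-class-mean scorer replaces LDA/LS-SVM, and the "spectra" are random vectors, not MRS signals.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney): probability that a positive
    case receives a higher score than a negative case."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Synthetic two-class "spectra": class 1 shifted upward in every channel
X = np.vstack([rng.normal(0.0, 1.0, (60, 5)), rng.normal(1.5, 1.0, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

aucs = []
for _ in range(100):                      # 100 stratified random splittings
    idx0, idx1 = rng.permutation(60), rng.permutation(60) + 60
    train = np.concatenate([idx0[:40], idx1[:40]])
    test = np.concatenate([idx0[40:], idx1[40:]])
    # Nearest-mean scoring: difference of distances to the class means
    m0 = X[train][y[train] == 0].mean(axis=0)
    m1 = X[train][y[train] == 1].mean(axis=0)
    s = np.linalg.norm(X[test] - m0, axis=1) - np.linalg.norm(X[test] - m1, axis=1)
    aucs.append(auc(s, y[test]))
mean_auc = sum(aucs) / len(aucs)
```

Averaging over many stratified splits, as the study does, reduces the variance of the performance estimate relative to a single train/test split.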
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred. 
In this study, we will compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to those of Bayesian techniques based on classical neural networks. [1] Winter, M.E. "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference on Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] Keshava, N. "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter. "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR," in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key. "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding," in Proceedings of the 2007 AMOS Conference, Maui, HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz. "Autonomous single-pass endmember approximation using lattice auto-associative memories," Neurocomputing (Elsevier), accepted (June 2008).
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
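The affine-invariance claim above can be checked numerically: applying a non-singular affine transform x → Ax + b to both training and test spectra leaves Gaussian maximum likelihood class assignments unchanged, because the Mahalanobis term is invariant and the log-determinant shifts by the same amount for every class. A small sketch on synthetic data (not AVIRIS):

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_labels(Xtr, ytr, Xte):
    """Gaussian maximum likelihood classification: fit a mean and full
    covariance per class, assign each test sample the class with the
    highest log-likelihood."""
    classes = np.unique(ytr)
    scores = []
    for c in classes:
        Z = Xtr[ytr == c]
        mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = Xte - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return classes[np.argmax(scores, axis=0)]

# Two Gaussian classes in three synthetic "bands"
Xtr = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
ytr = np.array([0] * 50 + [1] * 50)
Xte = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))])

# Non-singular affine "calibration": x -> Ax + b
A = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # nearly diagonal, hence non-singular
b = rng.normal(size=3)
before = ml_labels(Xtr, ytr, Xte)
after = ml_labels(Xtr @ A.T + b, ytr, Xte @ A.T + b)
agreement = (before == after).mean()          # identical up to floating-point noise
```

The same argument covers the empirical line approach, flat-field correction, and the other affine transformations named in the abstract; ATREM and the log residue method are nonlinear and so fall outside it.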
NASA Astrophysics Data System (ADS)
He, Y.; He, Y.
2018-04-01
Urban shanty towns are communities of contiguous old and dilapidated houses covering more than 2000 square meters of built-up area or containing more than 50 households. This study attempts to extract shanty towns in Nanning City using Census products and TripleSat satellite images. With 0.8-meter high-resolution remote sensing images, texture characteristics of shanty towns (energy, contrast, maximum probability, and inverse difference moment) are trained and analyzed through the gray-level co-occurrence matrix (GLCM). In this study, samples of shanty towns are well classified, with 98.2 % producer's accuracy for unsupervised classification and 73.2 % correctness for supervised classification. Low-rise and mid-rise residential blocks in Nanning City are classified into 4 different types by using k-means clustering and nearest neighbour classification respectively. This study establishes initial texture feature descriptions of different types of residential areas, especially low-rise and mid-rise buildings, which would help city administrators evaluate residential blocks and plan the reconstruction of shanty towns.
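The four GLCM statistics named above can be computed directly from a co-occurrence matrix. A minimal sketch (horizontal neighbour at distance 1, a random patch standing in for an actual 0.8 m image tile; real pipelines typically also average over several offsets and angles):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal neighbour at
    distance 1, plus four Haralick-style statistics: energy, contrast,
    maximum probability and inverse difference moment."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantise grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):           # horizontal pairs
        glcm[a, b] += 1
    p = glcm / glcm.sum()                                           # normalise to probabilities
    i, j = np.indices(p.shape)
    return {
        "energy": (p ** 2).sum(),
        "contrast": (p * (i - j) ** 2).sum(),
        "max_probability": p.max(),
        "idm": (p / (1.0 + (i - j) ** 2)).sum(),   # inverse difference moment
    }

# Hypothetical 0-255 grey-scale patch (illustrative only)
rng = np.random.default_rng(0)
feats = glcm_features(rng.integers(0, 256, size=(32, 32)))
```

A perfectly uniform patch gives energy, maximum probability and IDM of 1 with zero contrast, which is a handy sanity check for the implementation.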
Pipeline for illumination correction of images for high-throughput microscopy.
Singh, S; Bray, M-A; Jones, T R; Carpenter, A E
2014-12-01
The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
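A common retrospective form of such a pipeline estimates the illumination field by averaging many images from the screen, smoothing the result, and dividing each image by the normalized field. The sketch below is a generic illustration under those assumptions, not the authors' published pipeline; a box blur stands in for the Gaussian or median smoothing used in practice, and the images are synthetic.

```python
import numpy as np

def box_blur(a, r=3):
    """Separable box blur of radius r (stand-in for Gaussian smoothing)."""
    for axis in (0, 1):
        n = a.shape[axis]
        pad = [(r, r) if ax == axis else (0, 0) for ax in (0, 1)]
        p = np.pad(a, pad, mode="edge")
        # mean over the 2r+1 shifted copies = sliding-window mean
        a = np.mean([np.roll(p, -s, axis=axis) for s in range(2 * r + 1)], axis=0)
        a = np.take(a, range(n), axis=axis)
    return a

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:40, 0:40]
# Synthetic vignetting-like illumination over a flat "true" signal of 1.0
illum_true = 1.0 + 0.5 * np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 400.0)
stack = np.stack([illum_true + rng.normal(0, 0.01, (40, 40)) for _ in range(50)])

field = box_blur(stack.mean(axis=0), r=3)   # per-pixel mean over the stack, smoothed
field /= field.mean()                        # normalise the field to mean 1
corrected = stack[0] / field                 # retrospective correction of one image
```

After correction, the spatial variation of the flat synthetic image drops sharply, which is the effect the Z'-factor and classification experiments in the paper quantify on real screens.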
Crop identification from radar imagery of the Huntington County, Indiana test site
NASA Technical Reports Server (NTRS)
Batlivala, P. P.; Ulaby, F. T. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Like polarization was successful in discriminating corn and soybeans; however, pasture and woods were consistently confused as soybeans and corn, respectively. The probability of correct classification was about 65%. The cross polarization component (highest for woods and lowest for pasture) helped in separating the woods from corn, and pasture from soybeans, and when used with the like polarization component, the probability of correct classification increased to 74%.
Gesteme-free context-aware adaptation of robot behavior in human-robot cooperation.
Nessi, Federico; Beretta, Elisa; Gatti, Cecilia; Ferrigno, Giancarlo; De Momi, Elena
2016-11-01
Cooperative robotics is receiving greater acceptance because the typical advantages provided by manipulators are combined with an intuitive usage. In particular, hands-on robotics may benefit from adaptation of the assistant's behavior with respect to the activity currently performed by the user. A fast and reliable classification of human activities is required, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs the observation of a wide signal percentage and the definition of a rich vocabulary. In this work, a system able to recognize the user's current activity without a vocabulary of gestemes, and to accordingly adapt the manipulator's dynamic behavior, is presented. An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, are modified with respect to the classified activity using sigmoidal-shaped functions. The presented system is validated over a pool of 12 naïve users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted in order to obtain a stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability of correctly identifying the sequence of activities (sequence accuracy) were evaluated. The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after the observation of ∼450 ms of signal). 
Moreover, the ability to recognize the correct sequence of activities without unwanted transitions is guaranteed (sequence accuracy ∼90% when computed far away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not lead to non-smooth behavior (high smoothness, i.e. normalized jerk score <0.01). The provided system is able to dynamically assist the operator during cooperation in the presented scenario. Copyright © 2016 Elsevier B.V. All rights reserved.
Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification
quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates, followed by COR-DNA and then RF-DNA in most test cases, and especially in low Eb/N0 ranges, where ZigBee is designed to operate.
12 CFR 1229.12 - Procedures related to capital classification and other actions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Procedures related to capital classification and other actions. 1229.12 Section 1229.12 Banks and Banking FEDERAL HOUSING FINANCE AGENCY ENTITY REGULATIONS CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.12 Procedures...
Galaxy Zoo: quantitative visual morphological classifications for 48 000 galaxies from CANDELS
NASA Astrophysics Data System (ADS)
Simmons, B. D.; Lintott, Chris; Willett, Kyle W.; Masters, Karen L.; Kartaltepe, Jeyhan S.; Häußler, Boris; Kaviraj, Sugata; Krawczyk, Coleman; Kruk, S. J.; McIntosh, Daniel H.; Smethurst, R. J.; Nichol, Robert C.; Scarlata, Claudia; Schawinski, Kevin; Conselice, Christopher J.; Almaini, Omar; Ferguson, Henry C.; Fortson, Lucy; Hartley, William; Kocevski, Dale; Koekemoer, Anton M.; Mortlock, Alice; Newman, Jeffrey A.; Bamford, Steven P.; Grogin, N. A.; Lucas, Ray A.; Hathi, Nimish P.; McGrath, Elizabeth; Peth, Michael; Pforr, Janine; Rizer, Zachary; Wuyts, Stijn; Barro, Guillermo; Bell, Eric F.; Castellano, Marco; Dahlen, Tomas; Dekel, Avishai; Ownsworth, Jamie; Faber, Sandra M.; Finkelstein, Steven L.; Fontana, Adriano; Galametz, Audrey; Grützbauch, Ruth; Koo, David; Lotz, Jennifer; Mobasher, Bahram; Mozena, Mark; Salvato, Mara; Wiklind, Tommy
2017-02-01
We present quantified visual morphologies of approximately 48 000 galaxies observed in three Hubble Space Telescope legacy fields by the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and classified by participants in the Galaxy Zoo project. 90 per cent of galaxies have z ≤ 3 and are observed in rest-frame optical wavelengths by CANDELS. Each galaxy received an average of 40 independent classifications, which we combine into detailed morphological information on galaxy features such as clumpiness, bar instabilities, spiral structure, and merger and tidal signatures. We apply a consensus-based classifier weighting method that preserves classifier independence while effectively down-weighting significantly outlying classifications. After analysing the effect of varying image depth on reported classifications, we also provide depth-corrected classifications which both preserve the information in the deepest observations and also enable the use of classifications at comparable depths across the full survey. Comparing the Galaxy Zoo classifications to previous classifications of the same galaxies shows very good agreement; for some applications, the high number of independent classifications provided by Galaxy Zoo provides an advantage in selecting galaxies with a particular morphological profile, while in others the combination of Galaxy Zoo with other classifications is a more promising approach than using any one method alone. We combine the Galaxy Zoo classifications of `smooth' galaxies with parametric morphologies to select a sample of featureless discs at 1 ≤ z ≤ 3, which may represent a dynamically warmer progenitor population to the settled disc galaxies seen at later epochs.
EUS-guided biopsy for the diagnosis and classification of lymphoma.
Ribeiro, Afonso; Pereira, Denise; Escalón, Maricer P; Goodman, Mark; Byrne, Gerald E
2010-04-01
EUS-guided FNA and Tru-cut biopsy (TCB) are highly accurate in the diagnosis of lymphoma. Subclassification, however, may be difficult in low-grade non-Hodgkin lymphoma and Hodgkin lymphoma. To determine the yield of EUS-guided biopsy to classify lymphoma based on the World Health Organization classification of tumors of hematopoietic lymphoid tissues. Retrospective study. Tertiary referral center. A total of 24 patients referred for EUS-guided biopsy who had a final diagnosis of lymphoma or "highly suspicious for lymphoma." EUS-guided FNA and TCB combined with flow cytometry (FC) analysis. MAIN OUTCOMES MEASUREMENT: Lymphoma subclassification accuracy of EUS-guided biopsy. Twenty-four patients were included in this study. Twenty-three patients underwent EUS-FNA, and 1 patient had only TCB. Twenty-two underwent EUS-TCB combined with FNA. EUS correctly diagnosed lymphoma in 19 out of 24 patients (79%), and subclassification was determined in 16 patients (66.6%). Flow cytometry correctly identified B-cell monoclonality in 95% (18 out of 19). In 1 patient diagnosed as having marginal-zone lymphoma by EUS-FNA/FC only, the diagnosis was changed to hairy cell leukemia after a bone marrow biopsy was obtained. EUS had a lower yield in non-large B-cell lymphoma (only 9 out of 15 cases [60%]) compared with large B-cell lymphoma (78%; P = .3 [Fisher exact test]). Retrospective design, small number of patients. EUS-guided biopsy has a lower yield to correctly classify Hodgkin lymphoma and low-grade lymphoma compared with high-grade diffuse large B-cell lymphoma. Copyright 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William; Kerfoot, Ian
2001-10-01
The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
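The 2-of-3 voting scheme described above can be sketched in a few lines. This is an illustrative greedy clustering under assumed parameters (a hypothetical clustering radius and made-up contact coordinates), not the authors' implementation:

```python
import numpy as np

def fuse_2_of_3(detections, radius=5.0):
    """Cluster contact locations from three CAD/CAC algorithms by
    Euclidean distance (greedy, first-point-as-centre) and declare a
    target where at least two of the three algorithms contribute.

    `detections` is a list of three arrays of (x, y) contact locations."""
    pts = [(tuple(p), k) for k, det in enumerate(detections) for p in det]
    clusters = []                      # each entry: (location, set of algorithm ids)
    for p, k in pts:
        for c in clusters:
            if np.hypot(p[0] - c[0][0], p[1] - c[0][1]) <= radius:
                c[1].add(k)            # contact joins an existing cluster
                break
        else:
            clusters.append((p, {k}))  # contact starts a new cluster
    return [c[0] for c in clusters if len(c[1]) >= 2]

# Hypothetical contact lists (pixel coordinates, illustrative only)
a = [(10, 10), (80, 40)]
b = [(12, 11), (200, 30)]
c = [(81, 42), (300, 90)]
targets = fuse_2_of_3([np.array(a), np.array(b), np.array(c)])
```

Here the two contacts seen by two algorithms each survive fusion, while the two single-algorithm contacts are rejected, which is how the scheme trades a small amount of detection opportunity for the large false-alarm reduction reported.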
Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development
2009-04-30
Computer Pcorr Probability / Percentage of Correct Classification (# Correct / # Total) PD PhotoDiode Pd Probability / Percentage of Detection (# Correct Detections / Total # of Sources) Pfa Probability / Percentage of False Alarm (# FAs / Total # of Sources) SBVS Spectral-Based Volume Sensor SFA Smoke and
76 FR 23872 - Editorial Corrections to the Export Administration Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... No. 100709293-1073-01] RIN 0694-AE96 Editorial Corrections to the Export Administration Regulations... Administration Regulations (EAR). In particular, this rule corrects the country entry for Syria on the Commerce... the Export Administration Regulations (EAR), including several Export Control Classification Number...
Neural network classification of questionable EGRET events
NASA Astrophysics Data System (ADS)
Meetre, C. A.; Norris, J. P.
1992-02-01
High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent, or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.
Neural network classification of questionable EGRET events
NASA Technical Reports Server (NTRS)
Meetre, C. A.; Norris, J. P.
1992-01-01
High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent, or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.
NASA Astrophysics Data System (ADS)
Nganvongpanit, Korakot; Buddhachat, Kittisak; Piboon, Promporn; Euppayo, Thippaporn; Kaewmong, Patcharaporn; Cherdsukjai, Phaothep; Kittiwatanawong, Kongkiat; Thitaram, Chatchote
2017-04-01
The elemental composition was investigated and applied for identifying the sex and habitat of dugongs, in addition to distinguishing dugong tusks and teeth from other animal wildlife materials such as Asian elephant (Elephas maximus) tusks and tiger (Panthera tigris tigris) canine teeth. A total of 43 dugong tusks, 60 dugong teeth, 40 dolphin teeth, 1 whale tooth, 40 Asian elephant tusks and 20 tiger canine teeth were included in the study. Elemental analyses were conducted using a handheld X-ray fluorescence analyzer (HH-XRF). There was no significant difference in the elemental composition of male and female dugong tusks, whereas the overall accuracy for identifying habitat (the Andaman Sea and the Gulf of Thailand) was high (88.1%). Dolphin teeth were able to be correctly predicted 100% of the time. Furthermore, we demonstrated a discrepancy in elemental composition among dugong tusks, Asian elephant tusks and tiger canine teeth, and provided a high correct prediction rate among these species of 98.2%. Here, we demonstrate the feasible use of HH-XRF for preliminary species classification and habitat determination prior to using more advanced techniques such as molecular biology.
Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups
2012-01-01
Background Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients' assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2). Methods A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital was extracted from the hospital's database and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances. Results The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, and positive and negative predictive values of 15% and 93.5%, respectively. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity. Conclusions Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. 
High-risk subgroups may be identified automatically from existing geriatric assessment data, especially when combined with domain knowledge in a hybrid classification model. Further work is necessary to validate our approach in a controlled prospective setting. PMID:22417403
Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups.
Marschollek, Michael; Gövercin, Mehmet; Rust, Stefan; Gietzelt, Matthias; Schulze, Mareike; Wolf, Klaus-Hendrik; Steinhagen-Thiessen, Elisabeth
2012-03-14
Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients' assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2). A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital was extracted from the hospital's database and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances. The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, and positive and negative predictive values of 15% and 93.5%, respectively. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity. Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. 
High-risk subgroups may be identified automatically from existing geriatric assessment data, especially when combined with domain knowledge in a hybrid classification model. Further work is necessary to validate our approach in a controlled prospective setting.
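The low positive predictive value reported above follows directly from the sensitivity, specificity, and the low fall prevalence in the cohort (493 fallers among 5,176 episodes). A quick consistency check:

```python
def predictive_values(sens, spec, n_pos, n_neg):
    """Derive PPV and NPV from sensitivity, specificity and class counts."""
    tp = sens * n_pos          # true positives among the fallers
    fn = n_pos - tp            # missed fallers
    tn = spec * n_neg          # correctly cleared non-fallers
    fp = n_neg - tn            # false alarms
    return tp / (tp + fp), tn / (tn + fn)

# Figures reported in the abstract: 493 fallers, 5,176 episodes in total
ppv, npv = predictive_values(0.554, 0.671, 493, 5176 - 493)
```

With a prevalence below 10%, even a moderately specific test floods the positive predictions with false alarms, which is exactly why the model "sorts out" non-fallers well (NPV ≈ 93.5%) while its PPV stays near 15%.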
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥ 10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. 
Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
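The upper/lower-bound sensitivity analysis described above can be sketched in a few lines. This is a minimal illustration on hypothetical data, not the PROMMTT implementation: cases whose predictors are missing are counted as all-correct for the upper bound and all-incorrect for the lower bound.

```python
import numpy as np

def classification_bounds(y_true, y_pred, missing_mask):
    """Correct-classification bounds under missing predictor data.

    y_pred holds model predictions where predictors are complete;
    entries flagged in missing_mask are unusable. Upper bound: assume
    every such case would have been classified correctly; lower bound:
    assume all of them would have been classified incorrectly.
    """
    complete = ~missing_mask
    n = len(y_true)
    n_correct = np.sum(y_true[complete] == y_pred[complete])
    n_missing = np.sum(missing_mask)
    lower = n_correct / n                # all missing cases wrong
    upper = (n_correct + n_missing) / n  # all missing cases right
    return lower, upper

# Hypothetical example: 10 patients, 3 with a missing predictor
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
missing = np.array([False] * 7 + [True] * 3)
lo, hi = classification_bounds(y_true, y_pred, missing)
```

A wide gap between `lo` and `hi` signals that reported accuracy is sensitive to how the missing cases are handled.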
Li, Yun; Zhang, Jin-Yu; Wang, Yuan-Zhong
2018-01-01
Three data fusion strategies (low-level, mid-level, and high-level) combined with a multivariate classification algorithm (random forest, RF) were applied to authenticate the geographical origins of Panax notoginseng collected from five regions of Yunnan province in China. In low-level fusion, the original data from two spectra (Fourier transform mid-IR spectrum and near-IR spectrum) were directly concatenated into a new matrix, which was then applied for the classification. In mid-level fusion, variables extracted from the spectral data were input into an RF classification model. The extracted variables were processed by iterative variable selection of the RF model and principal component analysis. High-level fusion combined the decisions of each spectroscopic technique into an ensemble decision. The results showed that the mid-level and high-level data fusion took advantage of the information synergy from the two spectroscopic techniques and had better classification performance than independent decision making. High-level data fusion is the most effective strategy, since its classification results are better than those of the other fusion strategies: accuracy rates ranged between 93% and 96% for the low-level data fusion, between 95% and 98% for the mid-level data fusion, and between 98% and 100% for the high-level data fusion. In conclusion, the high-level data fusion strategy for Fourier transform mid-IR and near-IR spectra can be used as a reliable tool for correct geographical identification of P. notoginseng. Graphical abstract The analytical steps of Fourier transform mid-IR and near-IR spectral data fusion for the geographical traceability of Panax notoginseng.
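The low-level and high-level fusion strategies can be sketched as follows. This is a hedged illustration on synthetic "spectra" in which a nearest-centroid classifier stands in for the random forest used in the study; the fusion logic (concatenating raw matrices vs. combining per-block decisions) is the point.

```python
import numpy as np

def centroids(X, y):
    """Per-class mean vectors (a minimal stand-in classifier)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def distances(X, cents):
    """Squared Euclidean distance of each sample to each centroid."""
    return ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)

rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 30)                        # three origins
X_mir = rng.normal(y[:, None], 1.0, (90, 20))       # mid-IR-like block
X_nir = rng.normal(0.8 * y[:, None], 1.0, (90, 25)) # near-IR-like block

# Low-level fusion: concatenate the raw matrices, then classify.
cls, cen = centroids(np.hstack([X_mir, X_nir]), y)
pred_low = cls[np.argmin(distances(np.hstack([X_mir, X_nir]), cen), axis=1)]

# High-level (decision-level) fusion: each block produces per-class
# scores; normalised scores are summed before the final decision.
score = 0
for X in (X_mir, X_nir):
    cls, cen = centroids(X, y)
    d = distances(X, cen)
    score = score + d / d.sum(axis=1, keepdims=True)
pred_high = cls[np.argmin(score, axis=1)]

acc_low = (pred_low == y).mean()    # resubstitution accuracy, for illustration
acc_high = (pred_high == y).mean()
```

On real spectra the per-block classifiers would be random forests and the accuracies would come from cross-validation rather than resubstitution.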
Using phase for radar scatterer classification
NASA Astrophysics Data System (ADS)
Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.
2017-04-01
Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
Gender classification from video under challenging operating conditions
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Dong, Guozhu
2014-06-01
The literature is abundant with papers on gender classification research. However, the majority of such research is based on the assumption that there is enough resolution so that the subject's face can be resolved. Hence the majority of the research is actually in the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions—different seasonal conditions, different clothing, etc.—and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database—the SWAG One dataset—using data mining techniques, traditional classifiers (e.g., Naïve Bayes, Support Vector Machine), and traditional (Canny edge detection, etc.) and non-traditional (height/width ratios, etc.) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty is the exploitation of frame differentials.
Shankar, Vijay; Reo, Nicholas V; Paliy, Oleg
2015-12-09
We previously showed that stool samples of pre-adolescent and adolescent US children diagnosed with diarrhea-predominant IBS (IBS-D) had different compositions of microbiota and metabolites compared to healthy age-matched controls. Here we explored whether the observed fecal microbiota and metabolite differences between these two adolescent populations can be used to discriminate between IBS and health. We constructed individual microbiota- and metabolite-based sample classification models based on partial least squares multivariate analysis and then applied a Bayesian approach to integrate the individual models into a single classifier. The resulting combined classification achieved 84% accuracy of correct sample group assignment and 86% prediction for IBS-D in cross-validation tests. The performance of the cumulative classification model was further validated by de novo analysis of stool samples from a small independent IBS-D cohort. High-throughput microbial and metabolite profiling of subject stool samples can be used to facilitate IBS diagnosis.
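The Bayesian integration of the two individual classifiers can be sketched as below, assuming conditional independence of the microbiota- and metabolite-based posteriors; the probabilities here are hypothetical, not taken from the study.

```python
import numpy as np

def bayes_combine(p1, p2, prior):
    """Combine two classifiers' posterior probabilities assuming
    conditional independence: p(c|a,b) ∝ p(c|a) p(c|b) / p(c)."""
    post = p1 * p2 / prior
    return post / post.sum(axis=1, keepdims=True)  # renormalise rows

# Hypothetical posteriors for 3 samples over classes (healthy, IBS-D)
prior = np.array([0.5, 0.5])
p_microbiota = np.array([[0.7, 0.3], [0.4, 0.6], [0.55, 0.45]])
p_metabolite = np.array([[0.8, 0.2], [0.3, 0.7], [0.35, 0.65]])
combined = bayes_combine(p_microbiota, p_metabolite, prior)
pred = combined.argmax(axis=1)   # 0 = healthy, 1 = IBS-D
```

Note how the third sample, ambiguous under the microbiota model alone, is pushed toward IBS-D once the metabolite evidence is folded in.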
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5 minute topographic map, which was photointerpreted, digitized, and rasterized. For the Taylor's Island map, comparing the MSS and TM three-band (2, 3, 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1% correct, due primarily to small improvements of 1% and 3% in categories such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories which cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
Classification of plum spirit drinks by synchronous fluorescence spectroscopy.
Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A
2016-04-01
Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. Considering the PCA-LDA applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. Correct classification of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm⁻¹), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, which were slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
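A PCA-LDA classification of spectra, as used above, can be sketched in NumPy: project centered spectra onto the leading principal components, then classify by nearest class mean under the pooled within-class covariance (equal priors assumed). The data below are synthetic stand-ins for the SFS measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "spectra": 3 origins x 15 samples, 50 wavelength points
y = np.repeat([0, 1, 2], 15)
class_means = rng.normal(0, 1, (3, 50))
X = class_means[y] + 0.3 * rng.normal(size=(45, 50))

# PCA: scores on the 5 leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# LDA as a classifier: nearest class mean in the metric of the
# pooled within-class covariance (equal priors assumed)
classes = np.unique(y)
mu = np.array([Z[y == c].mean(axis=0) for c in classes])
Sw = sum(np.cov(Z[y == c].T, bias=False) * (np.sum(y == c) - 1)
         for c in classes) / (len(y) - len(classes))
Swi = np.linalg.inv(Sw)
d2 = np.array([[(z - m) @ Swi @ (z - m) for m in mu] for z in Z])
pred = classes[d2.argmin(axis=1)]
accuracy = (pred == y).mean()   # resubstitution accuracy, for illustration
```

A real calibration/prediction split, as in the study, would fit PCA and LDA on the calibration set only and score the held-out prediction set.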
Jacobson, Robert B.; Elliott, Caroline M.; Huhmann, Brittany L.
2010-01-01
This report documents development of a spatially explicit river and flood-plain classification to evaluate potential for cottonwood restoration along the Sharpe and Fort Randall segments of the Middle Missouri River. This project involved evaluating existing topographic, water-surface elevation, and soils data to determine if they were sufficient to create a classification similar to the Land Capability Potential Index (LCPI) developed by Jacobson and others (U.S. Geological Survey Scientific Investigations Report 2007–5256) and developing a geomorphically based classification to apply to evaluating restoration potential. Existing topographic, water-surface elevation, and soils data for the Middle Missouri River were not sufficient to replicate the LCPI. The 1/3-arc-second National Elevation Dataset delineated most of the topographic complexity and produced cumulative frequency distributions similar to a high-resolution 5-meter topographic dataset developed for the Lower Missouri River. However, lack of bathymetry in the National Elevation Dataset produces a potentially critical bias in evaluation of frequently flooded surfaces close to the river. High-resolution soils data alone were insufficient to replace the information content of the LCPI. In test reaches in the Lower Missouri River, soil drainage classes from the Soil Survey Geographic (SSURGO) database correctly classified 0.8–98.9 percent of the flood-plain area at or below the 5-year return interval flood stage depending on state of channel incision; on average for river miles 423–811, soil drainage class correctly classified only 30.2 percent of the flood-plain area at or below the 5-year return interval flood stage. Lack of congruence between soil characteristics and present-day hydrology results from relatively rapid incision and aggradation of segments of the Missouri River resulting from impoundments and engineering.
The most sparsely available data in the Middle Missouri River were water-surface elevations. Whereas hydraulically modeled water-surface elevations were available at 1.6-kilometer intervals in the Lower Missouri River, water-surface elevations in the Middle Missouri River had to be interpolated between streamflow-gaging stations spaced 3–116 kilometers apart. Lack of high-resolution water-surface elevation data precludes development of LCPI-like classification maps. A hierarchical river classification framework is proposed to provide structure for a multiscale river classification. The segment-scale classification presented in this report is deductive and based on presumed effects of dams, significant tributaries, and geological (and engineered) channel constraints. An inductive reach-scale classification, nested within the segment scale, is based on multivariate statistical clustering of geomorphic data collected at 500-meter intervals along the river. Cluster-based classifications delineate reaches of the river with similar channel and flood-plain geomorphology, and presumably, similar geomorphic and hydrologic processes. The dominant variables in the clustering process were channel width (Fort Randall) and valley width (Sharpe), followed by braiding index (both segments). Clusters with multithread and highly sinuous channels are likely to be associated with dynamic channel migration and deposition of fresh, bare sediment conducive to natural cottonwood germination. However, restoration potential within these reaches is likely to be mitigated by interaction of cottonwood life stages with the highly altered flow regime.
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
15 CFR 748.3 - Classification requests, advisory opinions, and encryption registrations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... may ask BIS to provide you with the correct Export Control Classification Number (ECCN) down to the... identified in your classification request is either described by an ECCN in the Commerce Control List (CCL) in supplement No. 1 to part 774 of the EAR or not described by an ECCN and, therefore, an “EAR99...
Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.
2012-01-01
Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. The purpose of this study was to develop quantification tools, including MR-based AC, for brain imaging with combined MR/PET. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET data with [11C]PIB were acquired using a high-resolution research tomograph (HRRT). MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
PMID:23039679
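A minimal fuzzy C-means (FCM) classification of voxel intensities into three tissue classes, with an attenuation coefficient then assigned per class, might look like the sketch below. It is a standard FCM, not the authors' modified variant, and the intensity distributions and attenuation values are hypothetical.

```python
import numpy as np

def fcm(x, k=3, m=2.0, n_iter=100):
    """Minimal fuzzy C-means on 1-D intensities (fuzzifier m).
    Centers are initialised from spread quantiles of the data."""
    c = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u = u / u.sum(axis=1, keepdims=True)       # membership degrees
        c = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return c, u

rng = np.random.default_rng(1)
# Synthetic voxel intensities: three tissue-like clusters
x = np.concatenate([rng.normal(10, 1, 200),    # CSF-like
                    rng.normal(40, 2, 200),    # grey-matter-like
                    rng.normal(70, 2, 200)])   # white-matter-like
c, u = fcm(x)
order = np.argsort(c)                          # classes sorted by intensity
labels = np.argsort(order)[u.argmax(axis=1)]   # hard labels, 0..2
# Hypothetical linear attenuation coefficients (1/cm) per class
mu = np.array([0.096, 0.099, 0.101])[labels]
```

In the actual pipeline the per-class attenuation coefficients would feed the AC map used during PET reconstruction.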
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map offers centimetre-level precision, unlike existing commercial maps with metre-level precision. Such a map is important for understanding road environments and making decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the approach will be utilized to generate a highly precise road map.
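The surface-normal-based geometric feature can be illustrated as below: the smallest-eigenvalue fraction of a neighbourhood's covariance is near zero for planar patches and larger for volumetric scatter. A simple threshold stands in for the SVM decision boundary here, and the point clouds are synthetic.

```python
import numpy as np

def planarity(points):
    """Fraction of variance along the surface normal: the smallest
    eigenvalue of the local covariance over the eigenvalue sum.
    Near 0 for planar patches (e.g. road surface), larger for
    volumetric scatter (e.g. vegetation)."""
    cov = np.cov((points - points.mean(axis=0)).T)
    w = np.sort(np.linalg.eigvalsh(cov))
    return w[0] / w.sum()

rng = np.random.default_rng(0)
# Planar patch: points on the z ≈ 0 plane with small sensor noise
road = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.005, 200)]
# Volumetric scatter: isotropic point cloud
veg = rng.normal(0, 0.3, (200, 3))

f_road, f_veg = planarity(road), planarity(veg)
# A fixed threshold stands in for the learned SVM decision boundary
label = lambda f: 'road' if f < 0.01 else 'vegetation'
```

In practice this feature would be computed per point over its k-nearest neighbours and fed, along with others, to the SVM.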
Oshiyama, Natália F; Bassani, Rosana A; D'Ottaviano, Itala M L; Bassani, José W M
2012-04-01
As technology evolves, the role of medical equipment in the healthcare system, as well as technology management, becomes more important. Although the existence of large databases containing management information is currently common, extracting useful information from them is still difficult. A useful tool for identification of frequently failing equipment, which increases maintenance cost and downtime, would be the classification according to the corrective maintenance data. Nevertheless, establishment of classes may create inconsistencies, since an item may be close to two classes by the same extent. Paraconsistent logic might help solve this problem, as it allows the existence of inconsistent (contradictory) information without trivialization. In this paper, a methodology for medical equipment classification based on the ABC analysis of corrective maintenance data is presented, and complemented with a paraconsistent annotated logic analysis, which may enable the decision maker to take into consideration alerts created by the identification of inconsistencies and indeterminacies in the classification.
On evaluating clustering procedures for use in classification
NASA Technical Reports Server (NTRS)
Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)
1979-01-01
The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
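The cluster-purity measure discussed above can be sketched as follows: each cluster is labeled by its majority class, and purity is the fraction of samples that match their cluster's label.

```python
import numpy as np

def purity(cluster_ids, labels):
    """Cluster purity: fraction of samples that share their cluster's
    majority class; each cluster is labeled by that majority class."""
    total = 0
    for c in np.unique(cluster_ids):
        member = labels[cluster_ids == c]
        total += np.bincount(member).max()   # count of majority class
    return total / len(labels)

# Small illustrative example: 3 clusters over 8 training samples
clusters = np.array([0, 0, 0, 1, 1, 1, 2, 2])
labels   = np.array([0, 0, 1, 1, 1, 1, 0, 1])
p = purity(clusters, labels)   # → 0.75
```

Combined with cluster sample sizes, this supports the cluster-labeling procedure the abstract describes.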
Hernández-Ibáñez, C; Blazquez-Sánchez, N; Aguilar-Bernier, M; Fúnez-Liébana, R; Rivas-Ruiz, F; de Troya-Martín, M
Incisional biopsy may not always provide a correct classification of histologic subtypes of basal cell carcinoma (BCC). High-frequency ultrasound (HFUS) imaging of the skin is useful for the diagnosis and management of this tumor. The main aim of this study was to compare the diagnostic value of HFUS with punch biopsy for the correct classification of histologic subtypes of primary BCC. We also analyzed the influence of tumor size and histologic subtype (single subtype vs. mixed) on the diagnostic yield of HFUS and punch biopsy. Retrospective observational study of primary BCCs treated by the Dermatology Department of Hospital Costa del Sol in Marbella, Spain, between October 2013 and May 2014. Surgical excision was preceded by HFUS imaging (Dermascan C©, 20-MHz linear probe) and a punch biopsy in all cases. We compared the overall diagnostic yield and accuracy (sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) of HFUS and punch biopsy against the gold standard (excisional biopsy with serial sections) for overall and subgroup results. We studied 156 cases. The overall diagnostic yield was 73.7% for HFUS (sensitivity, 74.5%; specificity, 73%) and 79.9% for punch biopsy (sensitivity, 76%; specificity, 82%). In the subgroup analyses, HFUS had a PPV of 93.3% for superficial BCC (vs. 92% for punch biopsy). In the analysis by tumor size, HFUS achieved an overall diagnostic yield of 70.4% for tumors measuring 40 mm² or less and 77.3% for larger tumors; the NPV was 82% in both size groups. Punch biopsy performed better in the diagnosis of small lesions (overall diagnostic yield of 86.4% for lesions ≤40 mm² vs. 72.6% for lesions >40 mm²). HFUS imaging was particularly useful for ruling out infiltrating BCCs, diagnosing simple, superficial BCCs, and correctly classifying BCCs larger than 40 mm². Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
Discrimination of almonds (Prunus dulcis) geographical origin by minerals and fatty acids profiling.
Amorello, Diana; Orecchio, Santino; Pace, Andrea; Barreca, Salvatore
2016-09-01
Twenty-one almond samples from three different geographical origins (Sicily, Spain and California) were investigated by determining mineral and fatty acid compositions. Data were used to discriminate almond origin chemometrically by linear discriminant analysis. With respect to previous PCA profiling studies, this work provides a simpler analytical protocol for the identification of almonds' geographical origin. Classification using mineral content data only was correct in 77% of the samples, while, using fatty acid profiles, the percentage of samples correctly classified reached 82%. The coupling of mineral contents and fatty acid profiles led to an increased efficiency of the classification, with 87% of samples correctly classified.
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy which combines an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
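The iterative baseline-correction idea can be sketched with a cubic polynomial standing in for the cubic spline of the ICSF method: fit the baseline, clip the spectrum to the fit so peaks stop pulling the baseline up, and repeat. The spectrum below is synthetic.

```python
import numpy as np

def iterative_baseline(y, x=None, degree=3, n_iter=50):
    """Iterative cubic-polynomial baseline estimate (a simplified
    stand-in for iterative cubic-spline fitting): fit, then clip the
    working signal to the fit so peaks are progressively excluded."""
    x = np.arange(len(y)) if x is None else x
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coef = np.polyfit(x, work, degree)
        base = np.polyval(coef, x)
        work = np.minimum(work, base)   # clip peaks above the fit
    return base

x = np.linspace(0, 10, 500)
baseline_true = 0.02 * x**2 + 0.1 * x + 1.0           # slow drift
peaks = (3.0 * np.exp(-(x - 4) ** 2 / 0.01)
         + 2.0 * np.exp(-(x - 7) ** 2 / 0.02))        # Raman-like bands
spectrum = baseline_true + peaks
corrected = spectrum - iterative_baseline(spectrum, x)
```

After correction the baseline regions sit near zero while the band intensities, on which the DPLS discrimination relies, are preserved.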
Treatment outcomes of saddle nose correction.
Hyun, Sang Min; Jang, Yong Ju
2013-01-01
Many valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction. To present clinical characteristics and treatment outcome of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities. The retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification. Aesthetic outcomes were classified as excellent, good, fair, or poor. Patients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilages were used in 40 patients (44%), and homologous costal cartilages were used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 patients had good, 18 patients had fair, and 2 patients had poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty, owing to recurrence of the saddle deformity, wound infection, or warping of the costal cartilage used for dorsal augmentation. We introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging. Level of evidence: 4.
The reliability and validity of the Saliba Postural Classification System
Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos
2016-01-01
Objectives To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8, (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706, (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
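The first-stage beam-hardening removal can be sketched as a least-squares fit of a quadratic surface to a slice, keeping the residual. On the purely quadratic synthetic cupping trend below the residual vanishes; on real slices the fit would typically be restricted to homogeneous regions.

```python
import numpy as np

def quadratic_surface_correction(img):
    """Fit a quadratic surface a + bx + cy + dx^2 + exy + fy^2 to a
    slice by least squares and return the residual image, removing
    the smooth beam-hardening (cupping) trend."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = xx.ravel() / nx          # normalised coordinates for conditioning
    y = yy.ravel() / ny
    A = np.c_[np.ones_like(x), x, y, x**2, x * y, y**2]
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (img.ravel() - A @ coef).reshape(ny, nx)

# Synthetic slice: uniform attenuation plus a parabolic cupping trend
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
r2 = ((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2) / (nx / 2) ** 2
slice_img = 1.0 - 0.3 * r2
corrected = quadratic_surface_correction(slice_img)
```

Because the synthetic trend is exactly quadratic in x and y, the residual is numerically zero, illustrating why a quadratic surface is a natural model for cupping.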
Austin, Samuel H.; Nelms, David L.
2017-01-01
Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
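Maximum likelihood logistic regression of drought occurrence on winter streamflow can be sketched as below, with synthetic standardized flows in place of the gage records; the fitted coefficient should be negative, since low winter flow raises drought probability.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Maximum-likelihood logistic regression fitted by gradient
    ascent on the log-likelihood (intercept included)."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)   # average log-likelihood gradient
    return w

def predict_prob(X, w):
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(0)
# Hypothetical standardized winter mean flows for 300 site-years;
# low winter flow raises the chance of summer hydrological drought.
winter = rng.normal(0, 1, (300, 1))
p_true = 1.0 / (1.0 + np.exp(2.0 * winter[:, 0]))
drought = (rng.uniform(size=300) < p_true).astype(float)

w = fit_logistic(winter, drought)
pred = (predict_prob(winter, w) > 0.5).astype(float)
accuracy = (pred == drought).mean()
```

The predicted probabilities, rather than the hard 0.5-threshold classes, are what a water manager would use as advance drought warning.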
Texture operator for snow particle classification into snowflake and graupel
NASA Astrophysics Data System (ADS)
Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro
2012-11-01
In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type. Therefore, it is necessary to identify the type of falling snow. Consequently, this research addresses the problem of classifying snow particles into snowflake and graupel in an automatic manner (as these types are the most common in the study region). Once precipitation events are correctly classified, it should be possible to estimate the related parameters accurately. The automatic classification system presented here describes the images with texture operators. Some of them are well known from the literature: first-order features, the co-occurrence matrix, the grey-tone difference matrix, the run length matrix, and the local binary pattern; in addition, a novel approach to designing simple local statistical operators is introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the intermediate structures created by many of the aforementioned algorithms is also suggested. For classification, the k-nearest neighbour classifier was applied. The results showed that it is possible to achieve correct classification accuracy above 80% with most of the techniques. The best result, 86.06%, was achieved by an operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not improve the classification results considerably. In the best case, the correct classification efficiency was 87.89% for a pair of texture operators created from the local binary pattern and the structure obtained at an intermediate stage of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant.
Therefore, principal component analysis was applied in order to remove the unnecessary information and additionally reduce the length of the feature vectors. An improvement of the correct classification efficiency to as much as 100% is possible for the following methods: the min-max histogram, the texture operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation, the texture operator built from the structure obtained at an intermediate stage of the grey-tone difference matrix creation, and the histogram-based texture operator, when the feature vector stores 99% of the initial information.
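The PCA step, keeping components until 99% of the variance is retained, might look like the sketch below; the feature matrix is synthetic and the function name is ours, not the authors':

```python
import numpy as np

def pca_reduce(features, keep=0.99):
    """Project feature vectors onto the fewest principal components that
    retain `keep` of the total variance (illustrative sketch)."""
    centered = features - features.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)        # cumulative variance ratio
    k = int(np.searchsorted(explained, keep)) + 1      # smallest k reaching `keep`
    return centered @ Vt[:k].T

rng = np.random.default_rng(1)
# 100 redundant 20-dimensional texture feature vectors driven by 3 latent factors
latent = rng.normal(size=(100, 3))
features = latent @ rng.normal(size=(3, 20)) + 0.01 * rng.normal(size=(100, 20))
reduced = pca_reduce(features)
```

Because the 20 features are driven by only a few latent factors, the 99% criterion collapses the vectors to a handful of components, mirroring the redundancy the abstract reports.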
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
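Of the three resampling schemes compared, bilinear interpolation is the middle ground between nearest-neighbor speed and bicubic smoothness: each output value is a weighted average of the four surrounding pixels. A minimal sketch (the pixel values and function name are illustrative):

```python
def bilinear(img, x, y):
    """Resample image `img` (a list of rows) at fractional position (x, y)
    by weighting the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0][x0]         * (1 - dx) * (1 - dy) +
            img[y0][x0 + 1]     * dx       * (1 - dy) +
            img[y0 + 1][x0]     * (1 - dx) * dy +
            img[y0 + 1][x0 + 1] * dx       * dy)

# Halfway between four pixels the result is simply their average
patch = [[0.0, 2.0],
         [4.0, 6.0]]
value = bilinear(patch, 0.5, 0.5)  # (0 + 2 + 4 + 6) / 4 = 3.0
```

This averaging is also why registration can blur fine structure and thereby affect downstream compression and classification, the effect the abstract sets out to quantify.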
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
A compendium of fossil marine animal families, 2nd edition
NASA Technical Reports Server (NTRS)
Sepkoski, J. J., Jr. (Principal Investigator)
1992-01-01
A comprehensive listing of 4075 taxonomic families of marine animals known from the fossil record is presented. This listing covers invertebrates, vertebrates, and animal-like protists, gives time intervals of apparent origination and extinction, and provides literature sources for these data. The time intervals are mostly 81 internationally recognized stratigraphic stages; more than half of the data are resolved to one of 145 substage divisions, providing more highly resolved data for studies of taxic macroevolution. Families are classified by order, class, and phylum, reflecting current classifications in the published literature. This compendium is a new edition of the 1982 publication, correcting errors and presenting greater stratigraphic resolution and more current ideas about acceptable families and their classification.
Sea ice type maps from Alaska synthetic aperture radar facility imagery: An assessment
NASA Technical Reports Server (NTRS)
Fetterer, Florence M.; Gineris, Denise; Kwok, Ronald
1994-01-01
Synthetic aperture radar (SAR) imagery received at the Alaskan SAR Facility is routinely and automatically classified on the Geophysical Processor System (GPS) to create ice type maps. We evaluated the wintertime performance of the GPS classification algorithm by comparing ice type percentages from supervised classification with percentages from the algorithm. The root mean square (RMS) difference for multiyear ice is about 6%, while the inconsistency in supervised classification is about 3%. The algorithm separates first-year from multiyear ice well, although it sometimes fails to correctly classify new ice and open water owing to the wide distribution of backscatter for these classes. Our results imply a high degree of accuracy and consistency in the growing archive of multiyear and first-year ice distribution maps. These results have implications for heat and mass balance studies which are furthered by the ability to accurately characterize ice type distributions over a large part of the Arctic.
Classification and correction of the radar bright band with polarimetric radar
NASA Astrophysics Data System (ADS)
Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan
2015-04-01
The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power law equation, so this enhanced region can lead to overestimations of rainfall by a factor of up to 5, and it is important to correct for it. The BB region can be identified using several techniques, including hydrometeor classification and freezing level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field have led to increased accuracy in identifying the melting snow region. A method proposed by Kitchen et al. (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method involves the formation of an average VPR from multiple elevations for correction, which can still yield a significant decrease in error (Vignal et al. 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar, without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow, with a high Critical Success Index of 97% when compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB-corrected scans are compared to rain gauges and to lower-level scans, with a reduction in RMSE of 61% for rain-rate measurements. References: Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q.J.R. Meteorol. Soc., 120, 1231-1254. Vignal, B., et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.
Sentinel-2 Level 2A Prototype Processor: Architecture, Algorithms And First Results
NASA Astrophysics Data System (ADS)
Muller-Wilm, Uwe; Louis, Jerome; Richter, Rudolf; Gascon, Ferran; Niezette, Marc
2013-12-01
Sen2Cor is a prototype processor for Sentinel-2 Level 2A product processing and formatting. The processor is developed for and with ESA and performs the tasks of Atmospheric Correction and Scene Classification of Level 1C input data. Level 2A outputs are: Bottom-of-Atmosphere (BOA) corrected reflectance images; Aerosol Optical Thickness, Water Vapour, and Scene Classification maps; and Quality indicators, including cloud and snow probabilities. The Level 2A Product Formatting performed by the processor follows the specification of the Level 1C User Product.
Brandl, Caroline; Zimmermann, Martina E; Günther, Felix; Barth, Teresa; Olden, Matthias; Schelter, Sabine C; Kronenberg, Florian; Loss, Julika; Küchenhoff, Helmut; Helbig, Horst; Weber, Bernhard H F; Stark, Klaus J; Heid, Iris M
2018-06-06
While age-related macular degeneration (AMD) poses an important personal and public health burden, comparing epidemiological studies on AMD is hampered by differing approaches to classify AMD. In our AugUR study survey, recruiting residents from in/around Regensburg, Germany, aged 70+, we analyzed the AMD status derived from color fundus images applying two different classification systems. Based on 1,040 participants with gradable fundus images for at least one eye, we show that including individuals with only one gradable eye (n = 155) underestimates AMD prevalence and we provide a correction procedure. Bias-corrected and standardized to the Bavarian population, late AMD prevalence is 7.3% (95% confidence interval = [5.4; 9.4]). We find substantially different prevalence estimates for "early/intermediate AMD" depending on the classification system: 45.3% (95%-CI = [41.8; 48.7]) applying the Clinical Classification (early/intermediate AMD) or 17.1% (95%-CI = [14.6; 19.7]) applying the Three Continent AMD Consortium Severity Scale (mild/moderate/severe early AMD). We thus provide a first effort to grade AMD in a complete study with different classification systems, a first approach for bias-correction from individuals with only one gradable eye, and the first AMD prevalence estimates from a German elderly population. Our results underscore substantial differences for early/intermediate AMD prevalence estimates between classification systems and an urgent need for harmonization.
Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.
Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C
2017-11-01
Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals whose behavior causes considerable harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group×substance×emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD showed, descriptively, the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions with OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.
Parasites as biological tags of fish stocks: a meta-analysis of their discriminatory power.
Poulin, Robert; Kamiya, Tsukushi
2015-01-01
The use of parasites as biological tags to discriminate among marine fish stocks has become a widely accepted method in fisheries management. Here, we first link this approach to its unstated ecological foundation, the decay in the similarity of the species composition of assemblages as a function of increasing distance between them, a phenomenon almost universal in nature. We explain how distance decay of similarity can influence the use of parasites as biological tags. Then, we perform a meta-analysis of 61 uses of parasites as tags of marine fish populations in multivariate discriminant analyses, obtained from 29 articles. Our main finding is that across all studies, the observed overall probability of correct classification of fish based on parasite data was about 71%. This corresponds to a two-fold improvement over the rate of correct classification expected by chance alone, and the average effect size (Zr = 0·463) computed from the original values was also indicative of a medium-to-large effect. However, none of the moderator variables included in the meta-analysis had a significant effect on the proportion of correct classification; these moderators included the total number of fish sampled, the number of parasite species used in the discriminant analysis, the number of localities from which fish were sampled, the minimum and maximum distance between any pair of sampling localities, etc. Therefore, there are no clear-cut situations in which the use of parasites as tags is more useful than others. Finally, we provide recommendations for the future usage of parasites as tags for stock discrimination, to ensure that future applications of the method achieve statistical rigour and a high discriminatory power.
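Averaging effect sizes such as Zr = 0·463 across studies is conventionally done on Fisher's z scale rather than on raw correlations. A minimal sketch with made-up per-study values:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform used to average correlation-type effect
    sizes across studies (illustrative of the meta-analytic step)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Map an averaged z back to the correlation scale."""
    return math.tanh(z)

# Hypothetical per-study effect sizes, averaged on the z scale
rs = [0.35, 0.50, 0.42]
mean_z = sum(fisher_z(r) for r in rs) / len(rs)
mean_r = inverse_fisher_z(mean_z)
```

Working on the z scale keeps the sampling distribution approximately normal, which is why meta-analyses pool Zr values rather than raw r.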
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Valeriano, D. D.
1981-01-01
An evaluation of the multispectral image analyzer (system Image 1-100), using automatic classification, is presented. The region studied is situated. The automatic classification was carried out using the maximum likelihood (MAXVER) classification system. The following classes were established: urban area, bare soil, sugar cane, citrus culture (oranges), pastures, and reforestation. The classification matrix of the test sites indicates that the percentage of correct classification varied between 63% and 100%.
Human Factors Engineering. Student Supplement,
1981-08-01
a job TASK TAXONOMY A classification scheme for the different levels of activities in a system, i.e., job - task - sub-task, etc. TASK-ANALYSIS...with the classification of learning objectives by learning category so as to identify learning...guidelines necessary for optimum learning to...correct...the sequencing of all dependent tasks...the classification of learning objectives by learning category and the identification of
Correlation-based pattern recognition for implantable defibrillators.
Wilkins, J.
1996-01-01
An estimated 300,000 Americans die each year from cardiac arrhythmias. Historically, drug therapy or surgery were the only treatment options available for patients suffering from arrhythmias. Recently, implantable arrhythmia management devices have been developed. These devices allow abnormal cardiac rhythms to be sensed and corrected in vivo. Proper arrhythmia classification is critical to selecting the appropriate therapeutic intervention. The classification problem is made more challenging by the power/computation constraints imposed by the short battery life of implantable devices. Current devices utilize heart rate-based classification algorithms. Although easy to implement, rate-based approaches have unacceptably high error rates in distinguishing supraventricular tachycardia (SVT) from ventricular tachycardia (VT). Conventional morphology assessment techniques used in ECG analysis often require too much computation to be practical for implantable devices. In this paper, a computationally efficient arrhythmia classification architecture using correlation-based morphology assessment is presented. The architecture classifies individual heart beats by assessing similarity between an incoming cardiac signal vector and a series of prestored class templates. A series of these beat classifications is then used to make an overall rhythm assessment. The system makes use of several new results in the field of pattern recognition. The resulting system achieved excellent accuracy in discriminating SVT and VT. PMID:8947674
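Correlation-based template matching of this kind can be sketched in a few lines; the templates and beat vector below are toy values, not actual ECG morphology:

```python
def normalized_correlation(signal, template):
    """Correlation coefficient between an incoming beat vector and a
    prestored class template (hypothetical sketch of the morphology test)."""
    n = len(signal)
    ms = sum(signal) / n
    mt = sum(template) / n
    num = sum((s - ms) * (t - mt) for s, t in zip(signal, template))
    den = (sum((s - ms) ** 2 for s in signal) *
           sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den

def classify_beat(beat, templates):
    """Assign the beat to the template class with the highest correlation."""
    return max(templates,
               key=lambda label: normalized_correlation(beat, templates[label]))

templates = {"SVT": [0, 1, 5, 1, 0, 0], "VT": [0, 2, 2, 2, 2, 0]}
beat = [0, 1, 4, 2, 0, 0]  # morphologically closer to the SVT template
label = classify_beat(beat, templates)
```

Because the arithmetic is a single dot product per template, this kind of test fits the power budget of an implantable device far better than full ECG morphology analysis.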
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Saur, Günter
2011-11-01
Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance especially in situations, where AIS (Automatic Identification System) data is not available. Therefore, maritime objects have to be detected and optional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses to class "clutter", "non-ship", "unstructured ship", or "ship structure 1" (bulk carrier appearance) respectively "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise suppression using the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.
Waldman, John R.; Fabrizio, Mary C.
1994-01-01
Stock contribution studies of mixed-stock fisheries rely on the application of classification algorithms to samples of unknown origin. Although the performance of these algorithms can be assessed, there are no guidelines regarding decisions about including minor stocks, pooling stocks into regional groups, or sampling discrete substocks to adequately characterize a stock. We examined these questions for striped bass Morone saxatilis of the U.S. Atlantic coast by applying linear discriminant functions to meristic and morphometric data from fish collected from spawning areas. Some of our samples were from the Hudson and Roanoke rivers and four tributaries of the Chesapeake Bay. We also collected fish of mixed-stock origin from the Atlantic Ocean near Montauk, New York. Inclusion of the minor stock from the Roanoke River in the classification algorithm decreased the correct-classification rate, whereas grouping of the Roanoke River and Chesapeake Bay stock into a regional (''southern'') group increased the overall resolution. The increased resolution was offset by our inability to obtain separate contribution estimates of the groups that were pooled. Although multivariate analysis of variance indicated significant differences among Chesapeake Bay substocks, increasing the number of substocks in the discriminant analysis decreased the overall correct-classification rate. Although the inclusion of one, two, three, or four substocks in the classification algorithm did not greatly affect the overall correct-classification rates, the specific combination of substocks significantly affected the relative contribution estimates derived from the mixed-stock sample. Future studies of this kind must balance the costs and benefits of including minor stocks and would profit from examination of the variation in discriminant characters among all Chesapeake Bay substocks.
Classification of ring artifacts for their effective removal using type adaptive correction schemes.
Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul
2011-06-01
High-resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis including the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, while mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first applied to smooth the sum curve derived from the type I ring-corrected projection data. The difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the constant bias suffered by the responses of the mis-calibrated detector elements with view angle, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
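The type II detection step, low-pass filtering the detector sum curve with an FFT and inspecting the difference curve, can be sketched as follows; the curve, cutoff frequency, and single defective element are synthetic assumptions:

```python
import numpy as np

# Hypothetical sum curve over detector elements: a smooth trend plus a
# constant bias at one mis-calibrated element (a type II ring source)
n = 128
trend = 50 + 10 * np.sin(np.linspace(0, np.pi, n))
curve = trend.copy()
curve[64] += 5.0  # dc shift introduced by the mis-calibrated element

# Low-pass filter the curve in the Fourier domain to get its smoothed version
spectrum = np.fft.rfft(curve)
spectrum[8:] = 0  # keep only the lowest frequencies
smoothed = np.fft.irfft(spectrum, n)

# The difference curve localizes the defective element and estimates its bias,
# which is then subtracted as the dc-shift correction
difference = curve - smoothed
defect = int(np.argmax(np.abs(difference)))
corrected = curve.copy()
corrected[defect] -= difference[defect]
```

The smooth trend survives the low-pass filter while the single-element bias does not, so the difference curve spikes exactly at the mis-calibrated element.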
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-01
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, including LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance in particular for plastics with similar characteristics such as LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was built using the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was then applied to evaluate the spectral attributes of the different plastics in order to perform their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built that allowed the five different polymers to be recognized. The proposed methodology, based on hierarchical classification, is very powerful and fast, allowing the five polymers to be recognized in a single step.
Methods for data classification
Garrity, George [Okemos, MI]; Lilburn, Timothy G. [Front Royal, VA]
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing methods adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes. The K-means clustering method was observed to give better results in comparison with the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the Earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than Pauli decomposition, but more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
NASA Technical Reports Server (NTRS)
Quattrochi, D. A.
1984-01-01
An initial analysis of LANDSAT 4 Thematic Mapper (TM) data for the discrimination of agricultural, forested wetland, and urban land covers is conducted using a scene of data collected over Arkansas and Tennessee. A classification of agricultural lands derived from multitemporal LANDSAT Multispectral Scanner (MSS) data is compared with a classification of TM data for the same area. Results from this comparative analysis show that the multitemporal MSS classification produced an overall accuracy of 80.91% while the TM classification yields an overall classification accuracy of 97.06% correct.
Semi-automatic knee cartilage segmentation
NASA Astrophysics Data System (ADS)
Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus
2006-03-01
Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
Power System Transient Stability Based on Data Mining Theory
NASA Astrophysics Data System (ADS)
Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde
2018-01-01
In order to study the stability of power systems, a power system transient stability assessment method based on data mining theory is designed. By introducing association rule analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method is used to perform transient stability assessment. The transient stability index is used to identify the samples that cannot be correctly classified by association classification. Then, according to the critical stability of each sample, the time-domain simulation method is used to determine the state, so as to ensure the accuracy of the final results. The results show that this stability assessment system can improve the speed of operation under the premise that the analysis result is completely correct, and that the improved algorithm can find the inherent relation between changes in the power system operation mode and changes in the degree of transient stability.
GENIE: a hybrid genetic algorithm for feature classification in multispectral images
NASA Astrophysics Data System (ADS)
Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.
2000-10-01
We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques such as maximum likelihood classification, and less conventional ones such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well. Or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large. How can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier, which uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.
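GENIE itself evolves chains of image-processing operators; as a much smaller, hypothetical illustration of the hybrid idea (a genetic algorithm searching over candidate features, a conventional classifier scoring each candidate), the sketch below evolves a boolean feature mask scored by a nearest-centroid classifier. The data, the mask representation, and the nearest-centroid scorer are invented stand-ins, not the GENIE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_accuracy(X, y, mask):
    """Fitness: training accuracy of a nearest-centroid classifier
    restricted to the features selected by the boolean mask."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def ga_select(X, y, pop=20, gens=30, p_mut=0.1):
    """Evolve a population of feature masks: keep the fittest half each
    generation, refill with one-point crossover plus bit-flip mutation."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        fitness = np.array([centroid_accuracy(X, y, m) for m in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child ^= rng.random(n) < p_mut              # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    fitness = np.array([centroid_accuracy(X, y, m) for m in population])
    return population[fitness.argmax()]
```

On synthetic data where only a couple of channels carry signal, such a loop typically converges to masks that keep the informative channels and drop most of the noise channels.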
Attention Recognition in EEG-Based Affective Learning Research Using CFS+KNN Algorithm.
Hu, Bin; Li, Xiaowei; Sun, Shuting; Ratcliffe, Martyn
2018-01-01
The research detailed in this paper focuses on the processing of Electroencephalography (EEG) data to identify attention during the learning process. The identification of affect using our procedures is integrated into a simulated distance learning system that provides feedback to the user with respect to attention and concentration. The authors propose a classification procedure that combines correlation-based feature selection (CFS) and a k-nearest-neighbor (KNN) data mining algorithm. To evaluate the CFS+KNN algorithm, it was tested against the CFS+C4.5 algorithm and other classification algorithms. The classification performance was measured 10 times with different 3-fold cross validation data. The data were derived from 10 subjects while they were attempting to learn material in a simulated distance learning environment. A single-valence self-report model was used to evaluate attention on 3 levels (high, neutral, low). It was found that CFS+KNN performed much better, giving the highest correct classification rate (CCR) of % for the valence dimension divided into three classes.
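Full CFS scores a feature subset by average feature-class correlation penalised by inter-feature redundancy; as a hedged sketch of the CFS+KNN pipeline, the code below uses the simpler per-feature correlation ranking as a stand-in for CFS, paired with a small majority-vote KNN. All names and the synthetic data are assumptions, not the authors' code.

```python
import numpy as np

def correlation_rank(X, y, k):
    """Rank features by absolute Pearson correlation with the label
    (a simplified proxy for the CFS merit; full CFS also penalises
    correlation between the selected features themselves)."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(np.abs(r))[::-1][:k]

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority-vote k-nearest-neighbour prediction for 0/1 labels."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    votes = y_train[np.argsort(d, axis=1)[:, :k]]
    return (votes.mean(axis=1) >= 0.5).astype(int)
```

The selection step shrinks the EEG feature space before the distance computation, which is the point of pairing CFS-style filtering with KNN: KNN degrades quickly when many irrelevant dimensions dilute the distances.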
NASA Astrophysics Data System (ADS)
Seong, Cho Kyu; Ho, Chung Duk; Pyo, Hong Deok; Kyeong Jin, Park
2016-04-01
This study aimed to investigate the classification ability with the naked eye according to the level of understanding about rocks of pre-service science teachers. We developed a questionnaire concerning misconceptions about minerals and rocks. The participants were 132 pre-service science teachers. Data were analyzed using the Rasch model. Participants were divided into a master group and a novice group according to their understanding level. Seventeen rock samples (6 igneous, 5 sedimentary, and 6 metamorphic rocks) were presented to the pre-service science teachers to examine their classification ability, and they classified the rocks according to the criteria we provided. The study revealed three major findings. First, the pre-service science teachers mainly classified rocks according to texture, color, and grain size. Second, while they relatively easily classified igneous rocks, participants were confused when distinguishing sedimentary and metamorphic rocks from one another by using the same classification criteria. Third, the understanding level of rocks showed a statistically significant correlation with the classification ability in terms of the formation mechanism of rocks, whereas no statistically significant relationship was found with determination of the correct names of rocks. However, this study found that there was a statistically significant relationship between the classification ability with regard to the formation mechanism of rocks and the determination of the correct names of rocks. Keywords: Pre-service science teacher, Understanding level, Rock classification ability, Formation mechanism, Criterion of classification
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was then used to successfully classify three multi-phase rock core samples of varying complexity.
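The BH step described above amounts to fitting a best-fit quadratic surface to a reconstructed slice and keeping the residual. A minimal least-squares version of that idea (an illustrative numpy sketch, not the authors' Matlab code from the Appendix):

```python
import numpy as np

def remove_quadratic_surface(img):
    """Fit z = a + bx + cy + dx^2 + exy + fy^2 to a slice by least
    squares and return (residual, fitted surface); the residual is the
    BH-corrected image."""
    ny, nx = img.shape
    x, y = np.meshgrid(np.arange(nx, dtype=float),
                       np.arange(ny, dtype=float))
    A = np.column_stack([np.ones(img.size), x.ravel(), y.ravel(),
                         x.ravel() ** 2, (x * y).ravel(), y.ravel() ** 2])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    surface = (A @ coeffs).reshape(img.shape)
    return img - surface, surface
```

Because the cupping artefact varies smoothly across the slice while the phases of interest are local, subtracting the fitted surface flattens the background without destroying the phase contrast the LS-SVM step needs.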
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy combining an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and with DPLS classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information in differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
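The iterative part of ICSF baseline correction is the key idea: fit a smooth curve, clip the spectrum down to the fit so peaks stop pulling the baseline up, and refit. The sketch below uses a cubic polynomial (np.polyfit) as a stand-in for the cubic spline; the data and parameters are invented.

```python
import numpy as np

def iterative_baseline(x, spectrum, n_iter=20, degree=3):
    """Estimate a baseline by repeatedly fitting a cubic curve and
    clipping points that rise above the fit (the peaks), then refitting
    on the clipped data."""
    work = spectrum.copy()
    for _ in range(n_iter):
        fit = np.polyval(np.polyfit(x, work, degree), x)
        work = np.minimum(work, fit)   # clip peaks down to the fit
    return np.polyval(np.polyfit(x, work, degree), x)
```

Subtracting the returned curve from the raw spectrum leaves the SERS peaks on a near-flat baseline, which is what the subsequent DPLS step then classifies.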
Classification and prediction of pilot weather encounters: A discriminant function analysis.
O'Hare, David; Hunter, David R; Martinussen, Monica; Wiggins, Mark
2011-05-01
Flight into adverse weather continues to be a significant hazard for General Aviation (GA) pilots. Weather-related crashes have a significantly higher fatality rate than other GA crashes. Previous research has identified lack of situational awareness, risk perception, and risk tolerance as possible explanations for why pilots would continue into adverse weather. However, very little is known about the nature of these encounters or the differences between pilots who avoid adverse weather and those who do not. Visitors to a web site described an experience with adverse weather and completed a range of measures of personal characteristics. The resulting data from 364 pilots were carefully screened and subject to a discriminant function analysis. Two significant functions were found. The first, accounting for 69% of the variance, reflected measures of risk awareness and pilot judgment while the second differentiated pilots in terms of their experience levels. The variables measured in this study enabled us to correctly discriminate between the three groups of pilots considerably better (53% correct classifications) than would have been possible by chance (33% correct classifications). The implications of these findings for targeting safety interventions are discussed.
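A discriminant function analysis with three groups yields two discriminant functions, as reported above. The two-class Fisher discriminant below shows the core computation and the comparison against the chance classification rate; the data and group structure are synthetic assumptions, not the pilot survey data.

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher discriminant: project onto the direction that
    maximises between-class scatter relative to within-class scatter,
    then split at the midpoint of the projected class means."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1) +
          np.cov(X1, rowvar=False) * (len(X1) - 1))   # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thr = (m0 @ w + m1 @ w) / 2
    return (X @ w > thr).astype(int)
```

The interesting number in such studies is not the raw accuracy but its margin over chance (33% for three equally likely groups, 50% for two), which is what the 53%-vs-33% comparison in the abstract reports.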
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.
1980-01-01
Ten segments of size 20 x 10 km were aerially photographed and used as training areas for automatic classification. The study area was covered by four LANDSAT paths: 235, 236, 237, and 238. The percentages of overall correct classification for these paths range from 79.56 percent for path 238 to 95.59 percent for path 237.
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
39 CFR 3020.91 - Modification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Change the Mail Classification Schedule § 3020.91 Modification. The Postal Service shall submit corrections to product descriptions in the Mail Classification Schedule that do not constitute a proposal to modify the market dominant product list or the competitive product list as defined in § 3020.30 by filing...
Exercise-Associated Collapse in Endurance Events: A Classification System.
ERIC Educational Resources Information Center
Roberts, William O.
1989-01-01
Describes a classification system devised for exercise-associated collapse in endurance events based on casualties observed at six Twin Cities Marathons. Major diagnostic criteria are body temperature and mental status. Management protocol includes fluid and fuel replacement, temperature correction, and leg cramp treatment. (Author/SM)
Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy
2010-04-01
A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
Cole, William G.; Michael, Patricia; Blois, Marsden S.
1987-01-01
A computer program was created to use information about the statistical distribution of words in journal abstracts to make probabilistic judgments about the level of description (e.g. molecular, cell, organ) of medical text. Statistical analysis of 7,409 journal abstracts taken from three medical journals representing distinct levels of description revealed that many medical words seem to be highly specific to one or another level of description. For example, the word adrenoreceptors occurred only in the American Journal of Physiology, never in the Journal of Biological Chemistry or the Journal of the American Medical Association. Such highly specific words occurred so frequently that the automatic classification program was able to classify correctly 45 out of 45 test abstracts, with 100% confidence. These findings are interpreted in terms of both a theory of the structure of medical knowledge and the pragmatics of automatic classification.
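The level-specific word statistics described above are essentially what a naive Bayes text classifier with add-one smoothing exploits. A hedged, self-contained sketch with invented toy vocabularies (not the original program or corpus):

```python
from collections import Counter
import math

def train(docs_by_level):
    """docs_by_level: {level: [token lists]}. Returns per-level word
    counts plus the shared vocabulary, for smoothed log-probabilities."""
    models, vocab = {}, set()
    for level, docs in docs_by_level.items():
        counts = Counter(w for doc in docs for w in doc)
        models[level] = counts
        vocab |= set(counts)
    return models, vocab

def classify(tokens, models, vocab):
    """Pick the level whose smoothed word distribution gives the tokens
    the highest log-likelihood (add-one smoothing over the vocabulary)."""
    best, best_lp = None, -math.inf
    V = len(vocab)
    for level, counts in models.items():
        total = sum(counts.values())
        lp = sum(math.log((counts[w] + 1) / (total + V)) for w in tokens)
        if lp > best_lp:
            best, best_lp = level, lp
    return best
```

A word like "adrenoreceptors" that occurs in only one journal's training set dominates the log-likelihood sum for that level, which is why a handful of highly level-specific words suffices for confident classification.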
Kelly, J Daniel; Petisco, Cristina; Downey, Gerard
2006-08-23
A collection of authentic artisanal Irish honeys (n = 580) and certain of these honeys adulterated by fully inverted beet syrup (n = 280), high-fructose corn syrup (n = 160), partial invert cane syrup (n = 120), dextrose syrup (n = 160), and beet sucrose (n = 120) was assembled. All samples were adjusted to 70 °Bx and scanned in the mid-infrared region (800-4000 cm(-1)) using an attenuated total reflectance sampling accessory. By use of soft independent modeling of class analogy (SIMCA) and partial least-squares (PLS) classification, authentic honey and honey adulterated by beet sucrose, dextrose syrups, and partial invert corn syrup could be identified with correct classification rates of 96.2%, 97.5%, 95.8%, and 91.7%, respectively. This combination of spectroscopic technique and chemometric methods was not able to unambiguously detect adulteration by high-fructose corn syrup or fully inverted beet syrup.
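PLS classification of spectra can be sketched with a minimal NIPALS PLS1 regression and a 0.5 cutoff on the predicted class value (a common stand-in for PLS-DA). The synthetic "spectra" and the adulterant band position below are invented assumptions, not the paper's data or software.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1: extract n_comp latent components from
    mean-centred spectra X against a 0/1 class variable y, then form
    the regression coefficient vector B."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)      # weight vector
        t = Xc @ w                     # scores
        p = Xc.T @ t / (t @ t)         # X loadings
        q = (yc @ t) / (t @ t)         # y loading
        Xc = Xc - np.outer(t, p)       # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, X.mean(0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    """Classify as adulterated (1) when the predicted value passes 0.5."""
    return ((X - x_mean) @ B + y_mean > 0.5).astype(int)
```
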
Bukreyev, Alexander A.; Chandran, Kartik; Dolnik, Olga; Dye, John M.; Ebihara, Hideki; Leroy, Eric M.; Mühlberger, Elke; Netesov, Sergey V.; Patterson, Jean L.; Paweska, Janusz T.; Saphire, Erica Ollmann; Smither, Sophie J.; Takada, Ayato; Towner, Jonathan S.; Volchkov, Viktor E.; Warren, Travis K.; Kuhn, Jens H.
2013-01-01
The International Committee on Taxonomy of Viruses (ICTV) Filoviridae Study Group prepares proposals on the classification and nomenclature of filoviruses to reflect current knowledge or to correct disagreements with the International Code of Virus Classification and Nomenclature (ICVCN). In recent years, filovirus taxonomy has been corrected and updated, but parts of it remain controversial, and several topics remain to be debated. This article summarizes the decisions and discussion of the currently acting ICTV Filoviridae Study Group since its inauguration in January 2012. PMID:24122154
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% cost decrease, because the pool of selected athletes can be reduced to 92 gymnasts instead of 138 (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
Kanna, Rishi Mugesh; Schroeder, Gregory D.; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R.
2017-01-01
Study Design: Prospective survey-based study. Objectives: The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons’ clinical experience on fracture classification, stability assessment, and decision on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) in the decision process was also studied. Methods: Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience as <10 years (n = 12) and >10 years (n = 29). Results: There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decision on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except in A1 and C type injuries. Conclusion: Surgeons’ experience did not significantly affect overall fracture classification, evaluation of stability, or treatment planning. Surgeons with less experience had a higher percentage of correct classification in A3 and A4 injuries.
Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups. PMID:28815158
Using XMM-Newton and Optical Photometry to Figure Out CVs
NASA Astrophysics Data System (ADS)
Szkody, P.; Homer, L.; Henden, A.
2006-06-01
X-ray light curves from XMM-Newton combined with optical data from the satellite and ground-based observers provide distinctive shapes and periodicities that give information on the correct classification of cataclysmic variables. Our recent data on three SDSS sources with strong helium emission are used to identify a highly magnetic system (a polar), the spin of the white dwarf in an intermediate polar, and a typical disk accreting system.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No. FDA-2010-N-0026] Medical Devices; Immunology and Microbiology Devices; Classification of Ovarian Adnexal Mass Assessment Score Test System; Correction AGENCY: Food and Drug Administration, HHS. ACTION...
12 CFR 1229.5 - Capital distributions for adequately capitalized Banks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CAPITAL CLASSIFICATIONS AND PROMPT CORRECTIVE ACTION Federal Home Loan Banks § 1229.5 Capital... classification of adequately capitalized. A Bank may not make a capital distribution if such distribution would... redeem its shares of stock if the transaction is made in connection with the issuance of additional Bank...
Estimation and Q-Matrix Validation for Diagnostic Classification Models
ERIC Educational Resources Information Center
Feng, Yuling
2013-01-01
Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
77 FR 32010 - Applications (Classification, Advisory, and License) and Documentation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-31
... DEPARTMENT OF COMMERCE Bureau of Industry and Security 15 CFR Part 748 Applications (Classification, Advisory, and License) and Documentation CFR Correction 0 In Title 15 of the Code of Federal... fourth column of the table, the two entries for ``National Semiconductor Hong Kong Limited'' are removed...
Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.
2007-01-01
We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
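Logistic mortality models of this kind regress a dead/alive outcome on the growth indices. A minimal gradient-descent sketch (the single feature and synthetic data below are assumptions, not the tree-ring dataset):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression; the columns of X would
    be growth indices such as average growth, growth trend, and count of
    abrupt declines."""
    X1 = np.column_stack([np.ones(len(X)), X])   # intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        beta += lr * X1.T @ (y - p) / len(y)     # log-likelihood gradient
    return beta

def predict_mortality(X, beta, threshold=0.5):
    X1 = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X1 @ beta)) > threshold).astype(int)
```

Moving the threshold away from 0.5 trades dead-tree against live-tree classification rates, which is why such studies report the two percentages separately rather than a single accuracy.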
Zur, Moran; Hanson, Allison S; Dahan, Arik
2014-09-30
While the solubility parameter is fairly straightforward when assigning BCS classification, the intestinal permeability (Peff) is more complex than generally recognized. In this paper we emphasize this complexity through the analysis of codeine, a commonly used antitussive/analgesic drug. Codeine was previously classified as a low-permeability compound, based on its lower LogP compared to metoprolol, a marker for the low-high permeability class boundary. In contrast, a high fraction of dose absorbed (Fabs) has been reported for codeine, which challenges the generally recognized Peff-Fabs correlation. The purpose of this study was to clarify this ambiguity through elucidation of codeine's BCS solubility/permeability class membership. Codeine's BCS solubility class was determined, and its intestinal permeability throughout the small intestine was investigated, both in vitro and in vivo in rats. Codeine was found to be unequivocally a high-solubility compound. All in vitro studies indicated that codeine's permeability is higher than metoprolol's. In vivo studies in rats showed similar permeability for both drugs throughout the entire small intestine. In conclusion, codeine was found to be a BCS Class I compound. No Peff-Fabs discrepancy is involved in its absorption; rather, the case reflects the risk of assigning BCS classification based merely on limited physicochemical characteristics. A thorough investigation using multiple experimental methods is prudent before assigning a BCS classification, to avoid misjudgment in various settings, e.g., drug discovery, formulation design, drug development and regulation. Copyright © 2013 Elsevier B.V. All rights reserved.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step in data analysis would be to use the entire classification error matrices, using the methods of discrete multivariate analysis or of multivariate analysis of variance.
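The row/column conventions above map directly onto code: overall accuracy is the diagonal sum over the grand total, commission errors come from the rows, omission errors from the columns, and Cohen's kappa (often reported with such matrices) corrects the overall accuracy for chance agreement. A minimal sketch:

```python
import numpy as np

def accuracy_summary(cm):
    """cm[i, j]: rows = interpretation, columns = verification.
    Returns overall accuracy, Cohen's kappa, and per-class commission
    and omission error rates."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    pe = (cm.sum(1) * cm.sum(0)).sum() / n ** 2   # chance agreement
    kappa = (overall - pe) / (1 - pe)
    commission = 1 - diag / cm.sum(1)   # row-wise (interpretation) errors
    omission = 1 - diag / cm.sum(0)     # column-wise (verification) errors
    return overall, kappa, commission, omission
```
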
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-10-31
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (worn on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transitions between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or of the side employed within a given gear.
75 FR 33989 - Export Administration Regulations: Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... 0694-AE69 Export Administration Regulations: Technical Corrections AGENCY: Bureau of Industry and... section of Export Control Classification Number 2B001 and the other is in the Technical Note on Adjusted... language regarding certain performance criteria of turning machines covered by Export Control...
Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie
2008-01-01
Reliability of classification performance is important for many biomedical applications. A classification model that considers reliability in its development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG) determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural network, multilayer perceptron (MLP) neural network, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
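The Kappa statistic quoted above measures agreement beyond chance between the ECG-based classifier and the PSG reference. A minimal sketch of Cohen's kappa for two label sequences (an illustrative stand-in, not the study's implementation):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Agreement beyond chance: (observed - expected) / (1 - expected)."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement from the marginal label frequencies.
    freq_t = Counter(y_true)
    freq_p = Counter(y_pred)
    expected = sum(freq_t[k] * freq_p.get(k, 0) for k in freq_t) / (n * n)
    return (observed - expected) / (1 - expected)
```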
Large-scale classification of traffic signs under real-world conditions
NASA Astrophysics Data System (ADS)
Hazelhoff, Lykele; Creusen, Ivo; van de Wouw, Dennis; de With, Peter H. N.
2012-02-01
Traffic sign inventories are important to governmental agencies as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically based on street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter, since sign types are very similar with only minor differences within the sign, a high number of different signs is involved and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs, based on the Bag of Words approach for generic object classification. We extend the approach with a flexible, modular codebook to model the specific features of each sign type independently, in order to emphasize the inter-sign differences instead of the parts common to all sign types. Additionally, this allows us to model and label the present false detections. Furthermore, analysis of the classification output identifies the unreliable results. This classification system has been extensively tested for three different sign classes, covering 60 different sign types in total. These three data sets contain the sign detection results on street-level panoramic images, extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.
Evaluation of an Algorithm to Predict Menstrual-Cycle Phase at the Time of Injury.
Tourville, Timothy W; Shultz, Sandra J; Vacek, Pamela M; Knudsen, Emily J; Bernstein, Ira M; Tourville, Kelly J; Hardy, Daniel M; Johnson, Robert J; Slauterbeck, James R; Beynnon, Bruce D
2016-01-01
Women are 2 to 8 times more likely to sustain an anterior cruciate ligament (ACL) injury than men, and previous studies indicated an increased risk for injury during the preovulatory phase of the menstrual cycle (MC). However, investigations of risk rely on retrospective classification of MC phase, and no tools for this have been validated. To evaluate the accuracy of an algorithm for retrospectively classifying MC phase at the time of a mock injury based on MC history and salivary progesterone (P4) concentration. Descriptive laboratory study. Research laboratory. Thirty-one healthy female collegiate athletes (age range, 18-24 years) provided serum or saliva (or both) samples at 8 visits over 1 complete MC. Self-reported MC information was obtained on a randomized date (1-45 days) after mock injury, which is the typical timeframe in which researchers have access to ACL-injured study participants. The MC phase was classified using the algorithm as applied in a stand-alone computational fashion and also by 4 clinical experts using the algorithm and additional subjective hormonal history information to help inform their decision. To assess algorithm accuracy, phase classifications were compared with the actual MC phase at the time of mock injury (ascertained using urinary luteinizing hormone tests and serial serum P4 samples). Clinical expert and computed classifications were compared using κ statistics. Fourteen participants (45%) experienced anovulatory cycles. The algorithm correctly classified MC phase for 23 participants (74%): 22 (76%) of 29 who were preovulatory/anovulatory and 1 (50%) of 2 who were postovulatory. Agreement between expert and algorithm classifications ranged from 80.6% (κ = 0.50) to 93% (κ = 0.83). Classifications based on same-day saliva sample and optimal P4 threshold were the same as those based on MC history alone (87.1% correct). Algorithm accuracy varied during the MC but at no time were both sensitivity and specificity levels acceptable. 
These findings raise concerns about the accuracy of previous retrospective MC-phase classification systems, particularly in a population with a high occurrence of anovulatory cycles.
Application of Wavelet Transform for PDZ Domain Classification
Daqrouq, Khaled; Alhmouz, Rami; Balamesh, Ahmed; Memic, Adnan
2015-01-01
PDZ domains have been identified as part of an array of signaling proteins that are often unrelated, except for the well-conserved structural PDZ domain they contain. These domains have been linked to many disease processes including common avian influenza, as well as very rare conditions such as Fraser and Usher syndromes. Historically, based on the interactions and the nature of bonds they form, PDZ domains have most often been classified into one of three classes (class I, class II and others - class III), a classification that is directly dependent on their binding partner. In this study, we report on three unique feature extraction approaches based on the bigram and trigram occurrence and existence rearrangements within the domain's primary amino acid sequences in assisting PDZ domain classification. Wavelet packet transform (WPT) and Shannon entropy denoted by wavelet entropy (WE) feature extraction methods were proposed. Using 115 unique human and mouse PDZ domains, the existence rearrangement approach yielded a high recognition rate (78.34%), which outperformed our occurrence-rearrangement-based method. The recognition rate was 81.41% with the validation technique. The method reported for PDZ domain classification from primary sequences proved to be an encouraging approach for obtaining consistent classification results. We anticipate that by increasing the database size, we can further improve feature extraction and correct classification. PMID:25860375
Large-scale optimization-based classification models in medicine and biology.
Lee, Eva K
2007-06-01
We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. 
This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
NASA Astrophysics Data System (ADS)
Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye
2016-06-01
This study aims to compare classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. Istanbul metropolitan city of Turkey, with a population of around 14 million, having different landscape characteristics was selected as study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from CORINE nomenclature were used as main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained and image pre-processing steps like atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30m pixel size after geometric correction and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the classification, accuracy results were compared to find out the best approach to create a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
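Accuracy comparisons of this kind start from the error (confusion) matrix built on shared reference points; the overall accuracy is the diagonal sum over the total count. A minimal sketch (counts hypothetical, not from the Istanbul study):

```python
def overall_accuracy(matrix):
    """Overall accuracy from a square error (confusion) matrix:
    correctly classified reference points divided by all reference points."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total
```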
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-05
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, including also LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance in particular for plastics with similar characteristics, such as LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was realized utilizing the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was thus applied to evaluate the different plastic spectral attributes finalized to perform their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built allowing the five different polymers to be recognized. The proposed methodology, based on hierarchical classification, is very powerful and fast, allowing recognition of the five different polymers in a single step. Copyright © 2018 Elsevier B.V. All rights reserved.
Provenance establishment of coffee using solution ICP-MS and ICP-AES.
Valentin, Jenna L; Watling, R John
2013-11-01
Statistical interpretation of the concentrations of 59 elements, determined using solution based inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma emission spectroscopy (ICP-AES), was used to establish the provenance of coffee samples from 15 countries across five continents. Data confirmed that the harvest year, degree of ripeness and whether the coffees were green or roasted had little effect on the elemental composition of the coffees. The application of linear discriminant analysis and principal component analysis of the elemental concentrations permitted up to 96.9% correct classification of the coffee samples according to their continent of origin. When samples from each continent were considered separately, up to 100% correct classification of coffee samples into their countries and plantations of origin was achieved. This research demonstrates the potential of using elemental composition, in combination with statistical classification methods, for accurate provenance establishment of coffee. Copyright © 2013 Elsevier Ltd. All rights reserved.
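Full linear discriminant analysis requires covariance estimation; as a simplified stand-in for the classification step, a nearest-centroid classifier over element-concentration vectors conveys the idea (origin labels and concentration values hypothetical, not the study's data):

```python
def fit_centroids(training):
    """training: {origin: list of concentration vectors} -> per-origin mean vector."""
    return {origin: [sum(col) / len(rows) for col in zip(*rows)]
            for origin, rows in training.items()}

def classify(centroids, sample):
    """Assign a sample to the origin whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda origin: dist2(centroids[origin], sample))
```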
Classification of cancerous cells based on the one-class problem approach
NASA Astrophysics Data System (ADS)
Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert
1996-03-01
One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. This approach is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
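The one-class idea, training only on the positive (cancerous) patterns and accepting anything sufficiently similar, can be sketched with a centroid-plus-radius acceptance test. This is a toy stand-in for the Fuzzy ARTMAP network used in the paper, with hypothetical feature vectors:

```python
def fit_one_class(patterns):
    """Learn a centroid and an acceptance radius from positive-class patterns only."""
    centroid = [sum(col) / len(patterns) for col in zip(*patterns)]
    def dist(x):
        return sum((u - v) ** 2 for u, v in zip(x, centroid)) ** 0.5
    radius = max(dist(x) for x in patterns)  # farthest training pattern
    return centroid, radius

def is_positive(model, x):
    """Accept a pattern as in-class if it lies within the learned radius."""
    centroid, radius = model
    d = sum((u - v) ** 2 for u, v in zip(x, centroid)) ** 0.5
    return d <= radius
```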
ERIC Educational Resources Information Center
Potter, Penny F.; Graham-Moore, Brian E.
Most organizations planning to assess adverse impact or perform a stock analysis for affirmative action planning must correctly classify their jobs into appropriate occupational categories. Two methods of job classification were assessed in a combination archival and field study. Classification results from expert judgment of functional job…
Segmentation schema for enhancing land cover identification: A case study using Sentinel 2 data
NASA Astrophysics Data System (ADS)
Mongus, Domen; Žalik, Borut
2018-04-01
Land monitoring is performed increasingly using high and medium resolution optical satellites, such as the Sentinel-2. However, optical data is inevitably subjected to the variable operational conditions under which it was acquired. Overlapping of features caused by shadows, soft transitions between shadowed and non-shadowed regions, and temporal variability of the observed land-cover types require radiometric corrections. This study examines a new approach to enhancing the accuracy of land cover identification that resolves this problem. The proposed method constructs an ensemble-type classification model with weak classifiers tuned to the particular operational conditions under which the data was acquired. Iterative segmentation over the learning set is applied for this purpose, where feature space is partitioned according to the likelihood of misclassifications introduced by the classification model. As these are a consequence of overlapping features, such partitioning avoids the need for radiometric corrections of the data, and divides land cover types implicitly into subclasses. As a result, improved performance of all tested classification approaches was measured during the validation conducted on Sentinel-2 data. The highest accuracies in terms of F1-scores were achieved using the Naive Bayes Classifier as the weak classifier, while supplementing original spectral signatures with normalised difference vegetation index and texture analysis features, namely, average intensity, contrast, homogeneity, and dissimilarity. In total, an F1-score of nearly 95% was achieved in this way, with F1-scores of each particular land cover type reaching above 90%.
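The normalised difference vegetation index used to supplement the spectral signatures is computed per pixel from the near-infrared and red reflectances; a minimal sketch (band values hypothetical):

```python
def ndvi(nir, red):
    """Normalised difference vegetation index: (NIR - red) / (NIR + red)."""
    if nir + red == 0:
        return 0.0  # convention chosen here for zero-reflectance pixels
    return (nir - red) / (nir + red)
```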
External validation of Medicare claims codes for digital mammography and computer-aided detection.
Fenton, Joshua J; Zhu, Weiwei; Balch, Steven; Smith-Bindman, Rebecca; Lindfors, Karen K; Hubbard, Rebecca A
2012-08-01
While Medicare claims are a potential resource for clinical mammography research or quality monitoring, the validity of key data elements remains uncertain. Claims codes for digital mammography and computer-aided detection (CAD), for example, have not been validated against a credible external reference standard. We matched Medicare mammography claims for women who received bilateral mammograms from 2003 to 2006 to corresponding mammography data from the Breast Cancer Surveillance Consortium (BCSC) registries in four U.S. states (N = 253,727 mammograms received by 120,709 women). We assessed the accuracy of the claims-based classifications of bilateral mammograms as either digital versus film and CAD versus non-CAD relative to a reference standard derived from BCSC data. Claims data correctly classified the large majority of film and digital mammograms (97.2% and 97.3%, respectively), yielding excellent agreement beyond chance (κ = 0.90). Claims data correctly classified the large majority of CAD mammograms (96.6%) but a lower percentage of non-CAD mammograms (86.7%). Agreement beyond chance remained high for CAD classification (κ = 0.83). From 2003 to 2006, the predictive values of claims-based digital and CAD classifications increased as the sample prevalences of each technology increased. Medicare claims data can accurately distinguish film and digital bilateral mammograms and mammograms conducted with and without CAD. The validity of Medicare claims data regarding film versus digital mammography and CAD suggests that these data elements can be useful in research and quality improvement. ©2012 AACR.
Scheeres, Korine; Knoop, Hans; van der Meer, Jos; Bleijenberg, Gijs
2009-04-01
Effective treatment of chronic fatigue syndrome (CFS) with cognitive behavioural therapy (CBT) relies on a correct classification of so-called 'fluctuating active' versus 'passive' patients. For successful treatment with CBT it is especially important to recognise the passive patients and give them a tailored treatment protocol. In the present study it was evaluated whether a CFS patient's physical activity pattern can be assessed most accurately with the 'Activity Pattern Interview' (API), the International Physical Activity Questionnaire (IPAQ) or the CFS-Activity Questionnaire (CFS-AQ). The three instruments were validated against actometers. Actometers are currently the best and most objective instruments to measure physical activity, but they are too expensive and time consuming for most clinical practice settings. In total, 226 CFS patients enrolled for CBT therapy answered the API at intake and filled in the two questionnaires. Directly after intake they wore the actometer for two weeks. Based on receiver operating characteristic (ROC) curves, the validity of the three methods was assessed and compared. Both the API and the two questionnaires had an acceptable validity (0.64 to 0.71). None of the three instruments was significantly better than the others. The proportion of false predictions was rather high for all three instruments. The IPAQ had the highest proportion of correct passive predictions (sensitivity 70.1%). The validity of all three instruments appeared to be fair, and all showed rather high proportions of false classifications. Hence, in fact, none of the tested instruments could really be called satisfactory. Because the IPAQ proved to be the best at correctly predicting 'passive' CFS patients, which is most essentially related to treatment results, it was concluded that the IPAQ is the preferable alternative to an actometer when treating CFS patients in clinical practice.
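The sensitivity figure quoted for the IPAQ (the proportion of truly passive patients predicted as passive) follows the usual definition, paired with specificity for the non-passive patients; a minimal sketch (labels hypothetical):

```python
def sensitivity_specificity(y_true, y_pred, positive="passive"):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)
```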
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
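Decision trees of this kind recursively pick attribute thresholds that best separate the high-effort modules from the rest; a one-level sketch of the core split-selection step (a single stump, attribute values hypothetical, not the NASA data):

```python
def best_stump(values, labels):
    """For one numeric attribute, find the threshold that maximises correct
    classification of binary labels (True = high development effort)."""
    best = (None, 0)  # (threshold, number correctly classified)
    for threshold in sorted(set(values)):
        # Predict high effort when the attribute exceeds the threshold.
        correct = sum((v > threshold) == lab for v, lab in zip(values, labels))
        if correct > best[1]:
            best = (threshold, correct)
    return best
```

A full tree generator would apply this selection recursively to each resulting partition.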
77 FR 16661 - Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
...-0124] Tuberculosis in Cattle and Bison; State and Zone Designations; NM; Correction AGENCY: Animal and... in the regulatory text of an interim rule that amended the bovine tuberculosis regulations by establishing two separate zones with different tuberculosis risk classifications for the State of New Mexico...
Gupta, Mamta; Vang, Russell; Yemelyanova, Anna V; Kurman, Robert J; Li, Fanghong Rose; Maambo, Emily C; Murphy, Kathleen M; DeScipio, Cheryl; Thompson, Carol B; Ronnett, Brigitte M
2012-12-01
Distinction of hydatidiform moles from nonmolar specimens (NMs) and subclassification of hydatidiform moles as complete hydatidiform mole (CHM) and partial hydatidiform mole (PHM) are important for clinical practice and investigational studies; however, diagnosis based solely on morphology is affected by interobserver variability. Molecular genotyping can distinguish these entities by discerning androgenetic diploidy, diandric triploidy, and biparental diploidy to diagnose CHMs, PHMs, and NMs, respectively. Eighty genotyped cases (27 CHMs, 27 PHMs, 26 NMs) were selected from a series of 200 potentially molar specimens previously diagnosed using p57 immunohistochemistry and genotyping. Cases were classified by 6 pathologists (3 faculty level gynecologic pathologists and 3 fellows) on the basis of morphology, masked to p57 immunostaining and genotyping results, into 1 of 3 categories (CHM, PHM, or NM) during 2 diagnostic rounds; a third round incorporating p57 immunostaining results was also conducted. Consensus diagnoses (those rendered by 2 of 3 pathologists in each group) were also determined. Performance of experienced gynecologic pathologists versus fellow pathologists was compared, using genotyping results as the gold standard. Correct classification of CHMs ranged from 59% to 100%; there were no statistically significant differences in performance of faculty versus fellows in any round (P-values of 0.13, 0.67, and 0.54 for rounds 1 to 3, respectively). Correct classification of PHMs ranged from 26% to 93%, with statistically significantly better performance of faculty versus fellows in each round (P-values of 0.04, <0.01, and <0.01 for rounds 1 to 3, respectively). Correct classification of NMs ranged from 31% to 92%, with statistically significantly better performance of faculty only in round 2 (P-values of 1.0, <0.01, and 0.61 for rounds 1 to 3, respectively). 
Correct classification of all cases combined ranged from 51% to 75% by morphology and 70% to 80% with p57, with statistically significantly better performance of faculty only in round 2 (P-values of 0.69, <0.01, and 0.15 for rounds 1 to 3, respectively). p57 immunostaining significantly improved recognition of CHMs (P<0.01) and had high reproducibility (κ=0.93 to 0.96) but had no impact on distinction of PHMs and NMs. Genotyping provides a definitive diagnosis for the ∼25% to 50% of cases that are misclassified by morphology, especially those that are also unresolved by p57 immunostaining.
NASA Astrophysics Data System (ADS)
Petit, H. A.; Irassar, E. F.; Barbosa, M. R.
2018-01-01
Manufactured sands are particulate materials obtained as a by-product of rock crushing. Particle sizes in the sand can be as high as 6 mm and as low as a few microns. The concrete industry has been increasingly using these sands as fine aggregates to replace natural sands. The main shortcoming is the excess of particles smaller than 0.075 mm (dust). This problem has traditionally been solved by a washing process. Air classification is being studied to replace the washing process and avoid the use of water. The complex classification process can only be understood with the aid of CFD-DEM simulations. This paper evaluates the applicability of a cross-flow air classifier to reduce the amount of dust in manufactured sands. Computational fluid dynamics (CFD) and discrete element modelling (DEM) were used for the assessment. Results show that the correct classification set-up improves the size distribution of the raw materials. The cross-flow air classification is found to be influenced by the particle size distribution and the turbulence inside the chamber. The classifier can be re-designed to work at low inlet velocities to produce manufactured sand for the concrete industry.
A Study of Feature Combination for Vehicle Detection Based on Image Processing
2014-01-01
Video analytics play a critical role in most recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported for image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the combination potential of popular features for vehicle classification has been reported. In this study the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, taking into account also the vehicle pose. The study unveils the limitations of single-feature based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification. PMID:24672299
Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul
2010-03-01
The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (representation of a colour image) into quaternion singular vector and singular value component matrices exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values, as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test set were 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.
Automated speech analysis applied to laryngeal disease categorization.
Gelzinis, A; Verikas, A; Bacauskiene, M
2008-07-01
The long-term goal of the work is a decision support system for diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening of laryngeal diseases. The effectiveness of 11 different feature sets in classification of voice recordings of the sustained phonation of the vowel sound /a/ into a healthy and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, SVM, and a committee built using various aggregation options are used for the classification. The study was made using a mixed-gender database containing 312 voice recordings. The correct classification rate of 84.6% was achieved when using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features as well as linear prediction cosine transform coefficients were amongst the feature sets providing the best performance. In the case of two class classification, using recordings from 79 subjects representing the pathological and 69 the healthy class, the correct classification rate of 95.5% was obtained from a five member committee. Again the pitch and amplitude perturbation measures provided the best performance.
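One common aggregation option for such a committee is a plain majority vote over the member predictions; a minimal sketch (the member classifiers here are placeholder functions, not the paper's SVM members):

```python
from collections import Counter

def committee_predict(members, x):
    """Majority vote over the member classifiers' predictions;
    ties resolve to the label seen first among the votes."""
    votes = [member(x) for member in members]
    return Counter(votes).most_common(1)[0][0]
```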
Mengel, M; Sis, B; Halloran, P F
2007-10-01
The Banff process defined the diagnostic histologic lesions for renal allograft rejection and created a standardized classification system where none had existed. By correcting this deficit the process had universal impact on clinical practice and clinical and basic research. All trials of new drugs since the early 1990s benefited, because the Banff classification of lesions permitted the end point of biopsy-proven rejection. The Banff process has strengths, weaknesses, opportunities and threats (SWOT). The strength is its self-organizing group structure to create consensus. Consensus does not mean correctness: defining consensus is essential if a widely held view is to be proved wrong. The weaknesses of the Banff process are the absence of an independent external standard to test the classification; and its almost exclusive reliance on histopathology, which has inherent limitations in intra- and interobserver reproducibility, particularly at the interface between borderline and rejection, is exactly where clinicians demand precision. The opportunity lies in the new technology such as transcriptomics, which can form an external standard and can be incorporated into a new classification combining the elegance of histopathology and the objectivity of transcriptomics. The threat is the degree to which the renal transplant community will participate in and support this process.
Bruns, Nora; Dransfeld, Frauke; Hüning, Britta; Hobrecht, Julia; Storbeck, Tobias; Weiss, Christel; Felderhoff-Müser, Ursula; Müller, Hanna
2017-02-01
Neurodevelopmental outcome after prematurity is crucial. The aim was to compare two amplitude-integrated EEG (aEEG) classifications (Hellström-Westas (HW), Burdjalov) for outcome prediction. We recruited 65 infants ≤32 weeks gestational age with aEEG recordings within the first 72 h of life and Bayley testing at 24 months corrected age or death. Statistical analyses were performed for each 24 h section to determine whether very immature/depressed or mature/developed patterns predict survival/neurological outcome and to find predictors for mental development index (MDI) and psychomotor development index (PDI) at 24 months corrected age. On day 2, deceased infants showed no cycling in 80% (HW, p = 0.0140) and 100% (Burdjalov, p = 0.0041). The Burdjalov total score significantly differed between groups on day 2 (p = 0.0284) and the adapted Burdjalov total score on day 2 (p = 0.0183) and day 3 (p = 0.0472). Cycling on day 3 (HW; p = 0.0059) and background on day 3 (HW; p = 0.0212) are independent predictors for MDI (p = 0.0016) whereas no independent predictor for PDI was found (multiple regression analyses). Cycling in both classifications is a valuable tool to assess chance of survival. The classification by HW is also associated with long-term mental outcome. What is Known: •Neurodevelopmental outcome after preterm birth remains one of the major concerns in neonatology. •aEEG is used to measure brain activity and brain maturation in preterm infants. What is New: •The two common aEEG classifications and scoring systems described by Hellström-Westas and Burdjalov are valuable tools to predict neurodevelopmental outcome when performed within the first 72 h of life. •Both aEEG classifications are useful to predict chance of survival. The classification by Hellström-Westas can also predict long-term outcome at corrected age of 2 years.
Automatic photointerpretation for land use management in Minnesota
NASA Technical Reports Server (NTRS)
Swanlund, G. D. (Principal Investigator); Pile, D. R.
1973-01-01
The author has identified the following significant results. The Minnesota Iron Range area was selected as one of the land use areas to be evaluated. Six classes were selected: (1) hardwood; (2) conifer; (3) water (including in mines); (4) mines, tailings and wet areas; (5) open area; and (6) urban. Initial classification results show a correct classification of 70.1 to 95.4% for the six classes. This is extremely good. It can be further improved since there were some incorrect classifications in the ground truth.
Morphometric classification of Spanish thoroughbred stallion sperm heads.
Hidalgo, Manuel; Rodríguez, Inmaculada; Dorado, Jesús; Soler, Carles
2008-01-30
This work used semen samples collected from 12 stallions and assessed for sperm morphometry by the Sperm Class Analyzer (SCA) computer-assisted system. A discriminant analysis was performed on the morphometric data from that sperm to obtain a classification matrix for sperm head shape. Thereafter, we defined six types of sperm head shape. Classification of sperm head by this method obtained a globally correct assignment of 90.1%. Moreover, significant differences (p<0.05) were found between animals for all the sperm head morphometric parameters assessed.
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed by the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking; these are motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
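A minimal sketch of the growing phase, assuming a GRNN-style classifier in which stored patterns act as Gaussian kernel centres; the kernel width, the stopping rule, and the toy data are illustrative, and the dual-stage shrinking is omitted:

```python
import numpy as np

class GRNNClassifier:
    """Minimal GRNN-style classifier: stored patterns are kernel
    centres, and a sample goes to the class with the largest summed
    Gaussian activation.  Sigma and the growing rule are illustrative,
    not the paper's exact scheme."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.centres = []   # stored patterns
        self.labels = []

    def add(self, x, y):
        self.centres.append(np.asarray(x, float))
        self.labels.append(y)

    def predict(self, x):
        c = np.array(self.centres)
        w = np.exp(-np.sum((c - x) ** 2, axis=1) / (2 * self.sigma ** 2))
        classes = np.array(self.labels)
        # class with the largest summed kernel activation wins
        return max(set(self.labels), key=lambda k: w[classes == k].sum())

    def grow(self, X, y):
        """Network-growing phase: keep adding misclassified patterns
        until the whole incoming set is classified correctly."""
        if not self.centres:
            self.add(X[0], y[0])
        changed = True
        while changed:
            changed = False
            for xi, yi in zip(X, y):
                if self.predict(xi) != yi:
                    self.add(xi, yi)
                    changed = True

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = [0, 0, 1, 1]
net = GRNNClassifier()
net.grow(X, y)
assert all(net.predict(xi) == yi for xi, yi in zip(X, y))
```

Note that only a subset of the incoming patterns ends up stored, which is the redundancy the paper's shrinking stage then prunes further.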
ERIC Educational Resources Information Center
Scott, Marcia Strong; Delgado, Christine F.; Tu, Shihfen; Fletcher, Kathryn L.
2005-01-01
In this study, predictive classification accuracy was used to select those tasks from a kindergarten screening battery that best identified children who, three years later, were classified as educable mentally handicapped or as having a specific learning disability. A subset of measures enabled correct classification of 91% of the children in…
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
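The maximum-similarity matching with a threshold for excluding poorly matched isolates can be sketched as follows; the simple-matching similarity and the binary band-presence fingerprints are illustrative stand-ins for the actual rep-PCR or antibiotic resistance profiles:

```python
import numpy as np

def classify_max_similarity(isolate, library, threshold=0.8):
    """Maximum-similarity source assignment: compare an isolate's
    fingerprint against every library isolate (here with a simple
    matching coefficient) and return the source of the best match,
    or None when the best similarity falls below the threshold
    (the isolate stays unclassified to reduce false positives)."""
    best_source, best_sim = None, -1.0
    for source, fingerprints in library.items():
        for fp in fingerprints:
            sim = np.mean(np.asarray(fp) == np.asarray(isolate))
            if sim > best_sim:
                best_source, best_sim = source, sim
    return best_source if best_sim >= threshold else None

# hypothetical binary band-presence fingerprints per known source
library = {
    "human":   [[1, 1, 0, 1, 0, 0], [1, 1, 0, 1, 1, 0]],
    "seagull": [[0, 0, 1, 0, 1, 1], [0, 1, 1, 0, 1, 1]],
}
print(classify_max_similarity([1, 1, 0, 1, 1, 0], library))  # -> human
print(classify_max_similarity([1, 0, 1, 0, 0, 1], library))  # -> None
```

Raising the threshold trades prediction coverage for fewer false positives, which is exactly the effective-sample-size trade-off the study reports.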
The applicability of FORMOSAT-2 images to coastal waters/bodies classification
NASA Astrophysics Data System (ADS)
Teodoro, Ana; Duarte, Lia; Silva, Pedro
2015-10-01
FORMOSAT-2, launched in May 2004, is a Taiwanese satellite developed by the National Space Organization (NSPO) of Taiwan. The Remote Sensing Instrument (RSI) is a high spatial- resolution optical sensor onboard FORMOSAT-2 with a 2 m spatial resolution in the panchromatic (PAN) band and a 8 m spatial resolution in four multispectral (MS) bands from the visible to near-infrared region. The RSI images acquired during the daytime can be used for land cover/use studies, natural and forestry resources, disaster prevention and rescue works. The main objectives of this work were to investigate the application of FORMOSAT-2 data in order to: (1) identify beach patterns; (2) correctly extract a sand spit boundary. Different pixel-based and object-based classification algorithms were applied to four FORMOSAT-2 scenes and the results were compared with the results already obtained in previous works. Analyzing the results obtained, is possible to conclude that the FORMOSAT-2 data are adequate to the correct identification of beach patterns and to an accurately extraction of the sand spit boundary (Douro river estuary, Porto, Portugal). The results obtained were compared with the results already achieved with IKONOS-2 images. In conclusion, this research has demonstrated that the FORMOSAT-2 data and image processing techniques employed are an effective methodology to identify beach patterns and to correctly extract sand spit boundaries. In the future more FORMOSAT-2 images will be processed and will be consider the use of pan sharped images and data mining algorithms.
CSE database: extended annotations and new recommendations for ECG software testing.
Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie
2017-08-01
Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. The accuracy quantification of the correct classification is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development.
The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
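The three accuracy measures can be computed per recording by comparing the software's set of diagnoses against the 4R consensus set; the diagnoses below are hypothetical examples, not entries from the CSE database:

```python
def diagnosis_agreement(predicted, reference):
    """Agreement between a predicted diagnosis set and the reference
    (consensus) set for one recording: sensitivity = |P∩R|/|R|,
    positive predictive value = |P∩R|/|P|, Jaccard = |P∩R|/|P∪R|."""
    p, r = set(predicted), set(reference)
    tp = len(p & r)  # diagnoses found in both sets
    return {
        "sensitivity": tp / len(r),
        "ppv": tp / len(p),
        "jaccard": tp / len(p | r),
    }

# hypothetical diagnoses for one ECG recording
consensus = {"sinus rhythm", "LBBB", "anterior MI"}
software = {"sinus rhythm", "LBBB", "LVH"}
m = diagnosis_agreement(software, consensus)
# sensitivity and ppv are 2/3; jaccard is 2/4 = 0.5
```

Averaging these per-recording values over a test set gives figures directly comparable with the 79-87% ranges quoted above.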
Classification and disease prediction via mathematical programming
NASA Astrophysics Data System (ADS)
Lee, Eva K.; Wu, Tsung-Lin
2007-11-01
In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview on various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our effort of novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. 
Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
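The reserved-judgment idea can be sketched as a simple decision rule: a sample is assigned to a group only when its score clearly separates from the runner-up, and is otherwise held back for a later classification stage. The margin rule below is an illustrative stand-in for the chapter's optimization-based formulation:

```python
import numpy as np

def classify_with_reserve(scores, margin=0.2):
    """Reserved-judgment rule sketch: assign a sample to the
    top-scoring group only when its score beats the runner-up by at
    least `margin`; otherwise place it in the reserved-judgment region
    (label -1) for a later stage.  `scores` is (n_samples, n_groups)."""
    scores = np.asarray(scores, float)
    order = np.sort(scores, axis=1)
    top, second = order[:, -1], order[:, -2]
    labels = scores.argmax(axis=1)
    labels[top - second < margin] = -1  # reserved judgment
    return labels

scores = [[0.9, 0.05, 0.05],   # clear group 0
          [0.4, 0.35, 0.25],   # ambiguous -> reserved
          [0.1, 0.2, 0.7]]     # clear group 2
print(classify_with_reserve(scores).tolist())  # -> [0, -1, 2]
```

Holding back ambiguous samples is what guards against over-training; the mathematical programming model additionally constrains the misclassification rate directly.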
A hyper-temporal remote sensing protocol for high-resolution mapping of ecological sites
Maynard, Jonathan J.; Karl, Jason W.
2017-01-01
Ecological site classification has emerged as a highly effective land management framework, but its utility at a regional scale has been limited due to the spatial ambiguity of ecological site locations in the U.S. or the absence of ecological site maps in other regions of the world. In response to these shortcomings, this study evaluated the use of hyper-temporal remote sensing (i.e., hundreds of images) for high spatial resolution mapping of ecological sites. We posit that hyper-temporal remote sensing can provide novel insights into the spatial variability of ecological sites by quantifying the temporal response of land surface spectral properties. This temporal response provides a spectral ‘fingerprint’ of the soil-vegetation-climate relationship which is central to the concept of ecological sites. Consequently, the main objective of this study was to predict the spatial distribution of ecological sites in a semi-arid rangeland using a 28-year time series of normalized difference vegetation index from Landsat TM 5 data and modeled using support vector machine classification. Results from this study show that support vector machine classification using hyper-temporal remote sensing imagery was effective in modeling ecological site classes, with a 62% correct classification. These results were compared to Gridded Soil Survey Geographic database and expert delineated maps of ecological sites which had a 51 and 89% correct classification, respectively. An analysis of the effects of ecological state on ecological site misclassifications revealed that sites in degraded states (e.g., shrub-dominated/shrubland and bare/annuals) had a higher rate of misclassification due to their close spectral similarity with other ecological sites. 
This study identified three important factors that need to be addressed to improve future model predictions: 1) sampling designs need to fully represent the range of both within class (i.e., states) and between class (i.e., ecological sites) spectral variability through time, 2) field sampling protocols that accurately characterize key soil properties (e.g., texture, depth) need to be adopted, and 3) additional environmental covariates (e.g. terrain attributes) need to be evaluated that may help further differentiate sites with similar spectral signals. Finally, the proposed hyper-temporal remote sensing framework may provide a standardized approach to evaluate and test our ecological site concepts through examining differences in vegetation dynamics in response to climatic variability and other drivers of land-use change. Results from this study demonstrate the efficacy of the hyper-temporal remote sensing approach for high resolution mapping of ecological sites, and highlights its utility in terms of reduced cost and time investment relative to traditional manual mapping approaches. PMID:28414731
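The NDVI time-series "fingerprint" idea can be sketched as follows; a nearest-class-mean rule is used here as a dependency-free stand-in for the study's support vector machine, and all values are invented:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index for a pixel."""
    return (nir - red) / (nir + red + 1e-9)

# hyper-temporal fingerprint: one NDVI value per acquisition date for
# each pixel (here 5 dates, 4 pixels); each pixel is assigned to the
# ecological site class whose mean fingerprint it is closest to
fingerprints = np.array([[0.20, 0.50, 0.70, 0.50, 0.20],
                         [0.25, 0.55, 0.65, 0.45, 0.20],
                         [0.10, 0.15, 0.20, 0.15, 0.10],
                         [0.10, 0.20, 0.15, 0.10, 0.10]])
class_means = {"grassland":    np.array([0.20, 0.50, 0.70, 0.50, 0.20]),
               "bare/annuals": np.array([0.10, 0.15, 0.18, 0.12, 0.10])}
labels = [min(class_means, key=lambda k: np.linalg.norm(fp - class_means[k]))
          for fp in fingerprints]
```

The spectral-similarity confusion the study reports between degraded states and other sites corresponds here to fingerprints lying nearly equidistant from two class means.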
Challenges in projecting clustering results across gene expression-profiling datasets.
Lusa, Lara; McShane, Lisa M; Reid, James F; De Cecco, Loris; Ambrogi, Federico; Biganzoli, Elia; Gariboldi, Manuela; Pierotti, Marco A
2007-11-21
Gene expression microarray studies for several types of cancer have been reported to identify previously unknown subtypes of tumors. For breast cancer, a molecular classification consisting of five subtypes based on gene expression microarray data has been proposed. These subtypes have been reported to exist across several breast cancer microarray studies, and they have demonstrated some association with clinical outcome. A classification rule based on the method of centroids has been proposed for identifying the subtypes in new collections of breast cancer samples; the method is based on the similarity of the new profiles to the mean expression profile of the previously identified subtypes. Previously identified centroids of five breast cancer subtypes were used to assign 99 breast cancer samples, including a subset of 65 estrogen receptor-positive (ER+) samples, to five breast cancer subtypes based on microarray data for the samples. The effect of mean centering the genes (i.e., transforming the expression of each gene so that its mean expression is equal to 0) on subtype assignment by method of centroids was assessed. Further studies of the effect of mean centering and of class prevalence in the test set on the accuracy of method of centroids classifications of ER status were carried out using training and test sets for which ER status had been independently determined by ligand-binding assay and for which the proportion of ER+ and ER- samples were systematically varied. When all 99 samples were considered, mean centering before application of the method of centroids appeared to be helpful for correctly assigning samples to subtypes, as evidenced by the expression of genes that had previously been used as markers to identify the subtypes. However, when only the 65 ER+ samples were considered for classification, many samples appeared to be misclassified, as evidenced by an unexpected distribution of ER+ samples among the resultant subtypes. 
When genes were mean centered before classification of samples for ER status, the accuracy of the ER subgroup assignments was highly dependent on the proportion of ER+ samples in the test set; this effect of subtype prevalence was not seen when gene expression data were not mean centered. Simple corrections such as mean centering of genes aimed at microarray platform or batch effect correction can have undesirable consequences because patient population effects can easily be confused with these assay-related effects. Careful thought should be given to the comparability of the patient populations before attempting to force data comparability for purposes of assigning subtypes to independent subjects.
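The two operations at issue, gene-wise mean centering and method-of-centroids assignment, can be sketched as below; the comment in `mean_center_genes` notes why centering ties the assignment to the test set's composition. Centroids and data are toy values, not the breast cancer profiles:

```python
import numpy as np

def assign_by_centroids(X, centroids):
    """Method-of-centroids assignment: each sample (column of X,
    genes x samples) goes to the subtype whose centroid profile it
    is most correlated with."""
    corr = np.array([[np.corrcoef(x, c)[0, 1] for c in centroids]
                     for x in X.T])
    return corr.argmax(axis=1)

def mean_center_genes(X):
    """Transform each gene (row) to zero mean across the samples in X.
    The subtracted mean depends on which samples happen to be in X,
    which is how test-set composition (e.g. the proportion of ER+
    samples) leaks into the subsequent assignments."""
    return X - X.mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
centroids = [np.array([1.0, 1.0, -1.0, -1.0]),   # subtype A profile
             np.array([-1.0, -1.0, 1.0, 1.0])]   # subtype B profile
# a skewed test set: five subtype-A-like samples, one subtype-B-like
X = np.column_stack([centroids[0] + 0.1 * rng.standard_normal(4)
                     for _ in range(5)]
                    + [centroids[1] + 0.1 * rng.standard_normal(4)])
raw = assign_by_centroids(X, centroids)
centered = assign_by_centroids(mean_center_genes(X), centroids)
```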
Mechanics of Composite Materials with Different Moduli in Tension and Compression
1978-07-01
[OCR-garbled report documentation page; only fragments are legible. Recoverable content: differences between tensile and compressive moduli of 100% and 400% for carbon-carbon, and a table of buckling loads (lb/in.) for payload bay door panels with various lightning strike protection concepts.]
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 866 [Docket No... Serological Reagents; Correction AGENCY: Food and Drug Administration, HHS. ACTION: Final rule; correction. SUMMARY: In the Federal Register of March 9, 2012 (76 FR 14272), the Food and Drug Administration (FDA...
Multispectral Resource Sampler (MPS): Proof of Concept. Literature survey of atmospheric corrections
NASA Technical Reports Server (NTRS)
Schowengerdt, R. A.; Slater, P. N.
1981-01-01
Work done in combining spectral bands to reduce atmospheric effects on spectral signatures is described. The development of atmospheric models and their use with ground and aerial measurements in correcting spectral signatures is reviewed. An overview of studies of atmospheric effects on the accuracy of scene classification is provided.
2008-09-01
A key element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category.
Transgender Inmates in Prisons.
Routh, Douglas; Abess, Gassan; Makin, David; Stohr, Mary K; Hemmens, Craig; Yoo, Jihye
2017-05-01
Transgender inmates provide a conundrum for correctional staff, particularly when it comes to classification, victimization, and medical and health issues. Using LexisNexis and WestLaw and state Department of Corrections (DOC) information, we collected state statutes and DOC policies concerning transgender inmates. We utilized academic legal research with content analysis to determine whether a statute or policy addressed issues concerning classification procedures, access to counseling services, the initiation and continuation of hormone therapy, and sex reassignment surgery. We found that while more states are providing either statutory or policy guidelines for transgender inmates, a number of states are lagging behind and there is a shortage of guidance dealing with the medical issues related to being transgender.
NASA Astrophysics Data System (ADS)
Lu, Bing; He, Yuhong
2017-06-01
Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. 
Results indicate that UAV-acquired imagery is an incomparable data source for studying fine-scale grassland species composition, owing to its high spatial resolution. The overall accuracy is around 85% for images acquired at different times. Species composition is spatially attributed by topographical features and soil moisture conditions. Spatio-temporal variation of species composition implies the growing process and succession of different species, which is critical for understanding the evolutionary features of grassland ecosystems. Strengths and challenges of applying UAV-acquired imagery for vegetation studies are summarized at the end.
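One common radiometric correction for comparing images acquired at different times is an empirical-line fit per band; this sketch assumes invariant ground targets with known reflectance and is not necessarily the authors' exact workflow:

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets, dn_band):
    """Empirical-line radiometric correction for one band: fit a linear
    gain/offset from the digital numbers of invariant ground targets to
    their known reflectance, then apply it to the whole band."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, 1)
    return gain * np.asarray(dn_band, float) + offset

# hypothetical invariant targets (a dark and a bright tarp), NIR band
dn = [30.0, 210.0]          # digital numbers observed in this flight
rho = [0.05, 0.60]          # known surface reflectance of the tarps
band = np.array([[30.0, 120.0],
                 [210.0, 60.0]])
corrected = empirical_line(dn, rho, band)  # reflectance image
```

Applying the same targets to each flight puts all acquisition dates on a common reflectance scale, which is what makes the object-based classifications comparable over time.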
Bolivian satellite technology program on ERTS natural resources
NASA Technical Reports Server (NTRS)
Brockmann, H. C. (Principal Investigator); Bartolucci C., L.; Hoffer, R. M.; Levandowski, D. W.; Ugarte, I.; Valenzuela, R. R.; Urena E., M.; Oros, R.
1977-01-01
The author has identified the following significant results. Application of digital classification for mapping land use permitted the separation of units at more specific levels in less time. A correct classification of data in the computer has a positive effect on the accuracy of the final products. Land use unit comparison with types of soils as represented by the colors of the coded map showed a class relation. Soil types in relation to land cover and land use demonstrated that vegetation was a positive factor in soils classification. Groupings of image resolution elements (pixels) permit studies of land use at different levels, thereby forming parameters for the classification of soils.
Spectral band selection for classification of soil organic matter content
NASA Technical Reports Server (NTRS)
Henderson, Tracey L.; Szilagyi, Andrea; Baumgardner, Marion F.; Chen, Chih-Chien Thomas; Landgrebe, David A.
1989-01-01
This paper describes the spectral-band-selection (SBS) algorithm of Chen and Landgrebe (1987, 1988, and 1989) and uses the algorithm to classify the organic matter content in the earth's surface soil. The effectiveness of the algorithm was evaluated by comparing the results of classification of the soil organic matter using SBS bands with those obtained using Landsat MSS bands and TM bands, showing that the algorithm was successful in finding important spectral bands for classification of organic matter content. Using the calculated bands, the probabilities of correct classification for climate-stratified data were found to range from 0.910 to 0.980.
Sex determination from the talus and calcaneus measurements.
Gualdi-Russo, Emanuela
2007-09-13
Several studies have demonstrated that discriminant function equations used to determine the sex of a skeleton are population-specific. The purpose of the present research was to develop discriminant function equations for sex determination on the basis of 18 variables on the right and left talus and calcaneus in a modern northern Italian sample. The sample consisted of 118 skeletons (62 males and 56 females) from the Frassetto Collection (University of Bologna). The ages of the individuals ranged from 19 to 70 years. The results indicated that metric traits of the talus (in particular) and calcaneus are good indicators of sexual dimorphism. The percentage of correct classification was high (87.9-95.7%). In view of the differences among current Italian populations, we tested the validity of the discriminant function equations in an independent sample of individuals of different origin (northern and southern Italy). The accuracy of classification was high only for the northern Italians. Most southern Italian males were misclassified as females, confirming the population-specificity of discriminant function equations.
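A two-group discriminant function of the kind used here can be derived with Fisher's rule; the talus measurements below are simulated for illustration, not the Frassetto Collection data:

```python
import numpy as np

def fisher_discriminant(male, female):
    """Two-group linear discriminant: w = S_pooled^-1 (mean_m - mean_f),
    with the cutoff at the midpoint of the projected group means.
    Returns (w, cutoff); score(x) = x @ w, males fall above the cutoff."""
    mu_m, mu_f = male.mean(axis=0), female.mean(axis=0)
    n_m, n_f = len(male), len(female)
    S = ((n_m - 1) * np.cov(male, rowvar=False)
         + (n_f - 1) * np.cov(female, rowvar=False)) / (n_m + n_f - 2)
    w = np.linalg.solve(S, mu_m - mu_f)
    cutoff = (mu_m @ w + mu_f @ w) / 2
    return w, cutoff

# hypothetical talus measurements (length, width in mm)
rng = np.random.default_rng(2)
male = rng.normal([55.0, 42.0], 1.5, size=(60, 2))
female = rng.normal([49.0, 37.0], 1.5, size=(56, 2))
w, cutoff = fisher_discriminant(male, female)
correct = np.mean(np.r_[male @ w > cutoff, female @ w <= cutoff])
```

Because w and the cutoff are estimated from one population's means and covariances, applying them to a population with different body proportions shifts the scores systematically, which is exactly the misclassification the study observed for southern Italian males.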
NASA Astrophysics Data System (ADS)
Lestari, A. W.; Rustam, Z.
2017-07-01
In the last decade, breast cancer has become the focus of world attention, as this disease is one of the leading causes of death among women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify high-dimensional breast cancer data using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in the classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection, where the Laplacian Score is used. The results show a comparison of the accuracy and running time of FPCM and NKFPCM with and without feature selection.
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). The use of high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped Arctic land cover classification accuracies rise into the 80-90% range (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price. High resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite launched in 2000 carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise in the resolution versus cost issue. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of Arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies.
This research found that while the Hyperion sensor produced classification accuracies that were equivalent to those of the TM and ETM+ sensors (approximately 78%), the Hyperion could not match the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, the uninterrupted monitoring of remote locations throughout the world, such as in the Arctic, is in jeopardy. The utilization of the Hyperion and ALI sensors is one way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished. Keeping the Hyperion as a "tasked" sensor can provide scientists with additional imagery and options for their studies without creating an undue storage burden.
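The accuracy measures named above can be read directly off the error matrix. A minimal sketch with an illustrative two-class matrix (rows are taken here as reference classes, columns as mapped classes; the convention must match how the matrix was built):

```python
# Sketch of accuracy assessment from an error (confusion) matrix.
# Rows = reference (ground truth) classes, columns = mapped classes.
# The 2-class counts are illustrative, not the dissertation's data.

def overall_accuracy(m):
    """Fraction of all pixels that lie on the matrix diagonal."""
    return sum(m[i][i] for i in range(len(m))) / sum(sum(row) for row in m)

def producers_accuracy(m, i):
    """Correct pixels of class i over the reference (row) total: omission view."""
    return m[i][i] / sum(m[i])

def users_accuracy(m, j):
    """Correct pixels of class j over the mapped (column) total: commission view."""
    return m[j][j] / sum(row[j] for row in m)

matrix = [[50, 5],    # reference class 0: 50 mapped correctly, 5 confused
          [10, 35]]   # reference class 1: 10 confused, 35 mapped correctly
print(overall_accuracy(matrix))  # -> 0.85
```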
Byun, Wonwoo; Lee, Jung-Min; Kim, Youngwon; Brusseau, Timothy A
2018-03-26
This study examined the accuracy of the Fitbit activity tracker (FF) for quantifying sedentary behavior (SB) and varying intensities of physical activity (PA) in 3-5-year-old children. Twenty-eight healthy preschool-aged children (Girls: 46%, Mean age: 4.8 ± 1.0 years) wore the FF and were directly observed while performing a set of various unstructured and structured free-living activities from sedentary to vigorous intensity. The classification accuracy of the FF for measuring SB, light PA (LPA), moderate-to-vigorous PA (MVPA), and total PA (TPA) was examined by calculating Pearson correlation coefficients (r), mean absolute percent error (MAPE), Cohen's kappa (k), sensitivity (Se), specificity (Sp), and area under the receiver operating characteristic curve (ROC-AUC). The classification accuracies of the FF (ROC-AUC) were 0.92, 0.63, 0.77 and 0.92 for SB, LPA, MVPA and TPA, respectively. Similarly, values of kappa, Se, Sp and percentage of correct classification were consistently high for SB and TPA, but low for LPA and MVPA. The FF demonstrated excellent classification accuracy for assessing SB and TPA, but lower accuracy for classifying LPA and MVPA. Our findings suggest that the FF should be considered as a valid instrument for assessing time spent sedentary and overall physical activity in preschool-aged children.
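The agreement statistics reported above can be sketched from epoch-level counts; the counts and minute totals below are illustrative, not the study's data:

```python
# Sketch of the agreement statistics used above, from binary epoch counts
# (tp/fp/tn/fn) and paired activity totals. All numbers are illustrative.

def sens_spec(tp, fp, tn, fn):
    """Sensitivity (Se) and specificity (Sp) from a 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fp, tn, fn):
    """Chance-corrected agreement between tracker and direct observation."""
    n = tp + fp + tn + fn
    po = (tp + tn) / n                             # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)      # chance 'yes' agreement
    p_no = ((tn + fn) / n) * ((tn + fp) / n)       # chance 'no' agreement
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

def mape(actual, predicted):
    """Mean absolute percent error over paired totals (e.g. minutes of MVPA)."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

For example, `cohens_kappa(40, 10, 45, 5)` gives 0.7 and `mape([10, 20], [12, 18])` gives 15.0.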
Detection of Anomalies in Citrus Leaves Using Laser-Induced Breakdown Spectroscopy (LIBS).
Sankaran, Sindhuja; Ehsani, Reza; Morgan, Kelly T
2015-08-01
Nutrient assessment and management are important to maintain productivity in citrus orchards. In this study, laser-induced breakdown spectroscopy (LIBS) was applied for rapid and real-time detection of citrus anomalies. Laser-induced breakdown spectroscopy spectra were collected from citrus leaves with anomalies such as diseases (Huanglongbing, citrus canker) and nutrient deficiencies (iron, manganese, magnesium, zinc), and compared with those of healthy leaves. Baseline correction, wavelet multivariate denoising, and normalization techniques were applied to the LIBS spectra before analysis. After spectral pre-processing, features were extracted using principal component analysis and classified using two models, quadratic discriminant analysis and support vector machine (SVM). The SVM resulted in a high average classification accuracy of 97.5%, with a high average canker classification accuracy (96.5%). LIBS peak analysis indicated that, of the 11 peaks found in all the samples, the highest intensities were observed at 229.7, 247.9, 280.3, 393.5, 397.0, and 769.8 nm. Future studies using controlled experiments with variable nutrient applications are required for the quantification of foliar nutrients by LIBS-based sensing.
ERIC Educational Resources Information Center
Spearing, Debra; Woehlke, Paula
To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
ERIC Educational Resources Information Center
Duffrin, Christopher; Eakin, Angela; Bertrand, Brenda; Barber-Heidel, Kimberly; Carraway-Stage, Virginia
2011-01-01
The American College Health Association estimated that 31% of college students are overweight or obese. It is important that students have a correct perception of body weight status as extra weight has potential adverse health effects. This study assessed accuracy of perceived weight status versus medical classification among 102 college students.…
Wheat cultivation: Identifying and estimating area by means of LANDSAT data
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Cottrell, D. A.; Tardin, A. T.; Lee, D. C. L.; Shimabukuro, Y. E.; Moreira, M. A.; Delima, A. M.; Maia, F. C. S.
1981-01-01
Automatic classification of LANDSAT data supported by aerial photography for identification and estimation of wheat growing areas was evaluated. Data covering three regions in the State of Rio Grande do Sul, Brazil were analyzed. The average correct classification of IMAGE-100 data was 51.02% and 63.30%, respectively, for the periods of July and of September/October, 1979.
Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe
2018-05-29
Different variables determine the performance of cyclists, which raises the question of how these parameters may help in classifying cyclists by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35) and sprinter (N=20), and to obtain a multivariate model for further classification of cyclists by specialty, based on the selected variables. Seventeen variables were measured at submaximal and maximal load on a Cosmed E 400HK cycle ergometer (Cosmed, Rome, Italy) (initial 100 W with 25 W increments, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contributed to the discriminant power of the model, the power achieved at the anaerobic threshold and the CO2 produced had the biggest impact. The obtained discriminant model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); overall, 92.10% of examinees were correctly classified, which points to the strength of the discriminant model. Respiratory indicators contributed most to the discriminant power of the model, which may significantly benefit training practice and laboratory tests in the future.
Electronic Nose: A Promising Tool For Early Detection Of Alicyclobacillus spp In Soft Drinks
NASA Astrophysics Data System (ADS)
Concina, I.; Bornšek, M.; Baccelliere, S.; Falasconi, M.; Sberveglieri, G.
2009-05-01
In the present work we investigate the potential use of the Electronic Nose EOS835 (SACMI scarl, Italy) for early detection of Alicyclobacillus spp. in two flavoured soft drinks. These bacteria have been acknowledged by producer companies as a major quality control target microorganism because of their ability to survive commercial pasteurization processes and produce taint compounds in the final product. The Electronic Nose was able to distinguish between uncontaminated and contaminated products before the taint metabolites were identifiable by an untrained panel. Classification tests showed an excellent rate of correct classification for both drinks (from 86% up to 100%). High performance liquid chromatography analyses showed no presence of the main metabolite at a level of 200 ppb, thus confirming the ability of the Electronic Nose technology to perform an actual early diagnosis of contamination.
NASA Astrophysics Data System (ADS)
Mlynarczuk, Mariusz; Skiba, Marta
2017-06-01
The correct and consistent identification of the petrographic properties of coal is an important issue for researchers in the fields of mining and geology. As part of the study described in this paper, investigations concerning the application of artificial intelligence methods for the identification of the aforementioned characteristics were carried out. The methods in question were used to identify the maceral groups of coal, i.e. vitrinite, inertinite, and liptinite. Additionally, an attempt was made to identify some non-organic minerals. The analyses were performed using pattern recognition techniques (NN, kNN), as well as artificial neural networks (a multilayer perceptron - MLP). The classification process was carried out using microscopy images of polished sections of coals. A multidimensional feature space was defined, which made it possible to classify the discussed structures automatically, based on the methods of pattern recognition and algorithms of the artificial neural networks. We also assessed how the parameter settings for which the applied methods proved effective influenced the final outcome of the classification procedure. The result of the analyses was a high percentage (over 97%) of correct classifications of maceral groups and mineral components. The paper also discusses an attempt to analyze particular macerals of the inertinite group. It was demonstrated that using artificial neural networks to this end makes it possible to classify the macerals properly in over 91% of cases. Thus, it was proved that artificial intelligence methods can be successfully applied for the identification of selected petrographic features of coal.
NASA Astrophysics Data System (ADS)
Verma, Sneha K.; Chun, Sophia; Liu, Brent J.
2014-03-01
Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it highly affects a patient's lifestyle and well-being. In the current clinical setting paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods were used to verify the algorithm using a pilot study data set consisting of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain locations marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using proton beam radiotherapy for treating SCI-related neuropathic pain as an alternative to invasive surgical lesioning.
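The Bayesian decision approach described above can be illustrated with a naive Bayes sketch over categorical form attributes. The attribute names, values, and records below are hypothetical, and the conditional-independence assumption is a simplification rather than the paper's actual model:

```python
# Hypothetical naive Bayes sketch: classify pain type from categorical form
# attributes by maximizing posterior probability. All attribute names, values,
# and training records are made up for illustration.
from collections import defaultdict
import math

def train_naive_bayes(records):
    """records: list of (attributes_dict, label) pairs. Returns count tables."""
    label_counts = defaultdict(int)
    value_counts = defaultdict(int)   # (label, attribute, value) -> count
    for attrs, label in records:
        label_counts[label] += 1
        for a, v in attrs.items():
            value_counts[(label, a, v)] += 1
    return label_counts, value_counts

def classify(attrs, label_counts, value_counts):
    """Pick the label maximizing log P(label) + sum log P(value | label),
    with add-one smoothing for unseen attribute values."""
    n = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, c in label_counts.items():
        score = math.log(c / n)
        for a, v in attrs.items():
            score += math.log((value_counts[(label, a, v)] + 1) / (c + 2))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training records: (form attributes, expert pain-type label).
records = [
    ({"quality": "burning", "location": "below_injury"}, "neuropathic"),
    ({"quality": "burning", "location": "at_injury"}, "neuropathic"),
    ({"quality": "aching", "location": "shoulder"}, "nociceptive"),
    ({"quality": "aching", "location": "back"}, "nociceptive"),
]
lc, vc = train_naive_bayes(records)
print(classify({"quality": "burning", "location": "below_injury"}, lc, vc))  # -> neuropathic
```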
Shao, Wei; Liu, Mingxia; Zhang, Daoqiang
2016-01-01
The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine; the important structural information of cellular compartments is therefore missed. To deal with this problem and develop more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Contact: dqzhang@nuaa.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer
2014-01-01
The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (worn on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transitions between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear. PMID:25365459
Vaz de Souza, Daniel; Schirru, Elia; Mannocci, Francesco; Foschi, Federico; Patel, Shanon
2017-01-01
The aim of this study was to compare the diagnostic efficacy of 2 cone-beam computed tomographic (CBCT) units with parallax periapical (PA) radiographs for the detection and classification of simulated external cervical resorption (ECR) lesions. Simulated ECR lesions were created on 13 mandibular teeth from 3 human dry mandibles. PA and CBCT scans were taken using 2 different units, Kodak CS9300 (Carestream Health Inc, Rochester, NY) and Morita 3D Accuitomo 80 (J Morita, Kyoto, Japan), before and after the creation of the ECR lesions. The lesions were then classified according to Heithersay's classification and their position on the root surface. Sensitivity, specificity, positive predictive values, negative predictive values, and receiver operating characteristic curves as well as the reproducibility of each technique were determined for diagnostic accuracy. The area under the receiver operating characteristic curve for diagnostic accuracy for PA radiography and the Kodak and Morita CBCT scanners was 0.872, 0.99, and 0.994, respectively. The sensitivity and specificity for both CBCT scanners were significantly better than for PA radiography (P < .001). There was no statistical difference between the sensitivity and specificity of the 2 scanners. The percentage of correct diagnoses according to the tooth type was 87.4% for the Kodak scanner, 88.3% for the Morita scanner, and 48.5% for PA radiography. The ECR lesions were correctly identified according to the tooth surface in 87.8% of Kodak, 89.1% of Morita, and 49.4% of PA cases. The ECR lesions were correctly classified according to the Heithersay classification in 70.5% of Kodak, 69.2% of Morita, and 39.7% of PA cases. This study revealed that both CBCT scanners tested were equally accurate in diagnosing ECR and significantly better than PA radiography. CBCT scans were more likely to correctly categorize ECR according to the Heithersay classification compared with parallax PA radiographs.
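The diagnostic-accuracy measures above derive from a 2x2 contingency table of lesion detection; the counts below are illustrative, not the study's data:

```python
# Sketch of the diagnostic measures above from true/false positive and
# negative counts. The counts are illustrative only.

def diagnostic_measures(tp, fp, tn, fn):
    """Standard 2x2-table measures for a diagnostic test."""
    return {
        "sensitivity": tp / (tp + fn),   # detected lesions among real lesions
        "specificity": tn / (tn + fp),   # correct negatives among lesion-free sites
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(diagnostic_measures(45, 5, 40, 10)["ppv"])  # -> 0.9
```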
Nesteruk, Tomasz; Nesteruk, Marta; Styczyńska, Maria; Barcikowska-Kotowicz, Maria; Walecki, Jerzy
2016-01-01
The aim of the study was to evaluate the diagnostic value of two measurement techniques in patients with cognitive impairment - automated volumetry of the hippocampus, entorhinal cortex, parahippocampal gyrus, posterior cingulate gyrus, cortex of the temporal lobes and corpus callosum, and fractional anisotropy (FA) index measurement of the corpus callosum using diffusion tensor imaging. A total number of 96 patients underwent magnetic resonance imaging study of the brain - 33 healthy controls (HC), 33 patients with diagnosed mild cognitive impairment (MCI) and 30 patients with Alzheimer's disease (AD) in early stage. The severity of the dementia was evaluated with neuropsychological test battery. The volumetric measurements were performed automatically using FreeSurfer imaging software. The measurements of FA index were performed manually using ROI (region of interest) tool. The volumetric measurement of the temporal lobe cortex had the highest correct classification rate (68.7%), whereas the lowest was achieved with FA index measurement of the corpus callosum (51%). The highest sensitivity and specificity in discriminating between the patients with MCI vs. early AD was achieved with the volumetric measurement of the corpus callosum - the values were 73% and 71%, respectively, and the correct classification rate was 72%. The highest sensitivity and specificity in discriminating between HC and the patients with early AD was achieved with the volumetric measurement of the entorhinal cortex - the values were 94% and 100%, respectively, and the correct classification rate was 97%. The highest sensitivity and specificity in discriminating between HC and the patients with MCI was achieved with the volumetric measurement of the temporal lobe cortex - the values were 90% and 93%, respectively, and the correct classification rate was 92%. The diagnostic value varied depending on the measurement technique. 
The volumetric measurement of the atrophy proved to be the best imaging biomarker, which allowed the distinction between the groups of patients. The volumetric assessment of the corpus callosum proved to be a useful tool in discriminating between the patients with MCI vs. early AD.
Observation versus classification in supervised category learning.
Levering, Kimery R; Kurtz, Kenneth J
2015-02-01
The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from the two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.
Low-cost real-time automatic wheel classification system
NASA Astrophysics Data System (ADS)
Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria
1992-11-01
This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which had previously been performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates of up to 30 wheels per minute, regardless of rotational orientation in the camera's field of view. Correct classification can be achieved even if a portion of the wheel is blocked from the camera. Significant cost savings have been achieved through a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
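The statistical pattern recognition step can be sketched as a minimum-distance (nearest-centroid) classifier over rotation-invariant features, which also makes adding a new wheel design as simple as adding a centroid. The feature names and values below are hypothetical, not the system's actual feature set:

```python
# Hypothetical nearest-centroid sketch of the statistical classification step.
# Feature names and centroid values are made up; in practice each centroid
# would be the mean feature vector of training images for one wheel style.
import math

CENTROIDS = {
    # style -> (hub diameter ratio, spoke count, vent area ratio)
    "style_A": (0.42, 5.0, 0.10),
    "style_B": (0.55, 6.0, 0.22),
}

def classify_wheel(features):
    """Assign the style whose centroid is nearest in feature space."""
    def dist(style):
        return math.dist(features, CENTROIDS[style])
    return min(CENTROIDS, key=dist)

print(classify_wheel((0.44, 5.0, 0.12)))  # -> style_A
```

Because the features are rotation invariant, the classifier is unaffected by the wheel's orientation on the conveyor.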
Automated target classification in high resolution dual frequency sonar imagery
NASA Astrophysics Data System (ADS)
Aridgides, Tom; Fernández, Manuel
2007-04-01
An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.
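The Box-Cox and LLRT fusion steps can be sketched as follows, assuming equal-variance Gaussian target/clutter models in the transformed feature space; the transform exponent, means, variance, and threshold below are illustrative assumptions, not the paper's fitted values:

```python
# Sketch of Box-Cox feature transformation followed by log-likelihood-ratio-test
# (LLRT) fusion of per-string classification confidences. All distribution
# parameters are illustrative assumptions.
import math

def box_cox(x, lam):
    """Box-Cox power transform (requires x > 0)."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def gaussian_llr(z, mu_target, mu_clutter, sigma):
    """Log-likelihood ratio of target vs clutter, equal-variance Gaussians."""
    lt = -((z - mu_target) ** 2) / (2 * sigma ** 2)
    lc = -((z - mu_clutter) ** 2) / (2 * sigma ** 2)
    return lt - lc

def fuse(confidences, lam=2.0, mu_target=-0.1, mu_clutter=-0.45,
         sigma=0.5, threshold=0.0):
    """Sum per-string LLRs of Box-Cox-transformed confidences;
    declare a target when the fused statistic exceeds the threshold."""
    total = sum(gaussian_llr(box_cox(c, lam), mu_target, mu_clutter, sigma)
                for c in confidences)
    return total > threshold
```

With these assumed parameters, two high confidences fuse to a target call (`fuse([0.9, 0.95])` is True) while two low ones do not (`fuse([0.2, 0.3])` is False).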
Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang
2013-04-01
The accuracy of intertrochanteric fracture classification is important; indeed, patient outcomes depend on it. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images. Differences in the length of surgery were then evaluated based on the two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and the probabilities of misclassification (overestimates and underestimates) were determined. The impact of misclassification on the length of surgery was also evaluated. In total, 370 patients and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of these, 214 and 156 patients were treated with intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8 %, and the rates for the A1, A2 and A3 classifications were 80.0, 85.7 and 82.4 %, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1 %, P = 0.173). The overall rates of overestimates and underestimates were significantly different (5 vs 11.25 %, P = 0.041). Subtracting the rate of overestimates from that of underestimates had a positive correlation with prolonged surgery and showed a significant difference with intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially for performance of an intramedullary fixation.
Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.
Finke, Mareike; Billinger, Martin; Büchner, Andreas
Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine whether CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions and classification of the EEG data was performed with shrinkage linear discriminant analysis. The impact of CI artifact removal on classification performance and the possibility of reusing a trained classifier in future sessions were also evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier caused only a small loss in classification performance. Our data provide first evidence that the EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, the classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact-correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
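Shrinkage LDA regularizes the pooled covariance toward a scaled identity so the estimate stays invertible when training trials are few relative to the number of EEG features. A minimal two-class sketch with a fixed shrinkage strength (the analytically estimated optimal strength used in practice is not reproduced here):

```python
# Sketch of two-class shrinkage LDA for single-trial feature vectors.
# The shrinkage strength lam is a fixed assumption, not the analytic optimum.
import numpy as np

def fit_shrinkage_lda(X1, X2, lam=0.1):
    """X1, X2: (trials x features) arrays for the two classes."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    pooled = np.cov(np.vstack([X1 - mu1, X2 - mu2]).T)   # pooled covariance
    nu = np.trace(pooled) / pooled.shape[0]              # average eigenvalue
    sigma = (1 - lam) * pooled + lam * nu * np.eye(pooled.shape[0])
    w = np.linalg.solve(sigma, mu1 - mu2)                # projection weights
    b = -w @ (mu1 + mu2) / 2                             # boundary midway between means
    return w, b

def predict(X, w, b):
    """1 -> class of X1, 0 -> class of X2."""
    return (X @ w + b > 0).astype(int)
```

Usage on toy symmetric data: trials near the class-1 mean project positive, trials near the class-2 mean project negative.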
NASA Astrophysics Data System (ADS)
Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.
2014-10-01
Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improves on correction based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variant of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes.
We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
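The C-correction logic the abstract adapts can be sketched in its classic form. This is a minimal illustration of the standard C-correction (not the paper's SCnS-specific variant, whose details are not given here); the function names and the least-squares estimate of the C parameter as the intercept/slope ratio are the usual textbook formulation:

```python
def c_parameter(radiances, cos_is):
    """Estimate c = b/m from the linear regression L = m*cos(i) + b,
    where cos(i) is the cosine of the local illumination angle."""
    n = len(radiances)
    mean_x = sum(cos_is) / n
    mean_y = sum(radiances) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(cos_is, radiances)) \
        / sum((x - mean_x) ** 2 for x in cos_is)
    b = mean_y - m * mean_x
    return b / m

def c_correction(radiance, cos_i, cos_sz, c):
    """Classic C-correction: L_corr = L * (cos(sz) + c) / (cos(i) + c),
    with sz the solar zenith angle. For a flat pixel (cos(i) == cos(sz))
    the correction is the identity."""
    return radiance * (cos_sz + c) / (cos_i + c)
```

The additive C parameter moderates over-correction on weakly illuminated slopes, which is the behaviour the SCnS + C variant tunes with its radiance partitioning.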
Pathological classification of equine recurrent laryngeal neuropathy.
Draper, Alexandra C E; Piercy, Richard J
2018-04-24
Recurrent Laryngeal Neuropathy (RLN) is a highly prevalent and predominantly left-sided, degenerative disorder of the recurrent laryngeal nerves (RLn) of tall horses that causes inspiratory stridor during exercise because of intrinsic laryngeal muscle paresis. The associated laryngeal dysfunction and exercise intolerance in athletic horses commonly lead to surgical intervention, retirement or euthanasia, with associated financial and welfare implications. Despite speculation, there is a lack of consensus and conflicting evidence supporting the primary classification of RLN as either a distal ("dying back") axonopathy or a primary myelinopathy, and as either a (bilateral) mononeuropathy or a polyneuropathy; this uncertainty hinders etiological and pathophysiological research. In this review, we discuss the neuropathological changes and electrophysiological deficits reported in the RLn of affected horses, and the evidence for correct classification of the disorder. In so doing, we summarize and reveal the limitations of much historical research on RLN and propose future directions that might best help identify the etiology and pathophysiology of this enigmatic disorder. Copyright © 2018 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings at ultra-high spatial resolution is of major importance for various engineering and smart-city applications. To this end, in this paper, a model-based building detection technique has been developed that is able to extract and reconstruct buildings from UAV aerial imagery and low-cost imaging sensors. In particular, the developed approach computes, through advanced structure from motion, bundle adjustment and dense image matching, a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by significant geometric distortions and a fish-eye effect. An unsupervised multi-region graph-cut segmentation and a rule-based classification are responsible for delivering the initial multi-class classification map. The DTM is then calculated based on inpainting and mathematical morphology processes. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and the scene buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating detection rates at object level of 88% regarding correctness and above 75% regarding detection completeness.
Through thick and thin: quantitative classification of photometric observing conditions on Paranal
NASA Astrophysics Data System (ADS)
Kerber, Florian; Querel, Richard R.; Neureiter, Bianca; Hanuschik, Reinhard
2016-07-01
A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer is used to monitor sky conditions over ESO's Paranal observatory. It provides measurements of precipitable water vapour (PWV) at 183 GHz, which are being used in Service Mode for scheduling observations that can take advantage of favourable conditions for infrared (IR) observations. The instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. It is capable of detecting cold and thin, even sub-visual, cirrus clouds. We present a diagnostic diagram that, based on a sophisticated time series analysis of these IR sky brightness data, allows for the automatic and quantitative classification of photometric observing conditions over Paranal. The method is highly sensitive to the presence of even very thin clouds but robust against other causes of sky brightness variations. The diagram has been validated across the complete range of conditions that occur over Paranal and we find that the automated process provides correct classification at the 95% level. We plan to develop our method into an operational tool for routine use in support of ESO Science Operations.
Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆
Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh
2011-01-01
Machine learning algorithms have provided core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique of "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing the expected cross-validation result alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
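The core idea of metamorphic testing is to check relations between outputs of related inputs instead of checking a single output against an (unavailable) oracle. As an illustrative sketch, not the paper's actual relations or framework, one simple metamorphic relation for a 1-nearest-neighbour classifier is that permuting the training data must not change any prediction:

```python
import random

def nn_predict(train, query):
    """1-nearest-neighbour prediction; train is a list of (features, label)
    pairs. (Assumes no exact distance ties, which could break determinism.)"""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: dist2(t[0], query))[1]

def check_permutation_mr(train, queries, trials=10):
    """Metamorphic relation: shuffling the training set must leave every
    prediction unchanged. A violation signals a fault in the classifier."""
    baseline = [nn_predict(train, q) for q in queries]
    for _ in range(trials):
        shuffled = train[:]
        random.shuffle(shuffled)
        if [nn_predict(shuffled, q) for q in queries] != baseline:
            return False
    return True
```

A faulty implementation that, say, depended on input order would violate the relation even though no oracle for the "correct" label was ever consulted.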
Structural Analysis of Biodiversity
Sirovich, Lawrence; Stoeckle, Mark Y.; Zhang, Yu
2010-01-01
Large, recently-available genomic databases cover a wide range of life forms, suggesting opportunity for insights into the genetic structure of biodiversity. In this study we refine our recently-described technique using indicator vectors to analyze and visualize nucleotide sequences. The indicator vector approach generates correlation matrices, dubbed Klee diagrams, which represent a novel way of assembling and viewing large genomic datasets. To explore its potential utility, here we apply the improved algorithm to a collection of almost 17000 DNA barcode sequences covering 12 widely-separated animal taxa, demonstrating that indicator vectors for classification gave correct assignment in all 11000 test cases. Indicator vector analysis revealed discontinuities corresponding to species- and higher-level taxonomic divisions, suggesting an efficient approach to classification of organisms from poorly-studied groups. As compared to standard distance metrics, indicator vectors preserve diagnostic character probabilities, enable automated classification of test sequences, and generate high-information density single-page displays. These results support application of indicator vectors for comparative analysis of large nucleotide data sets and raise the prospect of gaining insight into broad-scale patterns in the genetic structure of biodiversity. PMID:20195371
Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images
Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle
2008-01-01
The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m size located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86/m2, low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e. the ground mask was analysed against the canopy mask. A manual classification of aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees to tree species. PMID:27873799
Age and gender classification of Merriam's turkeys from foot measurements
Mark A. Rumble; Todd R. Mills; Brian F. Wakeling; Richard W. Hoffman
1996-01-01
Wild turkey sex and age information is needed to define population structure but is difficult to obtain. We classified age and gender of Merriam's turkeys (Meleagris gallopavo merriami) accurately based on measurements of two foot characteristics. Gender of birds was correctly classified 93% of the time from measurements of middle toe pads; correct...
Evaluation of thyroid tissue by Raman spectroscopy
NASA Astrophysics Data System (ADS)
Teixeira, C. S. B.; Bitar, R. A.; Santos, A. B. O.; Kulcsar, M. A. V.; Friguglietti, C. U. M.; Martinho, H. S.; da Costa, R. B.; Martin, A. A.
2010-02-01
The thyroid gland is a small gland in the neck consisting of two lobes connected by an isthmus. Its main function is to produce the hormones thyroxine (T4), triiodothyronine (T3) and calcitonin. Thyroid disorders can disturb the production of these hormones, which will affect numerous processes within the body such as regulating metabolism and increasing utilization of cholesterol, fats, proteins, and carbohydrates. The gland itself can also be injured; for example by neoplasias, which are considered the most important injuries, cause damage to the gland and are difficult to diagnose. There are several types of thyroid cancer: Papillary, Follicular, Medullary, and Anaplastic. The occurrence rate is in general between 4 and 7% and is on the increase (30%), probably due to new technology that is able to find small thyroid cancers that may not have been found previously. The most common methods used for thyroid diagnosis are anamnesis, ultrasonography, and laboratory exams (Fine Needle Aspiration Biopsy, FNAB). However, the sensitivity of those tests is rather poor, with a high rate of false-negative results, so there is an urgent need to develop new diagnostic techniques. Raman spectroscopy has been presented as a valuable tool for cancer diagnosis in many different tissues. In this work, 27 fragments of the thyroid were collected from 18 patients, comprising the following histologic groups: goitre adjacent tissue, goitre nodular tissue, follicular adenoma, follicular carcinoma, and papillary carcinoma. Spectral collection was done with a commercial FT-Raman spectrometer (Bruker RFS100/S) using 1064 nm laser excitation and a Ge detector. Principal Component Analysis, Cluster Analysis, and Linear Discriminant Analysis with cross-validation were applied as spectral classification algorithms. Comparing the goitre adjacent tissue with the goitre nodular region, an index of 58.3% of correct classification was obtained. 
Between goitre (nodular region and adjacent tissue) and papillary carcinoma, the index of correct classification was 64.9%, and for the classification between benign tissues (goitre and follicular adenoma) and malignant tissues (papillary and follicular carcinomas) the index was 72.5%.
Learning for VMM + WTA Embedded Classifiers
2016-03-31
enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after...measured on our SoC FPAA IC. The test input is composed of signals from urban environment for 3 objects (generator, idle car, and idle truck...classifier results from a rural truck data set, an urban generator set, and urban idle car dataset. Solid lines represent our extracted background
Fourier-based classification of protein secondary structures.
Shu, Jian-Jun; Yong, Kian Yan
2017-04-15
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
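The "signal-plotting" idea, mapping a sequence to a numeric hydrophobicity profile and examining its Fourier spectrum, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual indices: the Kyte-Doolittle hydropathy scale is used here as a stand-in hydrophobicity index, and the helix-periodicity score is one conventional spectral feature (alpha-helices show a period of roughly 3.6 residues in hydrophobicity):

```python
import cmath

# Kyte-Doolittle hydropathy indices (standard published values),
# used here as an assumed stand-in for the paper's hydrophobicity scale.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydropathy_spectrum(seq):
    """DFT power spectrum of the mean-centred hydrophobicity profile."""
    x = [KD[a] for a in seq]
    mean = sum(x) / len(x)
    x = [v - mean for v in x]
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def helix_periodicity_score(seq):
    """Fraction of spectral power in the DFT bin closest to the
    alpha-helical period of ~3.6 residues (an illustrative feature)."""
    p = hydropathy_spectrum(seq)
    n = len(p)
    k_helix = round(n / 3.6)
    return p[k_helix] / (sum(p[1:n // 2 + 1]) or 1.0)
```

A strictly alternating hydrophobic/hydrophilic sequence, for example, concentrates its power at the period-2 bin rather than the helical one, which is the kind of spectral signature such indices exploit.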
SVM based colon polyps classifier in a wireless active stereo endoscope.
Ayoub, J; Granado, B; Mhanna, Y; Romain, O
2010-01-01
This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study is related to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized technically to improve its classification task by differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging and show a correct classification rate of approximately 97%. The work contains detailed statistics about the detection rate and the computing complexity. Inspired by intensity histograms, the work shows a new approach that extracts a set of features based on the depth histogram and combines stereo measurement with SVM classifiers to correctly classify benign and malignant polyps.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for detecting pollen-bearing honey bees in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting honey bees, with remote pollination monitoring as a final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly bearing in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, while transferring the obtained parameters of the pollination process is much easier.
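A nearest mean (minimum-distance-to-centroid) classifier, as used above, is simple enough to sketch in full. This is a generic illustration, not the authors' code; the two-element feature vectors stand in for the paper's colour-variance and eccentricity descriptor:

```python
def train_nearest_mean(samples):
    """samples: dict mapping class label -> list of feature vectors
    (e.g. [colour variance, eccentricity]). Returns per-class means."""
    means = {}
    for label, vecs in samples.items():
        n = len(vecs)
        means[label] = [sum(v[i] for v in vecs) / n
                        for i in range(len(vecs[0]))]
    return means

def predict_nearest_mean(means, x):
    """Assign x to the class whose mean vector is closest (Euclidean)."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(means, key=lambda lab: dist2(means[lab], x))
```

Its appeal on an embedded system co-located with the hive is that training reduces to averaging and prediction to a couple of distance computations per frame.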
NASA Astrophysics Data System (ADS)
Gurbanov, Rafig; Gozen, Ayse Gul; Severcan, Feride
2018-01-01
Rapid, cost-effective, sensitive and accurate methodologies to classify bacteria are still in the process of development. The major drawbacks of standard microbiological, molecular and immunological techniques call for the possible use of infrared (IR) spectroscopy based supervised chemometric techniques. Previous applications of IR based chemometric methods have demonstrated outstanding findings in the classification of bacteria. Therefore, we have exploited an IR spectroscopy based supervised chemometric method, namely the Soft Independent Modeling of Class Analogy (SIMCA) technique, for the first time to classify heavy metal-exposed bacteria, with a view to selecting suitable bacteria for environmental cleanup applications. Herein, we present the powerful differentiation and classification of laboratory strains (Escherichia coli and Staphylococcus aureus) and environmental isolates (Gordonia sp. and Microbacterium oxydans) of bacteria exposed to growth-inhibitory concentrations of silver (Ag), cadmium (Cd) and lead (Pb). Our results demonstrated that SIMCA was able to differentiate all heavy metal-exposed and control groups from each other at the 95% confidence level. Correct identification of randomly chosen test samples in their corresponding groups and high model distances between the classes were also achieved. We report, for the first time, the success of IR spectroscopy coupled with the supervised chemometric technique SIMCA in the classification of different bacteria under a given treatment.
Haylen, Bernard T; Lee, Joseph; Maher, Chris; Deprest, Jan; Freeman, Robert
2014-06-01
Results of interobserver reliability studies for the International Urogynecological Association-International Continence Society (IUGA-ICS) Complication Classification coding can be greatly influenced by study design factors such as participant instruction, motivation, and test-question clarity. We attempted to optimize these factors. After a 15-min instructional lecture with eight clinical case examples (including images) and with classification/coding charts available, those clinicians attending an IUGA Surgical Complications workshop were presented with eight similar-style test cases over 10 min and asked to code them using the Category, Time and Site classification. Answers were compared to predetermined correct codes obtained by five instigators of the IUGA-ICS prostheses and grafts complications classification. Prelecture and postquiz participant confidence levels using a five-step Likert scale were assessed. Complete sets of answers to the questions (24 codings) were provided by 34 respondents, only three of whom reported prior use of the charts. Average score [n (%)] out of eight, as well as median score (range) for each coding category were: (i) Category: 7.3 (91 %); 7 (4-8); (ii) Time: 7.8 (98 %); 7 (6-8); (iii) Site: 7.2 (90 %); 7 (5-8). Overall, the equivalent calculations (out of 24) were 22.3 (93 %) and 22 (18-24). Mean prelecture confidence was 1.37 (out of 5), rising to 3.85 postquiz. Urogynecologists had the highest correlation with correct coding, followed closely by fellows and general gynecologists. Optimizing training and study design can lead to excellent results for interobserver reliability of the IUGA-ICS Complication Classification coding, with increased participant confidence in complication-coding ability.
Höller, Yvonne; Bergmann, Jürgen; Thomschewski, Aljoscha; Kronbichler, Martin; Höller, Peter; Crone, Julia S.; Schmid, Elisabeth V.; Butz, Kevin; Nardone, Raffaele; Trinka, Eugen
2013-01-01
Current research aims at identifying voluntary brain activation in patients who are behaviorally diagnosed as being unconscious, but are able to perform commands by modulating their brain activity patterns. This involves machine learning techniques and feature extraction methods such as those applied in brain computer interfaces. In this study, we try to answer the question of whether features/classification methods which show advantages in healthy participants are also accurate when applied to data of patients with disorders of consciousness. A sample of healthy participants (N = 22), patients in a minimally conscious state (MCS; N = 5), and patients with unresponsive wakefulness syndrome (UWS; N = 9) was examined with a motor imagery task which involved imagery of moving both hands and an instruction to hold both hands firm. We extracted a set of 20 features from the electroencephalogram and used linear discriminant analysis, k-nearest neighbor classification, and support vector machines (SVM) as classification methods. In healthy participants, the best classification accuracies were seen with coherences (mean = .79; range = .53−.94) and power spectra (mean = .69; range = .40−.85). The coherence patterns in healthy participants did not match the expectation of a centrally modulated μ-rhythm. Instead, coherence involved mainly frontal regions. In healthy participants, the best classification tool was SVM. Five patients had at least one feature-classifier outcome with p < 0.05 (none of which were coherence or power spectra), though none remained significant after false-discovery-rate correction for multiple comparisons. The present work suggests the use of coherences in patients with disorders of consciousness because they show high reliability among healthy subjects and patient groups. However, feature extraction and classification is a challenging task in unresponsive patients because there is no ground truth to validate the results. PMID:24282545
Objective automated quantification of fluorescence signal in histological sections of rat lens.
Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina
2017-08-01
Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify expression of fluorescent signal of biomarkers in each nucleus and cytoplasm of lens epithelial cells in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was thereafter optimized against visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. Time consumed by the automatic algorithm and visual classification of cells was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of expression of fluorescent signal with an accuracy comparable with the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
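The labelled/not-labelled decision rule, comparing a cell's signal with its local background, can be sketched as a background-relative threshold. This is a generic illustration under stated assumptions: the paper's optimized rule is not reproduced here, and the factor k is an arbitrary illustrative threshold, not a value from the study:

```python
def classify_labelled(cell_intensities, background_intensities, k=2.0):
    """Call a cell 'labelled' when its mean fluorescence exceeds the
    local background mean by k background standard deviations.
    (k is illustrative; the paper tuned its rule against visual scoring.)"""
    n_b = len(background_intensities)
    mu_b = sum(background_intensities) / n_b
    var_b = sum((v - mu_b) ** 2 for v in background_intensities) / n_b
    sd_b = var_b ** 0.5
    mu_c = sum(cell_intensities) / len(cell_intensities)
    return mu_c > mu_b + k * sd_b
```

Using a per-cell local background, rather than one global threshold, makes the rule robust to uneven illumination across the section, which is why agreement with human scorers is a sensible evaluation metric.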
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
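The species identification model above is a quadratic discriminant function analysis: each species gets its own Gaussian model of the pulse parameters, and a call is assigned to the species with the highest likelihood. As a hedged sketch (the study used full-covariance QDA on many pulse parameters; this simplified version assumes independent features, i.e. diagonal covariances):

```python
import math

def fit_gaussians(samples):
    """Per-class, per-feature Gaussian fit.
    samples: dict label -> list of feature vectors."""
    model = {}
    for label, vecs in samples.items():
        d, n = len(vecs[0]), len(vecs)
        mus = [sum(v[i] for v in vecs) / n for i in range(d)]
        vars_ = [sum((v[i] - mus[i]) ** 2 for v in vecs) / n
                 for i in range(d)]
        model[label] = (mus, vars_)
    return model

def qda_predict(model, x):
    """Assign x to the class maximising the Gaussian log-likelihood;
    per-class variances make the decision boundary quadratic."""
    def loglik(mus, vars_):
        return sum(-0.5 * math.log(2 * math.pi * s)
                   - (xi - m) ** 2 / (2 * s)
                   for xi, m, s in zip(x, mus, vars_))
    return max(model, key=lambda lab: loglik(*model[lab]))
```

Because each class keeps its own variances, a species with highly variable pulse parameters is not penalised the way it would be under a shared-covariance (linear) discriminant.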
1972-01-01
three species of Pseudoficalbia from New Guinea. While he was correct in his assignment of species, the characters, though they will separate a...and African material, I have made no attempt to correct these errors, except in the Southeast Asian fauna. In a few cases, I have brought them to...current practice of lumping everything into one supposedly homogeneous genus." While the statement may ultimately prove correct, I prefer to consider at
Hunskaar, Steinar
2011-01-01
Background The use of nurses for telephone-based triage in out-of-hours services is increasing in several countries. No investigations have been carried out in Norway into the quality of decisions made by nurses regarding our priority degree system. There are three levels: acute, urgent and non-urgent. Methods Nurses working in seven casualty clinics in out-of-hours districts in Norway (The Watchtowers) were all invited to participate in a study to assess priority grade on 20 written medical scenarios validated by an expert group. 83 nurses (response rate 76%) participated in the study. A one-out-of-five sample of the nurses assessed the same written cases after 3 months (n=18, response rate 90%) as a test–retest assessment. Results Among the acute, urgent and non-urgent scenarios, 82%, 74% and 81% were correctly classified according to national guidelines. There were significant differences in the proportion of correct classifications among the casualty clinics, but neither employment percentage nor profession or work experience affected the triage decision. The mean intraobserver variability measured by the Cohen kappa was 0.61 (CI 0.52 to 0.70), and there were significant differences in kappa with employment percentage. Casualty clinics and work experience did not affect intrarater agreement. Conclusion Correct classification of acute and non-urgent cases among nurses was quite high. Work experience and employment percentage did not affect triage decision. The intrarater agreement was good and about the same as in previous studies performed in other countries. Kappa increased significantly with increasing employment percentage. PMID:21262792
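The test-retest agreement statistic reported above, Cohen's kappa, corrects raw agreement for agreement expected by chance from each rating round's marginal label frequencies. A minimal implementation of the standard formula (illustrative labels; not the study's data):

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two rating rounds over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the marginal label frequencies."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    labels = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
              for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.61, as found here, sits in the range conventionally read as good (substantial) agreement, well above the 0 expected from chance-level consistency.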
Hansen, Elisabeth Holm; Hunskaar, Steinar
2011-05-01
The use of nurses for telephone-based triage in out-of-hours services is increasing in several countries. No investigations have been carried out in Norway into the quality of decisions made by nurses regarding our priority degree system. There are three levels: acute, urgent and non-urgent. Nurses working in seven casualty clinics in out-of-hours districts in Norway (The Watchtowers) were all invited to participate in a study to assess priority grade on 20 written medical scenarios validated by an expert group. 83 nurses (response rate 76%) participated in the study. A one-out-of-five sample of the nurses assessed the same written cases after 3 months (n = 18, response rate 90%) as a test-retest assessment. Among the acute, urgent and non-urgent scenarios, 82%, 74% and 81% were correctly classified according to national guidelines. There were significant differences in the proportion of correct classifications among the casualty clinics, but neither employment percentage nor profession or work experience affected the triage decision. The mean intraobserver variability measured by the Cohen kappa was 0.61 (CI 0.52 to 0.70), and there were significant differences in kappa with employment percentage. Casualty clinics and work experience did not affect intrarater agreement. Correct classification of acute and non-urgent cases among nurses was quite high. Work experience and employment percentage did not affect triage decision. The intrarater agreement was good and about the same as in previous studies performed in other countries. Kappa increased significantly with increasing employment percentage.
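The test-retest agreement reported above is Cohen's kappa on paired triage ratings. A minimal stdlib sketch (the function name and data layout are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two sets of paired categorical ratings,
    e.g. the same nurse's triage grades at test and retest."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # observed agreement: fraction of identical paired ratings
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement: product of marginal category frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```

With perfect agreement the function returns 1.0; values around 0.61, as in the study, indicate good but imperfect intrarater consistency.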
Accuracy of dementia diagnosis: a direct comparison between radiologists and a computerized method.
Klöppel, Stefan; Stonnington, Cynthia M; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D; Mader, Irina; Mitchell, L Anne; Patel, Ameet C; Roberts, Catherine C; Fox, Nick C; Jack, Clifford R; Ashburner, John; Frackowiak, Richard S J
2008-11-01
There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65-95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to a radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparably to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice.
Accuracy of dementia diagnosis—a direct comparison between radiologists and a computerized method
Stonnington, Cynthia M.; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D.; Mader, Irina; Mitchell, L. Anne; Patel, Ameet C.; Roberts, Catherine C.; Fox, Nick C.; Jack, Clifford R.; Ashburner, John; Frackowiak, Richard S. J.
2008-01-01
There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65–95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to a radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparably to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice. PMID:18835868
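The sensitivity/specificity pairs quoted in the two records above come from binary confusion counts. A hedged stdlib sketch of that bookkeeping (label names are placeholders, not the study's coding):

```python
def binary_diagnostic_stats(y_true, y_pred, positive="AD"):
    """Sensitivity, specificity and accuracy for a binary diagnosis,
    computed from paired true and predicted labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives cleared
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy
```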
NASA Technical Reports Server (NTRS)
Slater, P. N. (Principal Investigator)
1980-01-01
The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.
Probability interpretations of intraclass reliabilities.
Ellis, Jules L
2013-11-20
Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.
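The trade-off between informativeness and correctness described above can be made concrete under a simple normal model: with reliability R, the standardized observed score X correlates sqrt(R) with the true score T, so P(X > z) is the informativeness of an "significantly above the mean" classification and P(T > 0 | X > z) its correctness. The following sketch works under these model assumptions, which are ours and not necessarily the article's exact formulation:

```python
import math

def _pdf(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def _cdf(x):   # standard normal distribution function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def classification_probabilities(reliability, z=1.645, steps=20000, upper=10.0):
    """Informativeness P(X > z) and correctness P(T > 0 | X > z) when the
    observed score X and true score T are standard normal with
    correlation sqrt(reliability); requires 0 < reliability < 1."""
    r = math.sqrt(reliability)
    informativeness = 1 - _cdf(z)
    # P(T > 0 and X > z) by midpoint quadrature over x in (z, upper),
    # using T | X=x ~ Normal(r*x, 1 - reliability)
    h = (upper - z) / steps
    joint = 0.0
    for i in range(steps):
        x = z + (i + 0.5) * h
        joint += _pdf(x) * _cdf(r * x / math.sqrt(1 - reliability)) * h
    return informativeness, joint / informativeness
```

At z = 1.645 roughly 5% of organizations are flagged regardless of reliability, while the probability that a flagged organization truly lies above the mean rises steeply with reliability.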
Metabolomics for organic food authentication: Results from a long-term field study in carrots.
Cubero-Leon, Elena; De Rudder, Olivier; Maquet, Alain
2018-01-15
Increasing demand for organic products and their premium prices make them an attractive target for fraudulent malpractices. In this study, a large-scale comparative metabolomics approach was applied to investigate the effect of the agronomic production system on the metabolite composition of carrots and to build statistical models for prediction purposes. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) was applied successfully to predict the origin of the agricultural system of the harvested carrots on the basis of features determined by liquid chromatography-mass spectrometry. When the training set used to build the OPLS-DA models contained samples representative of each harvest year, the models were able to classify unknown samples correctly (100% correct classification). If a harvest year was left out of the training sets and used for predictions, the correct classification rates achieved ranged from 76% to 100%. The results therefore highlight the potential of metabolomic fingerprinting for organic food authentication purposes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment
NASA Astrophysics Data System (ADS)
Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner
2017-04-01
Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling out the gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: A geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area; but in this work, we have used the classification methods to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. 
Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and prediction of morphodynamics in a natural coastal system in response to climate change" (Steno Grant no. 10-081102) and by the Geocenter Denmark through the project "Closing the gap! - Coherent land-water environmental mapping (LAWA)" (Grant no. 4-2015).
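Refraction correction of green-laser returns, one of the processing steps named in the abstract above, can be sketched with a flat-water-surface model: bend the ray at the surface by Snell's law and rescale the in-water range for the slower speed of light. This is an illustrative simplification, not the authors' production procedure:

```python
import math

N_WATER = 1.33  # refractive index of water (an assumed nominal value)

def refraction_corrected_depth(slant_range_raw, incidence_deg):
    """Vertical depth below a flat water surface from an uncorrected
    in-water slant range (measured as if light travelled at air speed)
    and the laser's incidence angle in air, in degrees."""
    theta_air = math.radians(incidence_deg)
    theta_water = math.asin(math.sin(theta_air) / N_WATER)  # Snell's law
    slant_in_water = slant_range_raw / N_WATER              # travel-time rescaling
    return slant_in_water * math.cos(theta_water)           # project to vertical
```

At nadir the correction reduces to dividing the raw range by 1.33; off-nadir returns are additionally foreshortened by the refracted angle.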
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
Horstick, Olaf; Martinez, Eric; Guzman, Maria Guadalupe; Martin, Jose Luis San; Ranzinger, Silvia Runge
2015-01-01
Introduction: In 2009, the new World Health Organization (WHO) dengue case classification – dengue/severe dengue (D/SD) – was introduced, replacing the 1997 WHO dengue case classification: dengue fever/dengue haemorrhagic fever/dengue shock syndrome (DF/DHF/DSS). Methods: A 2-day expert consensus meeting in La Habana/Cuba aimed to (1) share the experiences from Pan American Health Organization (PAHO) member states when applying D/SD, (2) present national and local data using D/SD, and (3) agree, on the basis of the presented evidence, on a list of recommendations for or against the use of D/SD for PAHO, and also globally. Results: Eight key questions were discussed, concluding: (1) D/SD is useful for describing disease progression because it considers the dynamic nature of the disease, (2) D/SD helps define dengue cases correctly for clinical studies, because it defines disease severity more precisely and allows the progression of cases to be evaluated dynamically, (3) D/SD correctly describes all clinical forms of severe dengue; further standards need to be developed regionally, especially related to severe organ involvement, (4) D/SD allows for pathophysiological research identifying – in a sequential manner – the clinical manifestations of dengue related to pathophysiological events, (5) the warning signs help identify early those cases at risk of shock (children and adults); the pathophysiology of the warning signs deserves further study, (6) D/SD helps in treating individual dengue cases and also in reorganizing health-care services for outbreak management, (7) D/SD helps in diagnosing dengue, in presumptive diagnosis and follow-up of the disease, because of its high sensitivity and high negative predictive value (NPV), and (8) there is currently no update of the International Classification of Diseases, 10th revision (ICD10), to include the new classification of dengue (D/SD); therefore, there is not yet enough experience of epidemiological reporting.
Once D/SD has been implemented in epidemiological surveillance, it makes it possible to (1) identify the severity of dengue cases in real time, for any decision-making on actions, (2) measure and compare morbidity and mortality in countries, and also globally, and (3) trigger contingency plans early, based not only on the number of reported cases but also on the reported severity of cases. Conclusion: The expert panel recommends (1) updating ICD10, (2) including D/SD in country epidemiological reports, and (3) implementing studies improving the sensitivity/specificity of the dengue case definition. PMID:25630344
Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2
NASA Technical Reports Server (NTRS)
Zieren, Jorg; Kraiss, Karl-Friedrich
2007-01-01
This work presents a hierarchically structured approach to the non-intrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows ambiguities to be handled and tracking errors to be corrected automatically. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. 152 signs from German sign language were recognized with an accuracy of 97.6%.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
Cochrane, Guy R.; Lafferty, Kevin D.
2002-01-01
Highly reflective seafloor features imaged by sidescan sonar in nearshore waters off the Northern Channel Islands (California, USA) have been observed in subsequent submersible dives to be areas of thin sand covering bedrock. Adjacent areas of rocky seafloor, suitable as habitat for endangered species of abalone and rockfish, and encrusting organisms, cannot be differentiated from the areas of thin sand on the basis of acoustic backscatter (i.e. grey level) alone. We found second-order textural analysis of sidescan sonar data useful to differentiate the bottom types where data is not degraded by near-range distortion (caused by slant-range and ground-range corrections), and where data is not degraded by far-range signal attenuation. Hand editing based on submersible observations is necessary to completely convert the sidescan sonar image to a bottom character classification map suitable for habitat mapping.
Zapotoczny, Piotr; Kozera, Wojciech; Karpiesiuk, Krzysztof; Pawłowski, Rodian
2014-08-01
The effect of management systems on selected physical properties and chemical composition of m. longissimus dorsi was studied in pigs. Muscle texture parameters were determined by computer-assisted image analysis, and the color of muscle samples was evaluated using a spectrophotometer. Highly significant correlations were observed between chemical composition and selected texture variables in the analyzed images. Chemical composition was not correlated with color or spectral distribution. Depending on the applied classification methods and the groups of variables included in the classification model, the experimental groups were identified correctly in 35-95% of cases. No significant differences in the chemical composition of m. longissimus dorsi were observed between experimental groups. Significant differences were noted in color lightness (L*) and redness (a*). Copyright © 2014 Elsevier Ltd. All rights reserved.
Spelling in adolescents with dyslexia: errors and modes of assessment.
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.
Spatial methods for deriving crop rotation history
NASA Astrophysics Data System (ADS)
Mueller-Warrant, George W.; Trippe, Kristin M.; Whittaker, Gerald W.; Anderson, Nicole P.; Sullivan, Clare S.
2017-08-01
Benefits of converting 11 years of remote sensing classification data into cropping history of agricultural fields included measuring lengths of rotation cycles and identifying specific sequences of intervening crops grown between final years of old grass seed stands and establishment of new ones. Spatial and non-spatial methods were complementary. Individual-year classification errors were often correctable in spreadsheet-based non-spatial analysis, whereas their presence in spatial data generally led to exclusion of fields from further analysis. Markov-model testing of non-spatial data revealed that year-to-year cropping sequences did not match average frequencies for transitions among crops grown in western Oregon, implying that rotations into new grass seed stands were influenced by growers' desires to achieve specific objectives. Moran's I spatial analysis of length of time between consecutive grass seed stands revealed that clustering of fields was relatively uncommon, with high and low value clusters only accounting for 7.1 and 6.2% of fields.
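The Markov-model testing mentioned above starts from first-order transition frequencies estimated per crop, which can then be compared against average year-to-year transition rates. A minimal stdlib sketch (crop codes and data layout are illustrative, not from the study):

```python
from collections import Counter, defaultdict

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities from
    per-field crop sequences (each a list of crop codes, one per year)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):   # consecutive-year pairs
            counts[prev][nxt] += 1
    probs = {}
    for prev, counter in counts.items():
        total = sum(counter.values())
        probs[prev] = {nxt: c / total for nxt, c in counter.items()}
    return probs
```

Departures of these per-crop conditional probabilities from the overall crop frequencies are what indicate grower-driven (non-random) rotation choices.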
Oesophageal diverticula: principles of management and appraisal of classification.
Borrie, J; Wilson, R L
1980-01-01
In this paper we review a consecutive series of 50 oesophageal diverticula, appraise clinical features and methods of management, and suggest an improvement on the World Health Organization classification. The link between oesophageal diverticula and motor disorders as assessed by oesophageal manometry is stressed. It is necessary to correct the functional disorder as well as the diverticulum if it is causing symptoms. A revised classification could be as follows: congenital--single or multiple; acquired--single (cricopharyngeal, mid-oesophageal, epiphrenic, other) or multiple (for example, when cricopharyngeal and mid-oesophageal diverticula present together, or when there is intramural diverticulosis). PMID:6781091
Malinovsky, Yaakov; Albert, Paul S; Roy, Anindya
2016-03-01
In the context of group testing screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure in a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms in a homogeneous population with misclassification. In both cases, the authors evaluated performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show why accounting for correct classification in the objective function matters when designing group testing procedures under misclassification. We also present novel analytical results that characterize the optimal Dorfman (1943) design under misclassification. © 2015, The International Biometric Society.
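For intuition, both competing criteria can be written down for the classical two-stage Dorfman design: expected tests per person and expected correct classifications per person. The sketch below assumes the pooled test has the same sensitivity/specificity as the individual test and that errors are independent given true status; these are simplifying assumptions on our part, not the article's exact model:

```python
def dorfman_performance(k, p, se, sp):
    """Expected tests per person and expected correct classifications per
    person for two-stage Dorfman testing with pool size k, prevalence p,
    test sensitivity se and specificity sp."""
    all_neg = (1 - p) ** k
    p_pool_pos = se * (1 - all_neg) + (1 - sp) * all_neg
    tests_per_person = 1 / k + p_pool_pos
    # truly positive person: needs pool+ (prob se) then individual+ (prob se)
    correct_pos = se * se
    # truly negative person: correct if pool- , or pool+ then individual-
    others_neg = (1 - p) ** (k - 1)
    pool_pos_given_neg = se * (1 - others_neg) + (1 - sp) * others_neg
    correct_neg = (1 - pool_pos_given_neg) + pool_pos_given_neg * sp
    correct_per_person = p * correct_pos + (1 - p) * correct_neg
    return tests_per_person, correct_per_person
```

With an imperfect test, shrinking the pool size can raise the correct-classification rate while raising the number of tests, which is exactly the tension the article's objective function is built to balance.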
Use of genetic algorithm for the selection of EEG features
NASA Astrophysics Data System (ADS)
Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.
2015-09-01
Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
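The GA-as-combinatorial-optimizer idea above can be sketched as a bitstring GA over candidate electrodes, with the FCM classification accuracy replaced by a caller-supplied fitness function. Operator choices here (tournament selection, one-point crossover, bit-flip mutation) are generic GA defaults, not necessarily those of the paper:

```python
import random

def ga_select_features(n_features, fitness, pop_size=30, generations=40,
                       crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm for subset selection: chromosomes are
    bit vectors over candidate features (e.g. electrodes); `fitness`
    scores the subset a chromosome encodes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            child, mate = pick()[:], pick()[:]
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_features)
                child = child[:cut] + mate[cut:]
            for i in range(n_features):            # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best[:]
    return best
```

In the paper's setting the fitness call would train and evaluate the FCM classifier on the electrodes whose bits are set; here any scoring function can be plugged in.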
Ruoff, Kaspar; Karoui, Romdhane; Dufour, Eric; Luginbühl, Werner; Bosset, Jacques-Olivier; Bogdanov, Stefan; Amado, Renato
2005-03-09
The potential of front-face fluorescence spectroscopy for the authentication of unifloral and polyfloral honey types (n = 57 samples) previously classified using traditional methods such as chemical, pollen, and sensory analysis was evaluated. Emission spectra were recorded between 280 and 480 nm (excit: 250 nm), 305 and 500 nm (excit: 290 nm), and 380 and 600 nm (excit: 373 nm) directly on honey samples. In addition, excitation spectra (290-440 nm) were recorded with the emission measured at 450 nm. A total of four different spectral data sets were considered for data analysis. After normalization of the spectra, chemometric evaluation of the spectral data was carried out using principal component analysis (PCA) and linear discriminant analysis (LDA). The rate of correct classification ranged from 36% to 100% by using single spectral data sets (250, 290, 373, 450 nm) and from 73% to 100% by combining these four data sets. For alpine polyfloral honey and the unifloral varieties investigated (acacia, alpine rose, honeydew, chestnut, and rape), correct classification ranged from 96% to 100%. This preliminary study indicates that front-face fluorescence spectroscopy is a promising technique for the authentication of the botanical origin of honey. It is nondestructive, rapid, easy to use, and inexpensive. The use of additional excitation wavelengths between 320 and 440 nm could increase the correct classification of the less characteristic fluorescent varieties.
Huang, Y; Andueza, D; de Oliveira, L; Zawadzki, F; Prache, S
2015-11-01
Since consumers are showing increased interest in the origin and method of production of their food, it is important to be able to authenticate the dietary history of animals by rapid and robust methods applicable to ruminant products. Promising breakthroughs have been made in the use of spectroscopic methods on fat to discriminate pasture-fed and concentrate-fed lambs. However, questions remain about their discriminatory ability in more complex feeding conditions, such as concentrate-finishing after pasture-feeding. We compared the ability of visible reflectance spectroscopy (Vis RS, wavelength range: 400 to 700 nm) with that of visible-near-infrared reflectance spectroscopy (Vis-NIR RS, wavelength range: 400 to 2500 nm) to differentiate between carcasses of lambs reared with three feeding regimes, using partial least squares discriminant analysis (PLS-DA) as a classification method. The sample set comprised perirenal fat of Romane male lambs fattened at pasture (P, n = 69), stall-fattened indoors on commercial concentrate and straw (S, n = 55) and finished indoors with concentrate and straw for 28 days after pasture-feeding (PS, n = 65). The overall correct classification rate was better for Vis-NIR RS than for Vis RS (99.0% v. 95.1%, P < 0.05). Vis-NIR RS allowed a correct classification rate of 98.6%, 100.0% and 98.5% for P, S and PS lambs, respectively, whereas Vis RS allowed a correct classification rate of 98.6%, 94.5% and 92.3% for P, S and PS lambs, respectively. This study suggests the likely implication of molecules absorbing light in the non-visible part of the Vis-NIR spectrum (possibly fatty acids), together with carotenoid and haem pigments, in the discrimination of the three feeding regimes.
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig
2016-06-01
Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics-level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank)-to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
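The plurality aggregation and the three certainty metrics described above can be sketched in a few lines of stdlib Python. Evenness is computed here as Pielou's index (Shannon entropy over its maximum), which is an assumption about the exact definition used:

```python
import math
from collections import Counter

def aggregate_classifications(answers):
    """Aggregate one image's volunteer answers by plurality vote and
    return (consensus, evenness, fraction_support, fraction_blank),
    where the literal answer "nothing here" marks a blank report."""
    votes = Counter(answers)
    consensus, support = votes.most_common(1)[0]
    n = len(answers)
    fraction_support = support / n
    fraction_blank = votes["nothing here"] / n
    if len(votes) == 1:
        evenness = 0.0   # unanimous: zero disagreement
    else:
        entropy = -sum((c / n) * math.log(c / n) for c in votes.values())
        evenness = entropy / math.log(len(votes))  # Pielou's index in [0, 1]
    return consensus, evenness, fraction_support, fraction_blank
```

Images with high evenness, low fraction support, or high fraction blank would be the ones routed to expert review.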
Kosmala, Margaret; Lintott, Chris; Packer, Craig
2016-01-01
Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large‐scale camera‐trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics—level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal (fraction blank)—to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert‐verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post‐hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large‐scale monitoring of African wildlife. PMID:27111678
Statistical classification techniques for engineering and climatic data samples
NASA Technical Reports Server (NTRS)
Temple, E. C.; Shipman, J. R.
1981-01-01
Fisher's sample linear discriminant function is modified through an appropriate alteration of the common sample variance-covariance matrix. The alteration consists of adding nonnegative values to the eigenvalues of the sample variance-covariance matrix. The desired result of this modification is to increase the number of correct classifications by the new linear discriminant function over Fisher's function. This study is limited to the two-group discriminant problem.
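A minimal two-dimensional sketch of the idea: adding a nonnegative constant lam to the diagonal of the pooled covariance raises every eigenvalue by exactly lam, which is the simplest instance of the alteration described. The data and the ridge value are invented for illustration; this is not the paper's procedure in full generality.

```python
def mean2(xs):
    n = len(xs)
    return (sum(p[0] for p in xs) / n, sum(p[1] for p in xs) / n)

def cov2(xs, m):
    n = len(xs)
    sxx = sum((p[0] - m[0]) ** 2 for p in xs) / n
    syy = sum((p[1] - m[1]) ** 2 for p in xs) / n
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in xs) / n
    return [[sxx, sxy], [sxy, syy]]

def modified_lda(class1, class2, lam=0.1):
    """Fisher-style linear discriminant with lam added to the diagonal of
    the pooled covariance (every eigenvalue increases by lam)."""
    m1, m2 = mean2(class1), mean2(class2)
    c1, c2 = cov2(class1, m1), cov2(class2, m2)
    s = [[(c1[i][j] + c2[i][j]) / 2 + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    d = (m1[0] - m2[0], m1[1] - m2[1])
    w = (inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1])
    mid = ((m1[0] + m2[0]) / 2, (m1[1] + m2[1]) / 2)
    # Classify by the sign of the discriminant score relative to the midpoint.
    return lambda p: 1 if w[0] * (p[0] - mid[0]) + w[1] * (p[1] - mid[1]) > 0 else 2

classify = modified_lda([(0, 0), (1, 0), (0, 1), (1, 1)],
                        [(5, 5), (6, 5), (5, 6), (6, 6)])
```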
[From new genetic and histological classifications to direct treatment].
Compérat, Eva; Furudoï, Adeline; Varinot, Justine; Rioux-Leclerq, Nathalie
2016-08-01
The most important criterion for optimal cancer treatment is a correct classification of the tumour. During the last three years, considerable progress has been made toward a better definition of urothelial carcinoma (UC), especially from a molecular point of view. We are beginning to develop a global understanding of UC, although many details are still not completely understood. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the emotional speech spectrogram at multiple resolutions should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can discriminate between emotions more clearly than uniform-resolution analysis. In order to provide highly accurate emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotional recognition in speech.
Elsebaie, H B; Dannawi, Z; Altaf, F; Zaidan, A; Al Mukhtar, M; Shaw, M J; Gibson, A; Noordeen, H
2016-02-01
The achievement of shoulder balance is an important measure of successful scoliosis surgery. No previously described classification system has taken shoulder balance into account. We propose a simple classification system for AIS based on two components: the curve type and the shoulder level. Altogether, three curve types have been defined according to the size and location of the curves; each curve pattern is subdivided into type A or B depending on the shoulder level. A retrospective analysis of the radiographs of 232 consecutive cases of AIS patients treated surgically between 2005 and 2009 was also performed. Three major types and six subtypes were identified. Type I accounted for 30%, type II 28% and type III 42%. The retrospective analysis showed three patients developed a decompensation that required extension of the fusion. One case developed worsening of shoulder balance requiring further surgery. The classification was tested for interobserver reproducibility and intraobserver reliability. The mean kappa coefficients for interobserver reproducibility ranged from 0.89 to 0.952, while the mean kappa value for intraobserver reliability was 0.964, indicating good-to-excellent reliability. The treatment algorithm guides the spinal surgeon to achieve optimal curve correction and postoperative shoulder balance whilst fusing the smallest number of spinal segments. The high interobserver reproducibility and intraobserver reliability makes it an invaluable tool to describe scoliosis curves in everyday clinical practice.
Blood vessel classification into arteries and veins in retinal images
NASA Astrophysics Data System (ADS)
Kondermann, Claudia; Kondermann, Daniel; Yan, Michelle
2007-03-01
The prevalence of diabetes is expected to increase dramatically in coming years; already today it accounts for a major proportion of the health care budget in many countries. Diabetic Retinopathy (DR), a microvascular complication very often seen in diabetes patients, is the most common cause of visual loss in the working-age population of developed countries today. Since the possibility of slowing or even stopping the progress of this disease depends on the early detection of DR, an automatic analysis of fundus images would be of great help to the ophthalmologist, due to the small size of the symptoms and the large number of patients. An important symptom of DR is abnormally wide veins, leading to an unusually low ratio of the average diameter of arteries to veins (AVR). There are also other diseases, such as high blood pressure or diseases of the pancreas, with one symptom being an abnormal AVR value. To determine it, a classification of vessels as arteries or veins is indispensable. To our knowledge, despite its importance, there have been only two approaches to vessel classification so far. We therefore propose an improved method. We compare two feature extraction methods and two classification methods based on support vector machines and neural networks. Given a hand-segmentation of vessels, our approach achieves 95.32% correctly classified vessel pixels. This value decreases by 10% on average if the result of a segmentation algorithm is used as the basis for the classification.
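Once each vessel has been labeled as artery or vein, the AVR itself reduces to a ratio of mean diameters. A toy sketch with invented diameter values (in pixels):

```python
def avr(artery_diameters, vein_diameters):
    """Ratio of the average artery diameter to the average vein diameter.
    Abnormally wide veins push this ratio below its typical range."""
    mean_a = sum(artery_diameters) / len(artery_diameters)
    mean_v = sum(vein_diameters) / len(vein_diameters)
    return mean_a / mean_v

# Invented measurements: wide veins lower the ratio.
ratio = avr([9.0, 10.0, 11.0], [15.0, 16.0, 17.0])
```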
Reliable nanomaterial classification of powders using the volume-specific surface area method
NASA Astrophysics Data System (ADS)
Wohlleben, Wendel; Mielke, Johannes; Bianchin, Alvise; Ghanem, Antoine; Freiberger, Harald; Rauscher, Hubert; Gemeinert, Marion; Hodoroaba, Vasile-Dan
2017-02-01
The volume-specific surface area (VSSA) of a particulate material is one of two apparently very different metrics recommended by the European Commission for a definition of "nanomaterial" for regulatory purposes: specifically, the VSSA metric may classify nanomaterials and non-nanomaterials differently than the number-based median size metric, depending on the chemical composition, size, polydispersity, shape, porosity, and aggregation of the particles in the powder. Here we evaluate the extent of agreement between classification by electron microscopy (EM) and classification by VSSA on a large set of diverse particulate substances that represent all the anticipated challenges except mixtures of different substances. EM and VSSA are determined in multiple labs to assess also the level of reproducibility. Based on the results obtained on highly characterized benchmark materials from the NanoDefine EU FP7 project, we derive a tiered screening strategy for the purpose of implementing the definition of nanomaterials. We finally apply the screening strategy to further industrial materials, which were classified correctly, leaving only borderline cases for EM. On platelet-shaped nanomaterials, VSSA is essential to prevent false-negative classification by EM. On porous materials, approaches involving extended adsorption isotherms prevent false-positive classification by VSSA. We find no false negatives by VSSA, neither in Tier 1 nor in Tier 2, despite real-world industrial polydispersity and diverse composition, shape, and coatings. The VSSA screening strategy is recommended for inclusion in a technical guidance for the implementation of the definition.
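The VSSA metric itself is simple to compute from a mass-specific surface area (e.g. from BET) and a skeletal density. A hedged sketch; the 60 m^2/cm^3 cutoff is the value from the EC recommendation and is an assumption of this sketch, not taken from the article above:

```python
def vssa(ssa_m2_per_g, density_g_per_cm3):
    """Volume-specific surface area: mass-specific surface area
    multiplied by the skeletal density of the material (m^2/cm^3)."""
    return ssa_m2_per_g * density_g_per_cm3

def screen_as_nano(ssa_m2_per_g, density_g_per_cm3, cutoff=60.0):
    # True when the VSSA criterion classifies the powder as nanomaterial.
    return vssa(ssa_m2_per_g, density_g_per_cm3) >= cutoff

# Invented example: SSA of 30 m^2/g at a density of 4.0 g/cm^3.
is_nano = screen_as_nano(30.0, 4.0)
```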
Socioeconomic status and misperception of body mass index among Mexican adults.
Arantxa Colchero, M; Caro-Vega, Yanink; Kaufer-Horwitz, Martha
2014-01-01
To estimate the association between perceived body mass index (BMI) and socioeconomic variables in adults in Mexico. We studied 32,052 adults from the Mexican National Health and Nutrition Survey of 2006. We estimated BMI misperception by comparing the respondent's weight perception (as categories of BMI) with the corresponding category according to measured weight and height. Misperception was defined as a respondent's perception of a BMI category different from their actual category. Socioeconomic status was assessed using household assets. Logistic and multinomial regression models by gender and BMI category were estimated. Adult women and men highly underestimate their BMI category. We found that the probability of a correct classification was lower than the probability of getting a correct result by chance alone. Better educated and more affluent individuals are more likely to have a correct perception of their weight status, particularly among overweight adults. Given that a correct perception of weight has been associated with increased weight-control efforts, and that our results show that the studied population underestimated their BMI, interventions defining overweight and obesity and their consequences and encouraging the population to monitor their weight could be beneficial.
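The misperception measure reduces to comparing two categorical codings of the same person. A sketch using the standard WHO BMI cutoffs; the survey's exact category boundaries are an assumption here:

```python
def bmi_category(weight_kg, height_m):
    """Category implied by measured weight and height (WHO cutoffs)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def misperceives(perceived_category, weight_kg, height_m):
    """True when the self-reported category differs from the measured one."""
    return perceived_category != bmi_category(weight_kg, height_m)

# Invented respondent: 85 kg at 1.70 m is in the overweight range,
# so a self-report of "normal" counts as (under)misperception.
underestimates = misperceives("normal", 85.0, 1.70)
```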
Multi-site evaluation of IKONOS data for classification of tropical coral reef environments
Andrefouet, S.; Kramer, Philip; Torres-Pulliza, D.; Joyce, K.E.; Hochberg, E.J.; Garza-Perez, R.; Mumby, P.J.; Riegl, Bernhard; Yamano, H.; White, W.H.; Zubia, M.; Brock, J.C.; Phinn, S.R.; Naseer, A.; Hatcher, B.G.; Muller-Karger, F. E.
2003-01-01
Ten IKONOS images of different coral reef sites distributed around the world were processed to assess the potential of 4-m resolution multispectral data for coral reef habitat mapping. Complexity of reef environments, established by field observation, ranged from 3 to 15 classes of benthic habitats containing various combinations of sediments, carbonate pavement, seagrass, algae, and corals in different geomorphologic zones (forereef, lagoon, patch reef, reef flats). Processing included corrections for sea surface roughness and bathymetry, unsupervised or supervised classification, and accuracy assessment based on ground-truth data. IKONOS classification results were compared with classified Landsat 7 imagery for simple to moderate complexity of reef habitats (5-11 classes). For both sensors, overall accuracies of the classifications show a general linear trend of decreasing accuracy with increasing habitat complexity. The IKONOS sensor performed better, with a 15-20% improvement in accuracy compared to Landsat. For IKONOS, overall accuracy was 77% for 4-5 classes, 71% for 7-8 classes, 65% in 9-11 classes, and 53% for more than 13 classes. The Landsat classification accuracy was systematically lower, with an average of 56% for 5-10 classes. Within this general trend, inter-site comparisons and specificities demonstrate the benefits of different approaches. Pre-segmentation of the different geomorphologic zones and depth correction provided different advantages in different environments. Our results help guide scientists and managers in applying IKONOS-class data for coral reef mapping applications. © 2003 Elsevier Inc. All rights reserved.
Xiao, Di; Zhao, Fei; Zhang, Huifang; Meng, Fanliang; Zhang, Jianzhong
2014-08-01
The typing of Mycoplasma pneumoniae mainly relies on the detection of nucleic acid, which is limited by the use of a single gene target, complex operation procedures, and a lengthy assay time. Here, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) coupled to ClinProTools was used to discover MALDI-TOF MS biomarker peaks and to generate a classification model based on a genetic algorithm (GA) to differentiate between type 1 and type 2 M. pneumoniae isolates. Twenty-five M. pneumoniae strains were used to construct an analysis model, and 43 Mycoplasma strains were used for validation. For the GA typing model, the cross-validation values, which reflect the ability of the model to handle variability among the test spectra and the recognition capability value, which reflects the model's ability to correctly identify its component spectra, were all 100%. This model contained 7 biomarker peaks (m/z 3,318.8, 3,215.0, 5,091.8, 5,766.8, 6,337.1, 6,431.1, and 6,979.9) used to correctly identify 31 type 1 and 7 type 2 M. pneumoniae isolates from 43 Mycoplasma strains with a sensitivity and specificity of 100%. The strain distribution map and principle component analysis based on the GA classification model also clearly showed that the type 1 and type 2 M. pneumoniae isolates can be divided into two categories based on their peptide mass fingerprints. With the obvious advantages of being rapid, highly accurate, and highly sensitive and having a low cost and high throughput, MALDI-TOF MS ClinProTools is a powerful and reliable tool for M. pneumoniae typing. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Classification of weld defect based on information fusion technology for radiographic testing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying
2016-03-01
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
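Dempster's rule of combination, the core operation of the evidence theory named above, can be sketched directly. The feature mass assignments and defect classes below are invented for illustration; they are not the paper's actual mass functions:

```python
def combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements and
    renormalize by 1 - K, where K is the total conflicting mass."""
    fused, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Two features each assign belief mass over hypothetical defect classes;
# focal elements are frozensets so that set intersection is well defined.
m_feature1 = {frozenset({"crack"}): 0.6,
              frozenset({"crack", "porosity"}): 0.4}
m_feature2 = {frozenset({"crack"}): 0.5,
              frozenset({"porosity"}): 0.2,
              frozenset({"crack", "porosity"}): 0.3}
fused = combine(m_feature1, m_feature2)
```

Combining more than two features is just a left fold of `combine` over the per-feature mass functions.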
Hogan, R E; Wang, L; Bertrand, M E; Willmore, L J; Bucholz, R D; Nassif, A S; Csernansky, J G
2006-01-01
We objectively assessed surface structural changes of the hippocampus in mesial temporal sclerosis (MTS) and assessed the ability of large-deformation high-dimensional mapping (HDM-LD) to demonstrate hippocampal surface symmetry and predict group classification of MTS in right and left MTS groups compared with control subjects. Using eigenvector field analysis of HDM-LD segmentations of the hippocampus, we compared the symmetry of changes in the right and left MTS groups with a group of 15 matched controls. To assess the ability of HDM-LD to predict group classification, eigenvectors were selected by a logistic regression procedure when comparing the MTS group with control subjects. Multivariate analysis of variance on the coefficients from the first 9 eigenvectors accounted for 75% of the total variance between groups. The first 3 eigenvectors showed the largest differences between the control group and each of the MTS groups, but with eigenvector 2 showing the greatest difference in the MTS groups. Reconstruction of the hippocampal deformation vector fields due solely to eigenvector 2 shows symmetrical patterns in the right and left MTS groups. A "leave-one-out" (jackknife) procedure correctly predicted group classification in 14 of 15 (93.3%) left MTS subjects and all 15 right MTS subjects. Analysis of principal dimensions of hippocampal shape change suggests that MTS, after accounting for normal right-left asymmetries, affects the right and left hippocampal surface structure very symmetrically. Preliminary analysis using HDM-LD shows it can predict group classification of MTS and control hippocampi in this well-defined population of patients with MTS and mesial temporal lobe epilepsy (MTLE).
Classifying seismic waveforms from scratch: a case study in the alpine environment
NASA Astrophysics Data System (ADS)
Hammer, C.; Ohrnberger, M.; Fäh, D.
2013-01-01
Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard techniques for detection, like the STA/LTA trigger, still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows the classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast setup of a well-working classification system.
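The STA/LTA trigger mentioned above as the standard baseline is easy to sketch: the ratio of short-term to long-term average signal energy jumps at an event onset. Window lengths and the synthetic trace below are illustrative choices, not values from the study:

```python
def sta_lta(signal, ns=5, nl=50):
    """Ratio of short-term to long-term average energy at each sample;
    a detection is declared wherever the ratio exceeds a threshold."""
    energy = [x * x for x in signal]
    ratios = []
    for i in range(nl, len(signal)):
        sta = sum(energy[i - ns:i]) / ns
        lta = sum(energy[i - nl:i]) / nl
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet background followed by a burst: the ratio jumps at the onset.
trace = [0.1] * 100 + [1.0] * 10 + [0.1] * 50
ratios = sta_lta(trace)
```

During steady background the ratio sits near 1; the short window reacts to the burst long before the long window does, which is what makes the ratio a useful trigger.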
Minimalist approach to the classification of symmetry protected topological phases
NASA Astrophysics Data System (ADS)
Xiong, Zhaoxi
A number of proposals with differing predictions (e.g. group cohomology, cobordisms, group supercohomology, spin cobordisms, etc.) have been made for the classification of symmetry protected topological (SPT) phases. Here we treat various proposals on equal footing and present rigorous, general results that are independent of which proposal is correct. We do so by formulating a minimalist Generalized Cohomology Hypothesis, which is satisfied by existing proposals and captures essential aspects of SPT classification. From this Hypothesis alone, formulas relating classifications in different dimensions and/or protected by different symmetry groups are derived. Our formalism is expected to work for fermionic as well as bosonic phases, Floquet as well as stationary phases, and spatial as well as on-site symmetries.
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein based on NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalences of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories. Laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and this information can be used to understand and point out the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB.
Santamaria-Fernandez, Rebeca; Wolff, Jean-Claude
2010-07-30
The potential of high-precision calcium and lead isotope ratio measurements using laser ablation coupled to multicollector inductively coupled plasma mass spectrometry (LA-MC-ICP-MS) to aid distinction between four genuine and five counterfeit pharmaceutical packaging samples and further classification of counterfeit packaging samples has been evaluated. We highlight the lack of reference materials for LA-MC-ICP-MS isotope ratio measurements in solids. In this case the problem is minimised by using National Institute of Standards and Technology Standard Reference Material (NIST SRM) 915a calcium carbonate (as solid pellets) and NIST SRM610 glass disc for sample bracketing external standardisation. In addition, a new reference material, NIST SRM915b calcium carbonate, has been characterised in-house for Ca isotope ratios and is used as a reference sample. Significant differences have been found between genuine and counterfeit samples; the method allows detection of counterfeits and aids further classification of packaging samples. Typical expanded uncertainties for measured-corrected Ca isotope ratio values ((43)Ca/(44)Ca and (42)Ca/(44)Ca) were found to be below 0.06% (k = 2, 95% confidence) and below 0.2% for measured-corrected Pb isotope ratios ((207)Pb/(206)Pb and (208)Pb/(206)Pb). This is the first time that Ca isotope ratios have been measured in packaging materials using LA coupled to a multicollector (MC)-ICP-MS instrument. The use of LA-MC-ICP-MS for direct measurement of Ca and Pb isotopic variations in cardboard/ink in packaging has definitive potential to aid counterfeit detection and classification. Copyright 2010 John Wiley & Sons, Ltd.
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades. Increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Due to inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Meanwhile, object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed a multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain the appropriate parameters; using a top-down region-merge algorithm starting at the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. The segmented objects were then used as classification units for calculating spectral information such as the mean, maximum, minimum, brightness and normalized values. Spatial features (the area, length, tightness and shape rule of the image objects) and texture features (the mean, variance and entropy of the image objects) were used as classification features of the training samples. Based on the reference images and field-survey sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository.
Finally, with the help of high-resolution reference images, the random sampling method was used to conduct the field investigation, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the results obtained from the traditional methodology. Our decision tree repository and rule set based object-oriented classification technique is an effective method for producing comparable and consistent wetlands data sets.
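A threshold rule set of the kind stored in such a repository can be sketched as a simple first-match classifier. The feature names, value ranges, and wetland classes below are invented for illustration; they are not the rules from the study:

```python
# Each rule: (feature name, low, high, wetland class); first match wins.
rules = [
    ("brightness", 0.00, 0.15, "open water"),
    ("ndvi", 0.45, 1.00, "marsh vegetation"),    # invented threshold
    ("entropy", 0.60, 1.00, "aquaculture pond"), # invented threshold
]

def classify(obj, rules, default="tidal flat"):
    """Apply decision-tree threshold rules to one image object's features."""
    for feature, low, high, label in rules:
        value = obj.get(feature)
        if value is not None and low <= value <= high:
            return label
    return default

# An image object with invented feature values falls through the first
# rule (too bright for water) and matches the NDVI rule.
label = classify({"brightness": 0.40, "ndvi": 0.62, "entropy": 0.30}, rules)
```

Keeping the rules as data rather than code is what makes the "repository" idea work: different scenes can share one classifier and swap in scene-specific thresholds.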
Progressive Classification Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Kocurek, Michael
2009-01-01
An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified.
The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
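The progressive scheme can be illustrated with stand-in classifiers (both invented here: a crude threshold plays the fast SVM and an exact oracle plays the slow one), refining the least confident items first under a fixed budget:

```python
def progressive_classify(points, fast, slow, confidence, budget):
    """Label everything with the fast model, then re-label the least
    confident items with the slow model until the budget runs out."""
    labels = [fast(p) for p in points]
    order = sorted(range(len(points)), key=lambda i: confidence(points[i]))
    for i in order[:budget]:
        labels[i] = slow(points[i])  # refine least confident first
    return labels

fast = lambda x: 1 if x > 0.5 else -1   # biased, cheap approximation
slow = lambda x: 1 if x > 0.0 else -1   # accurate but "expensive" oracle
confidence = abs                        # distance from the decision boundary

points = [-2.0, -0.2, 0.3, 2.0]
labels = progressive_classify(points, fast, slow, confidence, budget=2)
```

With a budget of 2, only the two points nearest the boundary are reclassified, which is exactly where the fast model errs; halting earlier simply leaves more of the coarse labels in place.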
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps to map the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution, MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The classifications were augmented using this fused approach, with a few supplementary advantages such as correction for cloud cover and independence from time of year. We concluded that this method would generate highly accurate land maps, using coarse spatial resolution time series satellite imagery and a single-date, high spatial resolution, multi-spectral image.
Benchmark data on the separability among crops in the southern San Joaquin Valley of California
NASA Technical Reports Server (NTRS)
Morse, A.; Card, D. H.
1984-01-01
Landsat MSS data were input to a discriminant analysis of 21 crops on each of eight dates in 1979 using a total of 4,142 fields in southern Fresno County, California. The 21 crops, which together account for over 70 percent of the agricultural acreage in the southern San Joaquin Valley, were analyzed to quantify the spectral separability, defined as omission error, between all pairs of crops. On each date the fields were segregated into six groups based on the mean value of the MSS7/MSS5 ratio, which is correlated with green biomass. Discriminant analysis was run on each group on each date. The resulting contingency tables offer information that can be profitably used in conjunction with crop calendars to pick the best dates for a classification. The tables show expected percent correct classification and error rates for all the crops. The patterns in the contingency tables show that the percent correct classification for crops generally increases with the amount of greenness in the fields being classified. However, there are exceptions to this general rule, notably grain.
VizieR Online Data Catalog: Diffuse ionized gas database DIGEDA (Flores-Fajardo+ 2009)
NASA Astrophysics Data System (ADS)
Flores-Fajardo, N.; Morisset, C.; Binette, L.
2009-09-01
DIGEDA is a comprehensive database comprising 1061 DIG and HII region spectroscopic observations of 29 different galaxies (25 spiral galaxies and 4 irregulars) from 18 bibliographic references. This survey contains galaxies with significant spread in star formation rates, Halpha luminosities, distances, disk inclinations, slit positions and slit orientations. The 1061 observations obtained from these references were extracted by digitization of published figures or tables. The data were subsequently normalized and incorporated in DIGEDA. This resulted in a table of 26 columns containing 1061 data lines or records (DIGEDA.dat file). We have not performed any correction for dust reddening or for the presence of underlying absorption lines, although we did use the reddening-corrected ratios when the authors made them available. Missing entries are represented by (-1) in the corresponding data field. In DIGEDA the observed areas are classified into three possible emission region types: HII region, transition zone or DIG. When this classification was not reported by the authors (whatever the criterion), we introduced our own classification taking into account the value of |z|, as described in the paper. (4 data files).
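A small sketch of working with the two conventions stated above: missing values coded as -1, and a fallback classification from |z| when the source reference gives none. The |z| thresholds below are illustrative placeholders, not the catalog's actual values.

```python
def parse_field(token):
    # DIGEDA convention: a value of -1 marks a missing entry
    value = float(token)
    return None if value == -1 else value

def region_type(abs_z_kpc, t_hii=0.3, t_dig=1.0):
    # hypothetical height thresholds (kpc): low |z| -> HII region,
    # intermediate -> transition zone, high -> DIG
    if abs_z_kpc is None:
        return "unclassified"
    if abs_z_kpc < t_hii:
        return "HII"
    if abs_z_kpc < t_dig:
        return "transition"
    return "DIG"

print(parse_field("-1"))
print(region_type(parse_field("2.4")))
```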
Document image improvement for OCR as a classification problem
NASA Astrophysics Data System (ADS)
Summers, Kristen M.
2003-01-01
In support of the goal of automatically selecting methods of enhancing an image to improve the accuracy of OCR on that image, we consider the problem of determining whether to apply each of a set of methods as a supervised classification problem for machine learning. We characterize each image according to a combination of two sets of measures: a set intended to reflect the degree of particular types of noise present in documents in a single font of Roman or similar script, and a more general set based on connected component statistics. We consider several potential methods of image improvement, each of which constitutes its own 2-class classification problem according to whether transforming the image with this method improves the accuracy of OCR. In our experiments, the results varied for the different image transformation methods, but the system made the correct choice in 77% of the cases in which the decision affected the OCR score (in the range [0,1]) by at least 0.01, and it made the correct choice 64% of the time overall.
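The evaluation protocol above can be sketched directly: a binary apply/skip decision per transform is scored as correct when it matches the sign of the resulting OCR score change, counting only cases where that change is at least 0.01. The records below are invented for demonstration.

```python
def correct_choice_rate(records, eps=0.01):
    # records: list of (predicted_apply: bool, delta_ocr_score: float);
    # only decisions affecting the OCR score by at least `eps` are counted
    decided = [(p, d) for p, d in records if abs(d) >= eps]
    correct = sum(1 for p, d in decided if p == (d > 0))
    return correct / len(decided)

records = [
    (True, 0.05),    # applied, and it helped: correct
    (True, -0.02),   # applied, but it hurt: incorrect
    (False, -0.03),  # skipped a harmful transform: correct
    (False, 0.004),  # change below eps: not counted
]
print(correct_choice_rate(records))
```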
NASA Astrophysics Data System (ADS)
Uríčková, Veronika; Sádecká, Jana
2015-09-01
The identification of the geographical origin of beverages is one of the most important issues in food chemistry. Spectroscopic methods provide a relatively rapid and low-cost alternative to traditional chemical composition or sensory analyses. This paper reviews the current state of development of ultraviolet (UV), visible (Vis), near infrared (NIR) and mid infrared (MIR) spectroscopic techniques combined with pattern recognition methods for determining the geographical origin of both wines and distilled drinks. UV, Vis, and NIR spectra contain broad bands with weak spectral features, limiting their discrimination ability. Despite this expected shortcoming, each of the three spectroscopic ranges (NIR, Vis/NIR and UV/Vis/NIR) provides average correct classification higher than 82%. Although average correct classification is similar for the NIR and MIR regions, in some instances MIR data processing improves prediction. An advantage of using MIR is that MIR peaks are better defined and more easily assigned than NIR bands. In general, success in classification depends on both the spectral range and the pattern recognition method. The main remaining problem is the construction of the databanks needed for all of these methods.
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
Topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% of central and 14.1% of border neurons were classified correctly, with a total of 32.8% of neurons misclassified. Most notably, 62.2% of the border-neuron group was misclassified, which exceeds the 37.8% classified correctly in that group and shows a clear failure of the network to classify these neurons on the basis of the computational parameters used in our study. On the virtual sample, 97.3% of the border-neuron group was misclassified, far exceeding the 2.7% classified correctly in that group and again confirming this failure. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In contrast, 96.74% of neurons were morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can therefore be classified into four types according to their quantitative histomorphological properties. These types form two sets, small and large with respect to their perikarya, each with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes: neurons with small soma and long dendrites, and neurons with large soma and short dendrites. These neurons are most probably distributed evenly throughout the dentate nucleus, as no significant difference in their topological distribution was observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
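The four-type scheme above is a 2x2 crossing of soma size with dendrite length, which can be sketched as a simple threshold rule. The numeric thresholds are hypothetical placeholders; the study derives the types from quantified histomorphological parameters, not fixed cutoffs.

```python
def neuron_type(soma_area_um2, dendrite_length_um,
                soma_threshold=250.0, dendrite_threshold=300.0):
    # thresholds are invented for illustration only
    size = "large" if soma_area_um2 >= soma_threshold else "small"
    length = "long" if dendrite_length_um >= dendrite_threshold else "short"
    return f"{size} soma, {length} dendrites"

print(neuron_type(180.0, 420.0))
print(neuron_type(310.0, 120.0))
```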
Tamboer, P; Vorst, H C M; Ghebreab, S; Scholte, H S
2016-01-01
Meta-analytic studies suggest that dyslexia is characterized by subtle and spatially distributed variations in brain anatomy, although many variations failed to reach significance after correction for multiple comparisons. To circumvent the significance issues characteristic of conventional analysis techniques, and to provide predictive value, we applied a machine learning technique, the support vector machine, to differentiate between subjects with and without dyslexia. In a sample of 22 students with dyslexia (20 women) and 27 students without dyslexia (25 women), aged 18-21 years, a classification performance of 80% (p < 0.001; d-prime = 1.67) was achieved on the basis of differences in gray matter (sensitivity 82%, specificity 78%). The voxels that were most reliable for classification were found in the left occipital fusiform gyrus (LOFG), the right occipital fusiform gyrus (ROFG), and the left inferior parietal lobule (LIPL). Additionally, we found that classification certainty (i.e. the percentage of times a subject was correctly classified) correlated with severity of dyslexia (r = 0.47). Furthermore, various significant correlations were found between the three anatomical regions and behavioural measures of spelling, phonology and whole-word reading. No correlations were found with behavioural measures of short-term memory and visual/attentional confusion. These data indicate that the LOFG, ROFG and LIPL are neuro-endophenotypes and potential biomarkers for types of dyslexia related to reading, spelling and phonology. In a second and independent sample of 876 young adults from the general population, the classifier trained on the first sample was tested, resulting in a classification performance of 59% (p = 0.07; d-prime = 0.65). This decline in classification performance resulted from a large percentage of false alarms. This study provides support for the use of machine learning in anatomical brain imaging.
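The reported d-prime can be recovered from the sensitivity and specificity under the standard equal-variance signal-detection model, d' = z(hit rate) - z(false-alarm rate). Plugging in the reported 82% sensitivity and 78% specificity gives a value close to the reported d' = 1.67 (the small gap is plausibly rounding of the rates). A sketch using only the Python standard library:

```python
from statistics import NormalDist

def d_prime(sensitivity, specificity):
    # equal-variance signal-detection model:
    # d' = z(hit rate) - z(false-alarm rate), where z is the probit (inverse CDF)
    z = NormalDist().inv_cdf
    return z(sensitivity) - z(1.0 - specificity)

print(round(d_prime(0.82, 0.78), 2))
```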
Phaeochromocytoma [corrected] crisis.
Whitelaw, B C; Prague, J K; Mustafa, O G; Schulte, K-M; Hopkins, P A; Gilbert, J A; McGregor, A M; Aylwin, S J B
2014-01-01
Phaeochromocytoma crisis is an endocrine emergency associated with significant mortality. There is little published guidance on the management of phaeochromocytoma crisis. This clinical practice update summarizes the relevant published literature, including a detailed review of cases published in the past 5 years, and a proposed classification system. We review the recommended management of phaeochromocytoma crisis, including the use of alpha-blockade, which is strongly associated with survival of a crisis. Mechanical circulatory supportive therapy (including intra-aortic balloon pump or extra-corporeal membrane oxygenation) is strongly recommended for patients with sustained hypotension. Surgical intervention should be deferred until medical stabilization is achieved. © 2013 John Wiley & Sons Ltd.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeat searches for the worst position in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
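The three ingredients named above (a binary mask over features, a memory of visited positions, and a wrapper fitness from a classifier) can be sketched in miniature. This is not the authors' WSA or their extreme learning machine: a single-solution bit-flip search stands in for the swarm, and leave-one-out nearest-centroid accuracy stands in for the ELM wrapper; the toy data are invented.

```python
import random

def fitness(mask, X, y):
    # wrapper fitness: leave-one-out nearest-centroid accuracy on the
    # features selected by the binary mask
    feats = [i for i, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        centroids = {}
        for c in set(y):
            pts = [X[j] for j in range(len(X)) if y[j] == c and j != i]
            centroids[c] = [sum(p[f] for p in pts) / len(pts) for f in feats]
        xi = [X[i][f] for f in feats]
        pred = min(centroids,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(xi, centroids[c])))
        correct += pred == y[i]
    return correct / len(X)

def select_features(X, y, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(X[0])
    best = [rng.random() < 0.5 for _ in range(n)]
    best_f = fitness(best, X, y)
    visited = set()  # memory: never re-evaluate an already-searched position
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n)] ^= True  # flip one bit of the current best
        key = tuple(cand)
        if key in visited:
            continue
        visited.add(key)
        f = fitness(cand, X, y)
        # accept better fitness; at equal fitness prefer fewer features
        if f > best_f or (f == best_f and sum(cand) < sum(best)):
            best, best_f = cand, f
    return best, best_f

# toy data: features 0 and 1 are informative, 2 and 3 are noise
X = [[0.0, 0.1, 0.9, 0.2], [0.1, 0.0, 0.1, 0.8],
     [1.0, 0.9, 0.5, 0.1], [0.9, 1.0, 0.2, 0.9]]
y = [0, 0, 1, 1]
mask, acc = select_features(X, y)
print(mask, acc)
```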
Koutsouleris, Nikolaos; Meisenzahl, Eva M.; Davatzikos, Christos; Bottlender, Ronald; Frodl, Thomas; Scheuerecker, Johanna; Schmitt, Gisela; Zetzsche, Thomas; Decker, Petra; Reiser, Maximilian; Möller, Hans-Jürgen; Gaser, Christian
2014-01-01
Context Identification of individuals at high risk of developing psychosis has relied on prodromal symptomatology. Recently, machine learning algorithms have been successfully used for magnetic resonance imaging–based diagnostic classification of neuropsychiatric patient populations. Objective To determine whether multivariate neuroanatomical pattern classification facilitates identification of individuals in different at-risk mental states (ARMS) of psychosis and enables the prediction of disease transition at the individual level. Design Multivariate neuroanatomical pattern classification was performed on the structural magnetic resonance imaging data of individuals in early or late ARMS vs healthy controls (HCs). The predictive power of the method was then evaluated by categorizing the baseline imaging data of individuals with transition to psychosis vs those without transition vs HCs after 4 years of clinical follow-up. Classification generalizability was estimated by cross-validation and by categorizing an independent cohort of 45 new HCs. Setting Departments of Psychiatry and Psychotherapy, Ludwig-Maximilians-University, Munich, Germany. Participants The first classification analysis included 20 early and 25 late at-risk individuals and 25 matched HCs. The second analysis consisted of 15 individuals with transition, 18 without transition, and 17 matched HCs. Main Outcome Measures Specificity, sensitivity, and accuracy of classification. Results The 3-group, cross-validated classification accuracies of the first analysis were 86% (HCs vs the rest), 91% (early at-risk individuals vs the rest), and 86% (late at-risk individuals vs the rest). The accuracies in the second analysis were 90% (HCs vs the rest), 88% (individuals with transition vs the rest), and 86% (individuals without transition vs the rest). Independent HCs were correctly classified in 96% (first analysis) and 93% (second analysis) of cases. 
Conclusions Different ARMSs and their clinical outcomes may be reliably identified on an individual basis by assessing patterns of whole-brain neuroanatomical abnormalities. These patterns may serve as valuable biomarkers for the clinician to guide early detection in the prodromal phase of psychosis. PMID:19581561
Deep learning application: rubbish classification with aid of an android device
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Zhan, Jie
2017-06-01
Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know which category a given piece of rubbish belongs to, and building on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed that lets users capture images of unclassified rubbish, upload them to the web server for background analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape is well suited to judging its classification, while an image containing several kinds of rubbish, or rubbish whose shape has changed, may fail to help users decide its classification. Nevertheless, the system shows promising auxiliary value for rubbish classification if the network training strategy is optimized further.
Le Huec, J C; Cogniet, A; Demezon, H; Rigal, J; Saddiki, R; Aunoble, S
2015-01-01
Pedicle subtraction osteotomies (PSO) enable correction of spinal deformities but remain difficult and are associated with high complication rates. This study aimed to prospectively review different post-operative complications and mechanical problems in patients who underwent PSO as treatment for sagittal imbalance as sequelae of degenerative disc disease or previous spinal fusion. This was a descriptive prospective single center study of 63 patients who underwent sagittal imbalance correction by PSO. Radiographic analysis of pre- and post-operative pelvic and spinal parameters was completed based on EOS images following 3D modeling. Global and sub-group analyses were completed based on the Roussouly classification. A systematic analysis of post-operative complications was conducted during hospital stay and at follow-up visits. Complications included 15 cases (20.2%) of bilateral leg pain, with transient neurological deficit in 6 cases (9.5%), and 9 cases (12.5%) of early surgical site infections. Intra-operative complications included five tears of the dura mater and two cases of excessive blood loss (>5,000 mL). Two mortalities occurred from major intracerebral bleeds in the early post-operative period. Mechanical complications were principally non-union (9 cases) and junctional kyphosis (3 cases). All 19 post-operative complications (28.1%) were revised at an average of 2 years following surgery. All mechanical complications were found in the patients who had insufficient imbalance correction and this was mainly associated with high PI (>60°) or a moderate PI (45-60°) combined with excess FBI pre-operatively that remained >10° post-operatively. Infection and neurologic complications following PSO are relatively common, and frequently reported in the literature. The principal cause of mechanical complications, such as non-union or junctional kyphosis, was insufficient sagittal correction, characterized by post-operative FBI >10°.
The risks of insufficient correction are greater in patients with higher pelvic incidence and those patients who required very high correction.
Authorship Discovery in Blogs Using Bayesian Classification with Corrective Scaling
2008-06-01
[List-of-figures fragments: "W. Fucks' Diagram of n-Syllable Word Frequencies"; "Confusion Matrix for All Test Documents".] ... of the books which scholars believed he had. Wilhelm Fucks discriminated between authors using the average number of syllables per word and the average distance between equal-syllabled words [8]. Fucks, too, concluded that a study such as his reveals a "possibility of a quantitative classification"
Human-Centered Planning for Effective Task Autonomy
2012-05-01
For every observation o ≠ o_null: Σ_{o ≠ o_null} p(o | s, a_ask) = α_s (3.1). When the occupant is not available, we say it results in observation o_null: p(o_null | s, a_ask) ... they are paying attention to what it says. Uncertainty: many classification and inference algorithms give a measure of uncertainty, the probability ... provide corrective feedback for handwriting recognition, email classification, and other domains (e.g., Mankoff, Abowd, and Hudson (2000); Scaffidi (2009
NASA Astrophysics Data System (ADS)
Yao, Sen; Li, Tao; Li, JieQing; Liu, HongGao; Wang, YuanZhong
2018-06-01
Boletus griseus and Boletus edulis are two well-known wild-grown edible mushrooms, distributed in Yunnan Province, that have high nutrition, delicious flavor and high economic value. In this study, a rapid method using Fourier transform infrared (FT-IR) and ultraviolet (UV) spectroscopies coupled with data fusion was established for the discrimination of Boletus mushrooms from seven different geographical origins with a pattern recognition method. Initially, the spectra of 332 mushroom samples obtained from the two spectroscopic techniques were analyzed individually, and then the classification performance based on a data fusion strategy was investigated. The latent variables (LVs) of the FT-IR and UV spectra were extracted by partial least squares discriminant analysis (PLS-DA), and the two datasets were concatenated into a new matrix for data fusion. The fusion matrix was then further analyzed by support vector machine (SVM). Compared with a single spectroscopic technique, the data fusion strategy improved the classification performance effectively. In particular, the correct classification rates of the SVM model in the training and test sets were 99.10% and 100.00%, respectively. The results demonstrated that data fusion of FT-IR and UV spectra provides a synergistic effect for the discrimination of different geographical origins of Boletus mushrooms, which may be beneficial for further authentication and quality assessment of edible mushrooms.
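The fusion step itself, combining two spectral blocks into one row-wise matrix, can be sketched in miniature. This is a simplification of the paper's pipeline: here each block is merely autoscaled column-wise before concatenation, whereas the paper extracts PLS-DA latent variables per block and classifies the fused matrix with an SVM; the toy spectra are invented.

```python
def zscore_columns(X):
    # column-wise autoscaling so neither block dominates by raw magnitude
    cols = list(zip(*X))
    scaled = []
    for c in cols:
        m = sum(c) / len(c)
        s = (sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 or 1.0
        scaled.append([(v - m) / s for v in c])
    return [list(row) for row in zip(*scaled)]

def fuse_blocks(ftir, uv):
    # concatenate the scaled rows of both blocks, sample by sample
    return [a + b for a, b in zip(zscore_columns(ftir), zscore_columns(uv))]

ftir = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]               # toy FT-IR block
uv = [[0.2, 0.1, 0.4], [0.1, 0.3, 0.2], [0.3, 0.2, 0.3]]  # toy UV block
fused = fuse_blocks(ftir, uv)
print(len(fused), len(fused[0]))
```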
Classification of cardiac patient states using artificial neural networks
Kannathal, N; Acharya, U Rajendra; Lim, Choo Min; Sadasivan, PK; Krishnan, SM
2003-01-01
Electrocardiogram (ECG) is a nonstationary signal; therefore, disease indicators may occur at random points in time. This may require that the patient be kept under observation for long intervals in the intensive care unit of a hospital for accurate diagnosis. The present study examined the classification of the states of patients with certain diseases in the intensive care unit using their ECG and an Artificial Neural Network (ANN) classification system. The states were classified as normal, abnormal or life-threatening. Seven significant features extracted from the ECG were fed as input parameters to the ANN for classification. Three neural network techniques, namely back propagation, self-organizing maps and radial basis functions, were used for classification of the patient states. The ANN classifier was observed to be correct in approximately 99% of the test cases. This result was further improved by taking 13 features of the ECG as input for the ANN classifier. PMID:19649222
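The core mapping described above, a small feature vector to one of three patient states, can be sketched with a minimal trainable classifier. This is not the study's networks (back propagation, SOM, RBF): a plain softmax classifier trained by gradient descent stands in for the idea, and the 7-feature vectors and labels below are invented.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(X, y, n_classes, epochs=300, lr=0.5, seed=1):
    # multinomial logistic regression fit by per-sample gradient descent
    rng = random.Random(seed)
    n = len(X[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n_classes)]
    b = [0.0] * n_classes
    for _ in range(epochs):
        for x, t in zip(X, y):
            scores = [sum(w * v for w, v in zip(W[c], x)) + b[c]
                      for c in range(n_classes)]
            p = softmax(scores)
            for c in range(n_classes):
                g = p[c] - (1.0 if c == t else 0.0)  # cross-entropy gradient
                b[c] -= lr * g
                for j in range(n):
                    W[c][j] -= lr * g * x[j]
    return W, b

def predict(W, b, x):
    return max(range(len(W)),
               key=lambda c: sum(w * v for w, v in zip(W[c], x)) + b[c])

# toy 7-feature vectors standing in for the extracted ECG features;
# labels: 0 = normal, 1 = abnormal, 2 = life-threatening (invented data)
X = [[1, 0, 0, 0.1, 0, 0, 0], [0.9, 0.1, 0, 0, 0.1, 0, 0],
     [0, 1, 0, 0, 0, 0.1, 0], [0.1, 0.9, 0, 0.1, 0, 0, 0],
     [0, 0, 1, 0, 0.1, 0, 0.1], [0, 0.1, 0.9, 0, 0, 0.1, 0]]
y = [0, 0, 1, 1, 2, 2]
W, b = train(X, y, 3)
print([predict(W, b, x) for x in X])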
Improving crop classification through attention to the timing of airborne radar acquisitions
NASA Technical Reports Server (NTRS)
Brisco, B.; Ulaby, F. T.; Protz, R.
1984-01-01
Radar remote sensors may provide valuable input to crop classification procedures because of (1) their independence of weather conditions and solar illumination, and (2) their ability to respond to differences in crop type. Manual classification of multidate synthetic aperture radar (SAR) imagery resulted in an overall accuracy of 83 percent for corn, forest, grain, and 'other' cover types. Forests and corn fields were identified with accuracies approaching or exceeding 90 percent. Grain fields and 'other' fields were often confused with each other, resulting in classification accuracies of 51 and 66 percent, respectively. The 83 percent correct classification represents a 10 percent improvement when compared to similar SAR data for the same area collected at alternate time periods in 1978. These results demonstrate that improvements in crop classification accuracy can be achieved with SAR data by synchronizing data collection times with crop growth stages in order to maximize differences in the geometric and dielectric properties of the cover types of interest.
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was then extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
Selih, Vid S; Sala, Martin; Drgan, Viktor
2014-06-15
Inductively coupled plasma mass spectrometry and optical emission spectrometry were used to determine the multi-element composition of 272 bottled Slovenian wines. To achieve geographical classification of the wines by their elemental composition, principal component analysis (PCA) and counter-propagation artificial neural networks (CPANN) were used. Of the 49 elements measured, 19 were used to build the final classification models. CPANN was used for the final predictions because of its superior results. The best model gave 82% correct predictions for an external set of the white wine samples. Taking into account the small size of the Slovenian wine-growing regions as a whole, we consider the classification results to be very good. For the red wines, which mostly came from one region, even sub-region classification was possible with high precision. From the level maps of the CPANN model, some of the most important elements for classification were identified. Copyright © 2013 Elsevier Ltd. All rights reserved.
Paudel, M R; Mackenzie, M; Fallone, B G; Rathee, S
2013-08-01
To evaluate the metal artifacts in kilovoltage computed tomography (kVCT) images that are corrected using a normalized metal artifact reduction (NMAR) method with megavoltage CT (MVCT) prior images. Tissue characterization phantoms containing bilateral steel inserts are used in all experiments. Two MVCT images, one without any metal artifact corrections and the other corrected using a modified iterative maximum likelihood polychromatic algorithm for CT (IMPACT) are translated to pseudo-kVCT images. These are then used as prior images without tissue classification in an NMAR technique for correcting the experimental kVCT image. The IMPACT method in MVCT included an additional model for the pair∕triplet production process and the energy dependent response of the MVCT detectors. An experimental kVCT image, without the metal inserts and reconstructed using the filtered back projection (FBP) method, is artificially patched with the known steel inserts to get a reference image. The regular NMAR image containing the steel inserts that uses tissue classified kVCT prior and the NMAR images reconstructed using MVCT priors are compared with the reference image for metal artifact reduction. The Eclipse treatment planning system is used to calculate radiotherapy dose distributions on the corrected images and on the reference image using the Anisotropic Analytical Algorithm with 6 MV parallel opposed 5×10 cm2 fields passing through the bilateral steel inserts, and the results are compared. Gafchromic film is used to measure the actual dose delivered in a plane perpendicular to the beams at the isocenter. The streaking and shading in the NMAR image using tissue classifications are significantly reduced. However, the structures, including metal, are deformed. Some uniform regions appear to have eroded from one side. There is a large variation of attenuation values inside the metal inserts. Similar results are seen in commercially corrected image. 
Use of MVCT prior images without tissue classification in NMAR significantly reduces these problems. The radiation dose calculated on the reference image is close to the dose measured using the film. Compared to the reference image, the calculated dose difference in the conventional NMAR image, the corrected image using the uncorrected MVCT image as prior, and the corrected image using the IMPACT-corrected MVCT image as prior is ∼15.5%, ∼5%, and ∼2.7%, respectively, at the isocenter. The deformation and erosion of structures present in regular NMAR-corrected images can be largely reduced by using MVCT priors without tissue segmentation. Because the attenuation value of the metal is incorrect, large dose differences relative to the true value can result when the conventional NMAR image is used. This difference can be significantly reduced if MVCT images are used as priors. Reduced tissue deformation, better tissue visualization, and correct information about the electron density of the tissues and metals in the artifact-corrected images could help delineate the structures better, as well as calculate radiation dose more correctly, thus enhancing the quality of radiotherapy treatment planning.
A machine-learned computational functional genomics-based approach to drug classification.
Lötsch, Jörn; Ultsch, Alfred
2016-12-01
The public accessibility of "big data" about the molecular targets of drugs and the biological functions of genes allows novel data science-based approaches to pharmacology that link drugs directly with their effects on pathophysiologic processes. This provides a phenotypic path to drug discovery and repurposing. This paper compares the performance of a functional genomics-based criterion to the traditional drug target-based classification. Knowledge discovery in the DrugBank and Gene Ontology databases allowed the construction of a "drug target versus biological process" matrix as a combination of "drug versus genes" and "genes versus biological processes" matrices. As a canonical example, such matrices were constructed for classical analgesic drugs. These matrices were projected onto a toroid grid of 50 × 82 artificial neurons using a self-organizing map (SOM). The distance and cluster structure of the high-dimensional feature space of the matrices was visualized on top of this SOM using a U-matrix. The cluster structure emerging on the U-matrix provided a correct classification of the analgesics into two main classes of opioid and non-opioid analgesics. The classification was flawless with both the functional genomics and the traditional target-based criterion. The functional genomics approach inherently included the drugs' modulatory effects on biological processes. The main pharmacological actions known from pharmacological science were captured, e.g., actions on lipid signaling for non-opioid analgesics, which comprised many NSAIDs, and actions on neuronal signal transmission for opioid analgesics. Using machine-learned techniques for computational drug classification in a comparative assessment, a functional genomics-based criterion was found to be similarly suitable for drug classification as the traditional target-based criterion.
This supports the utility of functional genomics-based approaches to computational systems pharmacology for drug discovery and repurposing.
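Composing the "drug versus biological process" matrix from "drug versus genes" and "genes versus biological processes" is a set-valued boolean matrix product, which can be sketched directly. The drug-gene targets below are real (morphine targets OPRM1; ibuprofen targets PTGS1/PTGS2), and the process labels follow the abstract's own examples; a real pipeline would pull both mappings from DrugBank and Gene Ontology.

```python
def drug_process_matrix(drug_genes, gene_processes):
    # boolean matrix product over sets: a drug touches a process iff
    # at least one of its target genes is annotated with that process
    out = {}
    for drug, genes in drug_genes.items():
        processes = set()
        for gene in genes:
            processes |= gene_processes.get(gene, set())
        out[drug] = processes
    return out

drug_genes = {"morphine": {"OPRM1"}, "ibuprofen": {"PTGS1", "PTGS2"}}
gene_processes = {
    "OPRM1": {"neuronal signal transmission"},
    "PTGS1": {"lipid signaling"},
    "PTGS2": {"lipid signaling"},
}
m = drug_process_matrix(drug_genes, gene_processes)
print(m["ibuprofen"])
```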
Stumpe, B; Engel, T; Steinweg, B; Marschner, B
2012-04-03
In the past, different slag materials were often used for landscaping and construction purposes or simply dumped. Nowadays German environmental laws strictly control the use of slags, but a remaining 35% is still dumped in landfills without control. Since some slags have high heavy metal contents, and different slag types have characteristic chemical and physical properties that influence the risk potential and other characteristics of the deposits, an identification of the slag types is needed. We developed an FT-IR-based statistical method to identify different slag classes. Slag samples were collected at different sites throughout various cities within the industrial Ruhr area. Spectra of 35 samples from four different slag classes, ladle furnace (LF), blast furnace (BF), oxygen furnace steel (OF), and zinc furnace slags (ZF), were then determined in the mid-infrared region (4000-400 cm(-1)). The spectral data sets were subjected to statistical classification methods to separate the spectral data of the different slag classes. Principal component analysis (PCA) models for each slag class were developed and further used for soft independent modeling of class analogy (SIMCA). Precise classification of slag samples into the four slag classes was achieved using two different SIMCA models stepwise. First, SIMCA 1 was used for classification of ZF as well as OF slags over the total spectral range. If no correct classification was found, the spectrum was analyzed with SIMCA 2 at reduced wavenumbers for the classification of LF and BF spectra. As a result, we provide a time- and cost-efficient method based on FT-IR spectroscopy for processing and identifying large numbers of environmental slag samples.
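The stepwise scheme above can be sketched in miniature: stage 1 tries to accept a spectrum into the ZF or OF class model on the full range; if neither accepts it, stage 2 decides LF vs BF on a reduced wavenumber range. Real SIMCA builds a PCA model per class and tests the residual against a statistical limit; here each class model is just a mean spectrum with an acceptance threshold on the RMS residual, and the toy spectra and thresholds are invented.

```python
def rms_residual(spectrum, model):
    # distance of a spectrum from a class model (here: its mean spectrum)
    return (sum((a - b) ** 2 for a, b in zip(spectrum, model))
            / len(spectrum)) ** 0.5

def simca_stage(spectrum, class_models, threshold):
    # assign to the closest class, but only if its model accepts the spectrum
    best = min(class_models, key=lambda c: rms_residual(spectrum, class_models[c]))
    return best if rms_residual(spectrum, class_models[best]) <= threshold else None

def classify_slag(full_spectrum, reduced_spectrum, stage1, stage2, t1=0.2, t2=0.2):
    label = simca_stage(full_spectrum, stage1, t1)     # SIMCA 1: ZF vs OF
    return label or simca_stage(reduced_spectrum, stage2, t2)  # SIMCA 2: LF vs BF

stage1 = {"ZF": [0.9, 0.1, 0.1, 0.8], "OF": [0.1, 0.9, 0.8, 0.1]}  # full range
stage2 = {"LF": [0.5, 0.5], "BF": [0.2, 0.8]}                      # reduced range
print(classify_slag([0.4, 0.4, 0.4, 0.4], [0.22, 0.78], stage1, stage2))
```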
ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.
Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol
1984-01-01
Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classifications into one comprehensive data set.
NASA Astrophysics Data System (ADS)
Chen, B.; Chehdi, K.; De Oliveria, E.; Cariou, C.; Charbonnier, B.
2015-10-01
In this paper a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground truth data constitute serious handicaps, especially over the large areas that airborne or satellite images can cover. The developed classification approach allows i) a successive partitioning of data into several levels or partitions in which the main classes are first identified, ii) an automatic estimation of the number of classes at each level without any end-user help, iii) a nonsystematic subdivision of all classes of a partition Pj to form a partition Pj+1, and iv) a stable partitioning result of the same data set from one run of the method to another. The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised. It estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.
Tone classification of syllable-segmented Thai speech based on multilayer perceptron
NASA Astrophysics Data System (ADS)
Satravaha, Nuttavudh; Klinkhachorn, Powsiri; Lass, Norman
2002-05-01
Thai is a monosyllabic tonal language that uses tone to convey lexical information about the meaning of a syllable. Thus to completely recognize a spoken Thai syllable, a speech recognition system not only has to recognize a base syllable but also must correctly identify a tone. Hence, tone classification of Thai speech is an essential part of a Thai speech recognition system. Thai has five distinctive tones (``mid,'' ``low,'' ``falling,'' ``high,'' and ``rising'') and each tone is represented by a single fundamental frequency (F0) pattern. However, several factors, including tonal coarticulation, stress, intonation, and speaker variability, affect the F0 pattern of a syllable in continuous Thai speech. In this study, an efficient method for tone classification of syllable-segmented Thai speech, which incorporates the effects of tonal coarticulation, stress, and intonation, as well as a method to perform automatic syllable segmentation, were developed. Acoustic parameters were used as the main discriminating parameters. The F0 contour of a segmented syllable was normalized by using a z-score transformation before being presented to a tone classifier. The proposed system was evaluated on 920 test utterances spoken by 8 speakers. A recognition rate of 91.36% was achieved by the proposed system.
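The z-score normalization of the F0 contour mentioned above is straightforward to sketch; the contour values below are invented, and the abstract does not specify whether the population or sample standard deviation was used (population is assumed here).

```python
# Z-score normalization of a fundamental frequency (F0) contour before
# presenting it to a tone classifier: subtract the contour mean and
# divide by its standard deviation.
from math import sqrt

def z_score(contour):
    mean = sum(contour) / len(contour)
    var = sum((f - mean) ** 2 for f in contour) / len(contour)
    return [(f - mean) / sqrt(var) for f in contour]

f0 = [120.0, 125.0, 135.0, 150.0, 170.0]  # a rising-tone-like toy contour
norm = z_score(f0)
```

After normalization the contour has zero mean and unit variance, which removes per-speaker and per-utterance F0 offsets and scale.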
Robust through-the-wall radar image classification using a target-model alignment procedure.
Smith, Graeme E; Mobasseri, Bijan G
2012-02-01
A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range-resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows that the alignment process introduces a normalized mean squared error (NMSE) ≤ 9%. The HRRPs extracted from the aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using real TWRIs of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE
Semi-supervised anomaly detection - towards model-independent searches of new physics
NASA Astrophysics Data System (ADS)
Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu
2012-06-01
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
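The core idea, modeling the background alone and flagging observations the background model cannot explain, can be caricatured in one dimension. The paper fits multivariate Gaussian mixtures; the sketch below uses a single Gaussian fitted to synthetic background-only data and a simple low-likelihood (n-sigma) cut, with all data invented.

```python
# One-dimensional caricature of semi-supervised anomaly detection:
# fit a Gaussian to background-only data, then flag observations that
# lie far outside the fitted background model.
from math import sqrt
import random

random.seed(0)
background = [random.gauss(0.0, 1.0) for _ in range(5000)]

mu = sum(background) / len(background)
sigma = sqrt(sum((x - mu) ** 2 for x in background) / len(background))

def is_anomalous(x, n_sigma=4.0):
    # Low likelihood under the background model = far from its mean.
    return abs(x - mu) > n_sigma * sigma

observations = background[:100] + [9.5, 10.2]  # two injected "signal" events
anomalies = [x for x in observations if is_anomalous(x)]
```

No signal model is required at any point, which is what makes the approach robust to signal-MC misspecification.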
Classification of surface types using SIR-C/X-SAR, Mount Everest Area, Tibet
Albright, Thomas P.; Painter, Thomas H.; Roberts, Dar A.; Shi, Jiancheng; Dozier, Jeff; Fielding, Eric
1998-01-01
Imaging radar is a promising tool for mapping snow and ice cover in alpine regions. It combines a high-resolution, day or night, all-weather imaging capability with sensitivity to hydrologic and climatic snow and ice parameters. We use the spaceborne imaging radar-C/X-band synthetic aperture radar (SIR-C/X-SAR) to map snow and glacial ice on the rugged north slope of Mount Everest. From interferometrically derived digital elevation data, we compute the terrain calibration factor and cosine of the local illumination angle. We then process and terrain-correct radar data sets acquired on April 16, 1994. In addition to the spectral data, we include surface slope to improve discrimination among several surface types. These data sets are then used in a decision tree to generate an image classification. This method is successful in identifying and mapping scree/talus, dry snow, dry snow-covered glacier, wet snow-covered glacier, and rock-covered glacier, as corroborated by comparison with existing surface cover maps and other ancillary information. Application of the classification scheme to data acquired on October 7 of the same year yields accurate results for most surface types but underreports the extent of dry snow cover.
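A decision tree over backscatter and slope, as used above, amounts to a cascade of threshold tests. The sketch below echoes the surface classes and input features of the abstract, but every threshold and feature value is invented for illustration; the actual tree was derived from the SIR-C/X-SAR data.

```python
# Hedged sketch of a decision-tree surface classification from radar
# backscatter (in dB) and terrain slope. Thresholds are hypothetical.

def classify_surface(c_band_db, x_band_db, slope_deg):
    if slope_deg > 35:
        return "scree/talus"          # steep unvegetated slopes
    if c_band_db < -20:
        # Very low backscatter: dry snow, on or off glacier ice
        return "dry snow" if x_band_db < -15 else "dry snow-covered glacier"
    if c_band_db < -8:
        return "wet snow-covered glacier"
    return "rock-covered glacier"

print(classify_surface(-25.0, -18.0, 10.0))
```

Adding slope as an input is what lets the tree separate scree/talus from radiometrically similar glacier surfaces.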
Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.
Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin
2017-04-01
As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only cure to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose Chronic Kidney Disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine feature selection method has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease than the other selected methods.
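A filter-style feature selection step can be sketched as ranking features by their absolute correlation with the class label and keeping the top k before training the classifier (an SVM in the study). This is a generic correlation filter, not the specific evaluators named above, and the tiny dataset is invented.

```python
# Filter feature selection: rank features by |Pearson correlation| with
# the class label, keep the top k, then hand the reduced data to a
# classifier. Feature names and values are hypothetical.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_top_k(features, labels, k):
    scores = {name: abs(pearson(col, labels))
              for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

features = {
    "serum_creatinine": [1.2, 4.8, 0.9, 5.3],  # tracks the label closely
    "random_noise":     [0.4, 0.2, 0.5, 0.3],
}
labels = [0, 1, 0, 1]  # 1 = chronic kidney disease
print(select_top_k(features, labels, k=1))
```

Unlike a wrapper approach, this filter never trains the classifier during selection, which is what makes it cheap.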
Motamedi, Mohammad; Müller, Rolf
2014-06-01
The biosonar beampatterns found across different bat species are highly diverse in terms of global and local shape properties such as overall beamwidth or the presence, location, and shape of multiple lobes. It may be hypothesized that some of this variability reflects evolutionary adaptation. To investigate this hypothesis, the present work has searched for patterns in the variability across a set of 283 numerical predictions of emission and reception beampatterns from 88 bat species belonging to four major families (Rhinolophidae, Hipposideridae, Phyllostomidae, Vespertilionidae). This was done using a lossy compression of the beampatterns that utilized real spherical harmonics as basis functions. The resulting vector representations showed differences between the families as well as between emission and reception. These differences existed in the means of the power spectra as well as in their distribution. The distributions were characterized in a low dimensional space found through principal component analysis. The distinctiveness of the beampatterns across the groups was corroborated by pairwise classification experiments that yielded correct classification rates between ~85 and ~98%. Beamwidth was a major factor but not the sole distinguishing feature in these classification experiments. These differences could be seen as an indication of adaptive trends at the beampattern level.
Banzato, T; Cherubini, G B; Atzori, M; Zotti, A
2018-05-01
An established deep neural network (DNN) based on transfer learning and a newly designed DNN were tested to predict the grade of meningiomas from magnetic resonance (MR) images in dogs and to determine the classification accuracy achievable using pre- and post-contrast T1-weighted (T1W) and T2-weighted (T2W) MR images. The images were randomly assigned to a training set, a validation set and a test set, comprising 60%, 10% and 30% of the images, respectively. The combination of DNN and MR sequence displaying the highest discriminating accuracy was used to develop an image classifier to predict the grading of new cases. The algorithm based on transfer learning using the established DNN did not provide satisfactory results, whereas the newly designed DNN had high classification accuracy. On the basis of classification accuracy, an image classifier built on the newly designed DNN using post-contrast T1W images was developed. This image classifier correctly predicted the grading of 8 out of 10 images not included in the data set. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Predict or classify: The deceptive role of time-locking in brain signal classification
NASA Astrophysics Data System (ADS)
Rusconi, Marco; Valleriani, Angelo
2016-06-01
Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
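The time-locking artifact is easy to reproduce with a toy model in the spirit of the one described above (the paper's exact model parameters are not given here, so this is only an illustration): each trial is an unbiased random walk that ends when it first reaches +T or -T, and the "decision" is the boundary reached. The increments carry no predictive information, yet classifying each trial by the sign of the signal several steps before the time-locked end point succeeds far above chance, simply because trials aligned to the threshold crossing are already drifting toward the boundary.

```python
# Toy demonstration that time-locking to the final event creates
# above-chance "classification" of an unpredictable decision.
import random

random.seed(1)
T = 10  # decision threshold

def trial():
    x, path = 0, [0]
    while abs(x) < T:
        x += random.choice((-1, 1))  # unbiased, choice-unpredictive steps
        path.append(x)
    return path, 1 if x > 0 else -1  # decision = boundary reached

correct = total = 0
for _ in range(2000):
    path, decision = trial()
    if len(path) > 16:
        probe = path[-16]  # signal 15 steps BEFORE the time-locked event
        if probe != 0:
            correct += (1 if probe > 0 else -1) == decision
            total += 1

accuracy = correct / total  # far above the 0.5 chance level
```

The accuracy is high not because the signal predicts the decision, but because conditioning on the crossing time selects trajectories that were already near the boundary.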
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it with respect to urban and agricultural features. Three classification algorithms were tested in order to better recognize vegetation, and compared to the NDVI index; however, all these methods are affected by the presence of shadows on the images. The literature presents several algorithms to detect and remove shadows in the scene, most of them based on RGB to HSI transformations. In this work some of them have been implemented and compared with one based on RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problem of unreliable motion vectors, which cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct such unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no reliable motion is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
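A simplified sketch of neighbor-based motion vector correction: a vector flagged as unreliable is replaced by the component-wise median of the surrounding vectors, which suppresses outliers. The paper's scheme is richer (it orders corrections by reliability and treats occlusions separately); the vectors below are invented.

```python
# Replace an unreliable motion vector by the component-wise median of
# its neighboring vectors. The median is robust to an outlier neighbor.
from statistics import median

def correct_mv(unreliable_mv, neighbor_mvs):
    xs = [mv[0] for mv in neighbor_mvs]
    ys = [mv[1] for mv in neighbor_mvs]
    return (median(xs), median(ys))

neighbors = [(2, 1), (3, 1), (2, 2), (30, -12)]  # one outlier neighbor
print(correct_mv((28, -10), neighbors))
```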
Moran, Lara; Andres, Sonia; Allen, Paul; Moloney, Aidan P
2018-08-01
Visible-near infrared spectroscopy (Vis-NIRS) has been suggested to have potential for authentication of food products. The aim of the present preliminary study was to assess whether this technology can be used to authenticate the ageing time (3, 7, 14 and 21 days post mortem) of beef steaks from three different muscles (M. Longissimus thoracis, M. Gluteus medius and M. Semitendinosus). Various mathematical pre-treatments were applied to the spectra to correct scattering and overlapping effects, and then partial least squares-discriminant analysis (PLS-DA) procedures were applied. The best models were specific for each muscle, and the ability to predict ageing time was validated using full (leave-one-out) cross-validation, whereas authentication performance was evaluated using the parameters of sensitivity, specificity and overall correct classification. The results indicate that overall correct classification ranging from 94.2 to 100% was achieved, depending on the muscle. In conclusion, Vis-NIRS technology seems a valid tool for the authentication of the ageing time of beef steaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Weinstein, A; Bordwell, B; Stone, B; Tibbetts, C; Rothfield, N F
1983-02-01
The sensitivity and specificity of the presence of antibodies to native DNA and low serum C3 levels were investigated in a prospective study in 98 patients with systemic lupus erythematosus who were followed for a mean of 38.4 months. Hospitalized patients, patients with other connective tissue diseases, and subjects without any disease served as the control group. Seventy-two percent of the patients with systemic lupus erythematosus had a high DNA-binding value (more than 33 percent) initially, and an additional 20 percent had a high DNA-binding value later in the course of the illness. Similarly, C3 levels were low (less than 81 mg/100 ml) in 38 percent of the patients with systemic lupus erythematosus initially and in 66 percent of the patients at any time during the study. High DNA-binding and low C3 levels each showed extremely high predictive value (94 percent) for the diagnosis of systemic lupus erythematosus when applied in a patient population in which that diagnosis was considered. The presence of both abnormalities was 100 percent correct in predicting the diagnosis of systemic lupus erythematosus. Both tests should be included in future criteria for the diagnosis and classification of systemic lupus erythematosus.
Coulomb-free and Coulomb-distorted recolliding quantum orbits in photoelectron holography
NASA Astrophysics Data System (ADS)
Maxwell, A. S.; Figueira de Morisson Faria, C.
2018-06-01
We perform a detailed analysis of the different types of orbits in the Coulomb quantum orbit strong-field approximation (CQSFA), ranging from direct orbits to those undergoing hard collisions. We show that some of them exhibit clear counterparts in the standard formulations of the strong-field approximation for direct and rescattered above-threshold ionization, and that the standard orbit classification commonly used in Coulomb-corrected models is over-simplified. We identify several types of rescattered orbits, such as those responsible for the low-energy structures reported in the literature, and determine the momentum regions in which they occur. We also find formerly overlooked interference patterns caused by backscattered Coulomb-corrected orbits and assess their effect on photoelectron angular distributions. These orbits improve the agreement of photoelectron angular distributions computed with the CQSFA with the outcome of ab initio methods for high-energy photoelectrons perpendicular to the field polarization axis.
Using the regulation of accuracy to study performance when the correct answer is not known.
Luna, Karlos; Martín-Luengo, Beatriz
2017-08-01
We examined memory performance in multiple-choice questions when correct answers were not always present. How do participants answer when they are aware that the correct alternative may not be present? To answer this question we allowed participants to decide on the number of alternatives in their final answer (the plurality option), and whether they wanted to report or withhold their answer (report option). We also studied the memory benefits when both the plurality and the report options were available. In two experiments participants watched a crime and then answered questions with five alternatives. Half of the questions were presented with the correct alternative and half were not. Participants selected one alternative and rated confidence, then selected three alternatives and again rated confidence, and finally indicated whether they preferred the answer with one or with three alternatives (plurality option). Lastly, they decided whether to report or withhold the answer (report option). Results showed that participants' confidence in their selections was higher, that they chose more single answers, and that they preferred to report more often when the correct alternative was presented. We also attempted to classify a posteriori questions as either presented with or without the correct alternative from participants' selection. Classification was better than chance, and encouraging, but the forensic application of the classification technique is still limited since there was a large percentage of responses that were incorrectly classified. Our results also showed that the memory benefits of both plurality and report options overlap. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven
2008-03-01
Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images, in an anthropomorphic phantom. The scheme accounts for partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogenous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer drawn ROIs, scaled tumor-background ratios (TBRs) more accurately represented actual TBRs than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
NASA Astrophysics Data System (ADS)
Callegari, Mattia; Marin, Carlo; Notarnicola, Claudia; Carturan, Luca; Covi, Federico; Galos, Stephan; Seppi, Roberto
2016-10-01
In mountain regions and their forelands, glaciers are a key source of melt water during the middle and late ablation season, when most of the winter snow has already melted. Furthermore, alpine glaciers are recognized as sensitive indicators of climatic fluctuations. Monitoring glacier extent changes and glacier surface characteristics (i.e. snow, firn and bare ice coverage) is therefore important for both hydrological applications and climate change studies. Satellite remote sensing data have been widely employed for glacier surface classification. Many approaches exploit optical data, such as from Landsat. Despite the intuitive visual interpretation of optical images and the demonstrated capability to discriminate glacial surfaces thanks to the combination of different bands, one of the main disadvantages of the available high-resolution optical sensors is their dependence on cloud conditions and low revisit frequency. Therefore, operational monitoring strategies relying only on optical data have serious limitations. Since SAR data are insensitive to clouds, they are potentially a valid alternative to optical data for glacier monitoring. Compared to past SAR missions, the new Sentinel-1 mission provides a much higher revisit frequency (two acquisitions every 12 days) over the entire European Alps, and this number will be doubled once Sentinel-1B is in orbit (April 2016). In this work we present a method for glacier surface classification by exploiting dual-polarimetric Sentinel-1 data. The method consists of a supervised approach based on a Support Vector Machine (SVM). In addition to the VV and VH signals, we tested the contribution of the local incidence angle, extracted from a digital elevation model and orbital information, as an auxiliary input feature in order to account for topographic effects. By exploiting impossible temporal transitions between different classes (e.g. if at a given date a pixel is classified as rock, it cannot be classified as glacier ice at a following date), we propose an innovative post-classification correction based on SVM classification probabilities. Optical data, i.e. Landsat-8 and Sentinel-2, have been employed, when available, for training sample collection. Detailed field observations from two glaciers located in the Ortles-Cevedale massif (Eastern Italian Alps) have been employed for validation.
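The post-classification correction can be sketched as follows: when the most probable SVM class at a later date would imply an impossible temporal transition, fall back to the most probable admissible class. The class names, probabilities, and transition table below are invented for illustration.

```python
# Post-classification correction using impossible temporal transitions
# and per-class classification probabilities.

IMPOSSIBLE = {("rock", "ice")}  # (class at t1, class at t2) pairs ruled out

def correct_label(label_t1, probs_t2):
    # probs_t2: class -> SVM classification probability at the later date.
    # Pick the most probable class whose transition from label_t1 is allowed.
    for label, _ in sorted(probs_t2.items(), key=lambda kv: -kv[1]):
        if (label_t1, label) not in IMPOSSIBLE:
            return label
    return label_t1  # nothing admissible: keep the earlier label

probs = {"ice": 0.48, "rock": 0.40, "snow": 0.12}
print(correct_label("rock", probs))  # "ice" is ruled out, so "rock" wins
```

The classification probabilities are what make the fallback principled: the correction does not simply veto a label, it picks the next most likely admissible one.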
Stefano, A; Gallivanone, F; Messa, C; Gilardi, M C; Gastiglioni, I
2014-12-01
The aim of this work is to evaluate the metabolic impact of Partial Volume Correction (PVC) on the measurement of the Standard Uptake Value (SUV) in [18F]FDG PET-CT oncological studies for treatment monitoring purposes. Twenty-nine breast cancer patients with bone lesions (42 lesions in total) underwent [18F]FDG PET-CT studies after surgical resection of the breast cancer primitives, before (PET-I) and during (PET-II) chemotherapy and hormone treatment. PVC of the bone lesion uptake was performed on the two [18F]FDG PET-CT studies, using a method based on Recovery Coefficients (RC) and on an automatic measurement of the lesion metabolic volume. Body-weight average SUV was calculated for each lesion, with and without PVC. The accuracy, reproducibility, clinical feasibility and metabolic impact on treatment response of the considered PVC method were evaluated. The PVC method was found clinically feasible in bone lesions, with an accuracy of 93% for lesion sphere-equivalent diameters >1 cm. Applying PVC, average SUV values increased from 7% up to 154%, considering both PET-I and PET-II studies, demonstrating the need for the correction. As the main finding, PVC modified the therapy response classification in 6 cases according to the EORTC 1999 classification and in 5 cases according to the PERCIST 1.0 classification. PVC has an important metabolic impact on the assessment of tumor response to treatment in [18F]FDG PET-CT oncological studies.
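Recovery-coefficient-based PVC amounts to dividing the measured SUV by a recovery coefficient looked up from the lesion's sphere-equivalent diameter. The RC table below is invented; in practice the coefficients come from phantom measurements on the specific scanner, and a real implementation would interpolate rather than take the nearest entry.

```python
# Minimal sketch of recovery-coefficient (RC) based partial volume
# correction: corrected SUV = measured SUV / RC(diameter).

# Hypothetical (diameter in mm, RC) pairs from a phantom calibration.
RC_TABLE = [(10, 0.55), (15, 0.70), (20, 0.82), (30, 0.92), (40, 0.97)]

def recovery_coefficient(diameter_mm):
    # Nearest tabulated diameter; interpolation would be more accurate.
    return min(RC_TABLE, key=lambda t: abs(t[0] - diameter_mm))[1]

def corrected_suv(measured_suv, diameter_mm):
    # Small lesions have RC < 1, so the correction raises the SUV.
    return measured_suv / recovery_coefficient(diameter_mm)

print(round(corrected_suv(4.1, 15), 2))  # 4.1 / 0.70
```

Because RC < 1 for small lesions, the correction can only increase the SUV, consistent with the 7-154% increases reported above.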
Fractures of the cervical spine
Marcon, Raphael Martus; Cristante, Alexandre Fogaça; Teixeira, William Jacobsen; Narasaki, Douglas Kenji; Oliveira, Reginaldo Perilo; de Barros Filho, Tarcísio Eloy Pessoa
2013-01-01
OBJECTIVES: The aim of this study was to review the literature on cervical spine fractures. METHODS: The literature on the diagnosis, classification, and treatment of lower and upper cervical fractures and dislocations was reviewed. RESULTS: Fractures of the cervical spine may be present in polytraumatized patients and should be suspected in patients complaining of neck pain. These fractures are more common in men approximately 30 years of age and are most often caused by automobile accidents. The cervical spine is divided into the upper cervical spine (occiput-C2) and the lower cervical spine (C3-C7), according to anatomical differences. Fractures in the upper cervical spine include fractures of the occipital condyle and the atlas, atlanto-axial dislocations, fractures of the odontoid process, and hangman's fractures in the C2 segment. These fractures are characterized based on specific classifications. In the lower cervical spine, fractures follow the same pattern as in other segments of the spine; currently, the most widely used classification is the SLIC (Subaxial Injury Classification), which predicts the prognosis of an injury based on morphology, the integrity of the disc-ligamentous complex, and the patient's neurological status. It is important to correctly classify the fracture to ensure appropriate treatment. Nerve or spinal cord injuries, pseudarthrosis or malunion, and postoperative infection are the main complications of cervical spine fractures. CONCLUSIONS: Fractures of the cervical spine are potentially serious and devastating if not properly treated. Achieving the correct diagnosis and classification of a lesion is the first step toward identifying the most appropriate treatment, which can be either surgical or conservative. PMID:24270959
Hyperspectral image segmentation using a cooperative nonparametric approach
NASA Astrophysics Data System (ADS)
Taher, Akar; Chehdi, Kacem; Cariou, Claude
2013-10-01
In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.
[Differentiation between moisture lesions and pressure ulcers using photographs in a critical area].
Valls-Matarín, Josefa; Del Cotillo-Fuente, Mercedes; Pujol-Vila, María; Ribal-Prior, Rosa; Sandalinas-Mulero, Inmaculada
2016-01-01
To identify nurses' difficulties in differentiating between moisture lesions and pressure ulcers, to assess correct classification of pressure ulcers according to the system of the Grupo Nacional para el Estudio y Asesoramiento de Úlceras por Presión y Heridas Crónicas (GNEAUPP), and to determine the degree of agreement in the correct assessment by type and category of injury. Cross-sectional study in a critical care area during 2014. All nurses who agreed to participate were included. They completed a questionnaire with 14 expert-validated photographs of moisture lesions or pressure ulcers in the sacral area and buttocks, with six possible answers: pressure ulcer category I, II, III or IV, moisture lesion, or unknown. Demographic data and knowledge of the GNEAUPP pressure ulcer classification system were collected. A total of 98% of the population participated (n=56); 98.2% knew the GNEAUPP classification system. Of the moisture lesions, 35.2% were considered pressure ulcers, most of them category II (18.9%). Of the pressure ulcer photographs, 14.8% were identified as moisture lesions and 16.1% were classified in another category. Agreement between nurses yielded a global kappa index of 0.38 (95% CI: 0.29-0.57). There are difficulties in differentiating between pressure ulcers and moisture lesions, especially in the initial categories. Nurses perceive that they know the pressure ulcer classification system, but they do not classify the lesions correctly. The degree of agreement in the diagnosis of skin lesions was low. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Paradella, W. R.; Vitorello, I.
1982-01-01
Several aspects of computer-assisted analysis techniques for image enhancement and thematic classification, by which LANDSAT MSS imagery may be treated quantitatively, are explained. In geological applications, computer processing of digital data arguably allows the fullest use of LANDSAT data, by displaying enhanced and corrected data for visual analysis and by evaluating each pixel's spectral information and assigning it to a given class.
Effect of the atmosphere on the classification of LANDSAT data. [Identifying sugar cane in Brazil]
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Morimoto, T.; Kumar, R.; Molion, L. C. B.
1979-01-01
The author has identified the following significant results. In conjunction with Turner's model for the correction of satellite data for atmospheric interference, the LOWTRAN-3 computer program was used to calculate the atmospheric interference. Use of the program improved the contrast between different natural targets in the MSS LANDSAT data of Brasilia, Brazil. The classification accuracy of sugar cane was improved by about 9% in the multispectral data of Ribeirao Preto, Sao Paulo.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
Reproducibility of neuroimaging analyses across operating systems
Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.
2015-01-01
Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
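The Dice coefficients reported above compare subcortical label volumes produced on different operating systems; the measure itself reduces to a small function (the example arrays below are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b, label):
    """Dice overlap between two labelled segmentations for one tissue label:
    2|A ∩ B| / (|A| + |B|), where A and B are the voxel sets with that label."""
    a = np.asarray(a) == label
    b = np.asarray(b) == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Running the same pipeline on two platforms and computing `dice` per structure is how values such as the 0.59 worst case quoted above are obtained.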
[Classification in medicine. An introductory reflection on its aim and object].
Giere, W
2007-07-01
Human beings are born with the ability to recognize Gestalt and to classify. However, all classifications depend on their circumstances and intentions. There is no ultimate classification, and there is no single correct classification in medicine either. Examples of classifications of diagnoses, symptoms and procedures are discussed. The path to gaining knowledge and the basic difference between collecting data (patient file) and sorting data (register) are illustrated using the BAIK information model. The model also shows how the doctor can profit from an active electronic patient file that automatically offers other information relevant to the current decision and saves time. "Without classification no new knowledge, no new knowledge through classification": this paradox is eventually resolved, as a change of paradigms requires overcoming the currently accepted classification system in medicine as well. Finally, more concrete recommendations are given on how doctors can be freed from the burden of classification, and how the whole health system can obtain much more valid data through the coordinated use of IT, without limiting doctors' freedom and creativity, all while saving money.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform, distance-based weighting scheme that allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted partial least squares discriminant analysis (LW-PLS-DA) algorithm was tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Kun-Ching
2015-01-01
The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotion recognition in speech. PMID:25594590
Classification of oxidative stress based on its intensity
Lushchak, Volodymyr I.
2014-01-01
In living organisms, production of reactive oxygen species (ROS) is counterbalanced by their elimination and/or the prevention of their formation, which in concert can typically maintain a steady-state (stationary) ROS level. However, this balance may be disturbed, leading to elevated ROS levels called oxidative stress. To the best of our knowledge, there is no broadly accepted system for classifying oxidative stress by its intensity, so the system proposed here may be helpful for the interpretation of experimental data. The oxidative stress field is a hot topic in biology and, to date, many details related to ROS-induced damage to cellular components, ROS-based signaling, and cellular responses and adaptation have been disclosed. However, researchers commonly experience substantial difficulties in correctly interpreting the development of oxidative stress, especially when there is a need to characterize its intensity. Careful selection of specific biomarkers (ROS-modified targets) and a classification system may be helpful here. A classification of oxidative stress based on its intensity is therefore proposed. According to this classification, there are four zones of function in the relationship between "dose/concentration of inducer" and the measured "endpoint": I – basal oxidative stress (BOS); II – low intensity oxidative stress (LOS); III – intermediate intensity oxidative stress (IOS); IV – high intensity oxidative stress (HOS). The proposed classification will be helpful for describing and systematizing experimental data in which oxidative stress is induced, but further studies will be needed to discriminate clearly between stresses of different intensities. PMID:26417312
[High complication rate after surgical treatment of ankle fractures].
Bjørslev, Naja; Ebskov, Lars; Lind, Marianne; Mersø, Camilla
2014-08-04
The purpose of this study was to determine the quality and re-operation rate of the surgical treatment of ankle fractures at a large university hospital. X-rays and patient records of 137 patients surgically treated for ankle fractures were analyzed for: 1) correct classification according to Lauge-Hansen; 2) whether congruity of the ankle joint was achieved; 3) selection and placement of the hardware; and 4) the surgeon's level of education. In total, 32 of the 137 patients did not receive optimal treatment, and 11 were re-operated. There was no clear correlation between incorrect operation and the surgeon's level of education.
Granular support vector machines with association rules mining for protein homology prediction.
Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing
2005-01-01
Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such a complex classification problem is common in medical and biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts in making correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines principles from statistical learning theory and granular computing theory, and thus provides an interesting new mechanism for addressing complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVM) in some of these information granules on demand. A good granulation method to find suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. Granules induced by association rules with high enough confidence and significant support are left as they are, because of their high "purity" and their significant effect on simplifying the classification task. For every other granule, an SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM on the KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (the association rules should be selected carefully to avoid overfitting) and that GSVM-AR shows significant improvement compared to building one single SVM in the whole feature space.
Another advantage of GSVM-AR is that it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism for addressing complex classification problems.
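The divide-and-conquer idea behind GSVM-AR can be sketched as follows. The single rule, threshold and data here are hypothetical stand-ins for association rules mined from real training data: samples covered by a "pure" rule are labelled directly, and an SVM is trained only on the rest:

```python
import numpy as np
from sklearn.svm import SVC

class GSVMAR:
    """Toy GSVM-AR sketch: one pure granule is defined by a hypothetical
    rule (feature 0 > threshold => class 1) and labelled directly;
    an SVM is trained only on the remaining samples."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.svm = SVC(kernel="linear")

    def fit(self, X, y):
        rest = X[:, 0] <= self.threshold       # samples not covered by the rule
        self.svm.fit(X[rest], y[rest])
        return self

    def predict(self, X):
        out = np.where(X[:, 0] > self.threshold, 1, 0)  # rule granule => class 1
        rest = X[:, 0] <= self.threshold
        if rest.any():
            out[rest] = self.svm.predict(X[rest])       # SVM handles the rest
        return out
```

The SVM thus only has to separate the harder residual data, which is the simplification the abstract describes.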
Tsai, Yu Hsin; Stow, Douglas; Weeks, John
2013-01-01
The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings from QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Two software packages were evaluated: Feature Analyst, which is based on a spatial contextual classifier, and ENVI Feature Extraction, which uses a true object-based image analysis approach of image segmentation and segment classification. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results than the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R2 = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover, land use and demographic change. PMID:24415810
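The two object-based accuracy measures used above reduce to simple ratios over matched, reference and detected objects; a generic sketch (the counts in the usage example are illustrative, not the study's):

```python
def completeness_correctness(n_matched, n_reference, n_detected):
    """Object-based accuracy measures:
    completeness = matched / reference objects (1 - omission error),
    correctness  = matched / detected objects  (1 - commission error)."""
    return n_matched / n_reference, n_matched / n_detected

# e.g. 80 detected buildings matched against 100 reference buildings,
# with 90 buildings detected in total:
comp, corr = completeness_correctness(80, 100, 90)
```

Completeness penalizes missed reference buildings, while correctness penalizes spurious detections, so the pair captures both error directions of an object detector.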
Reservoir Identification: Parameter Characterization or Feature Classification
NASA Astrophysics Data System (ADS)
Cao, J.
2017-12-01
The ultimate goal of oil and gas exploration is to find oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on seismic profiles. Traditionally, a reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximations of the inversion model, and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas with few or no wells, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification issue. In the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks without industrial mining value. In addition to the traditional classification by physical parameters, the classification may be achieved using one or a few comprehensive features. By introducing the concept of the seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility of using deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that deep learning on seismic data can distinguish gas reservoirs from host rocks. The combination of seismic-print analysis and seismic deep learning is expected to be a more robust reservoir identification method. The work was supported by NSFC under grant No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601
U.S. Fish and Wildlife Service 1979 wetland classification: a review
Cowardin, L.M.; Golet, F.C.
1995-01-01
In 1979 the US Fish and Wildlife Service published and adopted a classification of wetlands and deepwater habitats of the United States. The system was designed for use in a national inventory of wetlands. It was intended to be ecologically based, to furnish the mapping units needed for the inventory, and to provide national consistency in terminology and definition. We review the performance of the classification after 13 years of use. The definition of wetland is based on national lists of hydric soils and plants that occur in wetlands. Our experience suggests that wetland classifications must facilitate mapping and inventory because these data gathering functions are essential to management and preservation of the wetland resource, but the definitions and taxa must have ecological basis. The most serious problem faced in construction of the classification was lack of data for many of the diverse wetland types. Review of the performance of the classification suggests that, for the most part, it was successful in accomplishing its objectives, but that problem areas should be corrected and modification could strengthen its utility. The classification, at least in concept, could be applied outside the United States. Experience gained in use of the classification can furnish guidance as to pitfalls to be avoided in the wetland classification process.
The neuropsychology of male adults with high-functioning autism or asperger syndrome.
Wilson, C Ellie; Happé, Francesca; Wheelwright, Sally J; Ecker, Christine; Lombardo, Michael V; Johnston, Patrick; Daly, Eileen; Murphy, Clodagh M; Spain, Debbie; Lai, Meng-Chuan; Chakrabarti, Bhismadev; Sauter, Disa A; Baron-Cohen, Simon; Murphy, Declan G M
2014-10-01
Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups; second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity; third, to establish whether cognitive skill level was correlated with degree of comorbid psychopathology; and fourth, to compare the cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA). After controlling for IQ, the ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (Ps < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple correlations, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between the AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Morimoto, T.
1980-01-01
The author has identified the following significant results. Multispectral scanner data for Brasilia were corrected for atmospheric interference using the LOWTRAN-3 computer program and the analytical solution of the radiative transfer equation. This improved the contrast between two natural targets, and the corrected images from two different dates were more similar than the original ones. Corrected MSS images for Ribeirao Preto gave a classification accuracy for sugar cane about 10% higher than that of the original images.
Trachsel, D S; Bitschnau, C; Waldern, N; Weishaupt, M A; Schwarzwald, C C
2010-11-01
Frequent supraventricular or ventricular arrhythmias during and after exercise are considered pathological in horses. The prevalence of arrhythmias seen in apparently healthy horses is still a matter of debate and may depend on breed, athletic condition and exercise intensity. The objective was to determine intra- and interobserver agreement for the detection of arrhythmias at rest, during and after exercise using a telemetric electrocardiography device. The electrocardiogram (ECG) recordings of 10 healthy Warmblood horses (5 of which had an intracardiac catheter in place) undergoing a standardised treadmill exercise test were analysed at rest (R), during warm-up (W), during exercise (E), and during the 0-5 min (PE(0-5)) and 6-45 min (PE(6-45)) recovery periods after exercise. The number and time of occurrence of physiological and pathological 'rhythm events' were recorded. Events were classified according to origin and mode of conduction. The agreement of 3 independent, blinded observers with different experience in ECG reading was estimated, considering both the time of occurrence and the classification of events. For correct timing and classification, intraobserver agreement for observer 1 was 97% (R), 100% (W), 20% (E), 82% (PE(0-5)) and 100% (PE(6-45)). Interobserver agreement between observers 1 and 2 and between observers 1 and 3, respectively, was 96 and 92.6% (R), 83 and 31% (W), 0 and 13% (E), 23 and 18% (PE(0-5)), and 67 and 55% (PE(6-45)). When including events with correct timing but disagreement in classification, the intraobserver agreement increased to 94% during PE(0-5), and the interobserver agreement reached 83 and 50% (W), 20 and 50% (E), 41 and 47% (PE(0-5)), and 83.5 and 65% (PE(6-45)). The interobserver agreement increased with observer experience. Intra- and interobserver agreement for recognition and classification of events was good at R, but poor during E and poor to moderate during the recovery periods.
These results highlight the limitations of stress ECG in horses and the need for high-quality recordings and adequate observer training. © 2010 EVJ Ltd.
Phenological features for winter rapeseed identification in Ukraine using satellite data
NASA Astrophysics Data System (ADS)
Kravchenko, Oleksiy
2014-05-01
Winter rapeseed is one of the major oilseed crops in Ukraine; it is characterized by high profitability and is often grown in violation of crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of crop acreage and rotation control. The crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. As a result, autumn biomass features can be used only locally (at the NUTS-3 level) and are not suitable for large-scale, country-wide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification, and we present a method for parameter estimation using time series of moderate resolution data. Rapeseed flowering can be observed as a bell-shaped peak in the red reflectance time series. However, the duration of the flowering period observable by satellite is only about two weeks, which is quite a short period given inevitable cloud coverage issues. Thus we need daily time series to resolve the flowering peak, which limits us to moderate resolution data. We used daily atmospherically corrected MODIS data from the Terra and Aqua satellites within the 90-160 DOY period to perform the feature calculations. An empirical BRDF correction is used to minimize angular effects. We used Gaussian Processes Regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction and mixed-pixel problems. We estimate 12 parameters for each time series.
These are the red and near-infrared (NIR) reflectance and the timing at four stages: before and after flowering, at peak flowering, and at the maximum NIR level. We used a Support Vector Machine for data classification. The most relevant feature for classification is the flowering peak timing, followed by the flowering peak magnitude. The dependency of the peak time on latitude, as a sole feature, can be used to reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach, we performed a stratified area frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6%, while the commission error is higher, at the level of 22%. This is explained by the high viewing-angle compositing criterion used in our approach to mitigate the high cloud coverage problem. However, the errors are quite stable spatially and can easily be corrected by a regression technique. To do this, we performed area estimation for Odessa region using a regression estimator and obtained good area estimation accuracy, with a 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
NASA Astrophysics Data System (ADS)
Prochazka, D.; Mazura, M.; Samek, O.; Rebrošová, K.; Pořízka, P.; Klus, J.; Prochazková, P.; Novotný, J.; Novotný, K.; Kaiser, J.
2018-01-01
In this work, we investigate the impact of data provided by complementary laser-based spectroscopic methods on multivariate classification accuracy. Discrimination and classification of five Staphylococcus bacterial strains and one strain of Escherichia coli is presented. The technique used for the measurements is a combination of Raman spectroscopy and Laser-Induced Breakdown Spectroscopy (LIBS). The obtained spectroscopic data were then processed using multivariate data analysis algorithms. Principal Component Analysis (PCA) was selected as the most suitable technique for visualization of the bacterial strain data. To classify the bacterial strains, we used neural networks, namely a supervised version of Kohonen's self-organizing maps (SOM). We processed the results in three different ways: from the LIBS measurements alone, from the Raman measurements alone, and from the merged data of both methods. The three types of results were then compared. By applying PCA to the Raman spectroscopy data, we observed that two bacterial strains were fully distinguished from the rest of the data set. In the case of the LIBS data, three bacterial strains were fully discriminated. Using a combination of data from both methods, we achieved complete discrimination of all bacterial strains. All the data were classified with a high success rate using the SOM algorithm. The most accurate classification was obtained using a combination of data from both techniques. The classification accuracy varied depending on the specific samples and techniques: for LIBS it ranged from 45% to 100%, for Raman spectroscopy from 50% to 100%, and with the merged data all samples were classified correctly. Based on the results of the experiments presented in this work, we can assume that the combination of Raman spectroscopy and LIBS significantly enhances the discrimination and classification accuracy of bacterial species and strains.
The reason is the complementarity of the chemical information obtained by these two methods.
Single-Frame Terrain Mapping Software for Robotic Vehicles
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.
2011-01-01
This software is a component in an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system), and optionally a terrain classification image and an object classification image, both registered to the range image. The single-frame terrain map generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell. That way, single-frame terrain maps correctly line up with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store a floating-point elevation for each map cell, the vehicle elevation is assigned to the map origin elevation and each cell reports the change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm. At that resolution, terrain elevation from -20.5 to 20.5 m (with respect to the vehicle's elevation) is encoded into 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value.
The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
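The four-byte cell encoding can be sketched as a bit-packing scheme: elevation is stored as a signed count of 2 cm steps in 11 bits, leaving the remaining 21 bits for the other fields. The abstract does not give the field widths beyond elevation, so the 3/4/8/4/2-bit split below is an illustrative assumption that merely fills the 32-bit budget.

```python
# Packing one terrain-map cell into 32 bits (field widths beyond the
# 11-bit elevation are assumed for illustration).
ELEV_BITS = 11
ELEV_RES_M = 0.02                            # 2 cm resolution
ELEV_MIN_STEPS = -(1 << (ELEV_BITS - 1))     # -1024 steps = -20.48 m
ELEV_MAX_STEPS = (1 << (ELEV_BITS - 1)) - 1  # +1023 steps = +20.46 m

def pack_cell(delta_elev_m, terrain_cls, obj_cls, cost, roughness, conf):
    """Pack one map cell into a 32-bit integer."""
    steps = max(ELEV_MIN_STEPS, min(ELEV_MAX_STEPS,
                round(delta_elev_m / ELEV_RES_M)))
    cell = (steps - ELEV_MIN_STEPS) & 0x7FF   # 11 bits, offset-binary
    cell |= (terrain_cls & 0x7) << 11         # 3 bits (assumed width)
    cell |= (obj_cls & 0xF) << 14             # 4 bits (assumed width)
    cell |= (cost & 0xFF) << 18               # 8 bits (assumed width)
    cell |= (roughness & 0xF) << 26           # 4 bits (assumed width)
    cell |= (conf & 0x3) << 30                # 2 bits (assumed width)
    return cell

def unpack_elev(cell):
    """Recover the elevation change (metres) from a packed cell."""
    return ((cell & 0x7FF) + ELEV_MIN_STEPS) * ELEV_RES_M

packed = pack_cell(1.50, terrain_cls=2, obj_cls=5, cost=100, roughness=3, conf=1)
print(unpack_elev(packed))  # -> 1.5
```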
Kayani, Babar; Konan, Sujith; Pietrzak, Jurek R T; Haddad, Fares S
2018-03-27
The objective of this study was to compare macroscopic bone and soft tissue injury between robotic-arm assisted total knee arthroplasty (RA-TKA) and conventional jig-based total knee arthroplasty (CJ-TKA) and create a validated classification system for reporting iatrogenic bone and periarticular soft tissue injury after TKA. This study included 30 consecutive CJ-TKAs followed by 30 consecutive RA-TKAs performed by a single surgeon. Intraoperative photographs of the femur, tibia, and periarticular soft tissues were taken before implantation of prostheses. Using these outcomes, the macroscopic soft tissue injury (MASTI) classification system was developed to grade iatrogenic bone and soft tissue injuries. Interobserver and Intraobserver validity of the proposed classification system was assessed. Patients undergoing RA-TKA had reduced medial soft tissue injury in both passively correctible (P < .05) and noncorrectible varus deformities (P < .05); more pristine femoral (P < .05) and tibial (P < .05) bone resection cuts; and improved MASTI scores compared to CJ-TKA (P < .05). There was high interobserver (intraclass correlation coefficient 0.92 [95% confidence interval: 0.88-0.96], P < .05) and intraobserver agreement (intraclass correlation coefficient 0.94 [95% confidence interval: 0.92-0.97], P < .05) of the proposed MASTI classification system. There is reduced bone and periarticular soft tissue injury in patients undergoing RA-TKA compared to CJ-TKA. The proposed MASTI classification system is a reproducible grading scheme for describing iatrogenic bone and soft tissue injury in TKA. RA-TKA is associated with reduced bone and soft tissue injury compared with conventional jig-based TKA. The proposed MASTI classification may facilitate further research correlating macroscopic soft tissue injury during TKA to long-term clinical and functional outcomes. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Lee, J.
2017-12-01
With the opening of the Landsat archive, there is a dramatically increased potential for creating high-quality time series of land use/land-cover (LULC) classifications derived from remote sensing. Although LULC time series are appealing, their creation is typically challenging in two fundamental ways. First, there is a need to create maximally correct LULC maps for consideration at each time step; and second, there is a need to have the elements of the time series be consistent with each other, without pixels that flip improbably between covers due only to unavoidable, stray classification errors. We have developed the Bayesian Updating of Land Cover - Unsupervised (BULC-U) algorithm to address these challenges simultaneously, and introduce and apply it here for two related but distinct purposes. First, with minimal human intervention, we produced an internally consistent, high-accuracy LULC time series in rapidly changing Mato Grosso, Brazil for a time interval (1986-2000) in which cropland area more than doubled. The spatial and temporal resolution of the 59 LULC snapshots allows users to witness the establishment of towns and farms at the expense of forest. The new time series could be used by policy-makers and analysts to unravel important considerations for conservation and management, including the timing and location of past development, the rate and nature of changes in forest connectivity, the connection with road infrastructure, and more. The second application of BULC-U is to sharpen the well-known GlobCover 2009 classification from 300m to 30m, while improving accuracy measures for every class. The greatly improved resolution and accuracy permits a better representation of the true LULC proportions, the use of this map in models, and quantification of the potential impacts of changes. 
Given that there may easily be thousands and potentially millions of images available to harvest for an LULC time series, it is imperative to build useful algorithms requiring minimal human intervention. Through image segmentation and classification, BULC-U allows us to use both the spectral and spatial characteristics of imagery to sharpen classifications and create time series. It is hoped that this study may allow us and other users of this new method to consider time series across ever larger areas.
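The Bayesian-updating idea behind BULC can be sketched in a few lines: each new (noisy) classification event updates a per-pixel class-probability vector via Bayes' rule, given an assumed accuracy for the incoming classifier. The two-class setup and the 0.8 accuracy value are illustrative, not values from the study.

```python
# One-pixel Bayesian updating of class probabilities (BULC-style sketch).
import numpy as np

def bulc_update(prior, observed_class, accuracy, n_classes):
    """One Bayesian update of a pixel's class-probability vector."""
    # Likelihood of the observed label under each true class: the
    # classifier is right with prob. `accuracy`, else errs uniformly.
    like = np.full(n_classes, (1.0 - accuracy) / (n_classes - 1))
    like[observed_class] = accuracy
    post = prior * like
    return post / post.sum()

p = np.full(2, 0.5)               # flat prior: forest (0) vs cropland (1)
for obs in [1, 1, 0, 1, 1]:       # noisy per-date labels for one pixel
    p = bulc_update(p, obs, accuracy=0.8, n_classes=2)
print(p.argmax(), round(p[1], 3))
```

A single contradictory label (the `0` above) only dents the posterior, which is what suppresses the improbable cover "flips" mentioned in the abstract.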
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
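The "infinite clipping" idea can be illustrated directly: each optimal Karhunen-Loeve (principal-component) basis vector is replaced by its sign pattern (+1/-1), which is cheap to apply on board while approximating the optimal projection. The synthetic data and dimensions below are illustrative, not HIRIS values.

```python
# Infinite clipping of Karhunen-Loeve eigenvectors (synthetic sketch).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (200, 40))
X[:, :5] += np.outer(rng.normal(0, 3, 200), np.ones(5))  # correlated bands

Xc = X - X.mean(axis=0)
# Karhunen-Loeve transform = eigenvectors of the covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
top = evecs[:, np.argsort(evals)[::-1][:4]]    # 4 leading eigenvectors

clipped = np.sign(top)                         # infinite clipping: keep signs only
feats_optimal = Xc @ top
feats_clipped = Xc @ clipped

# The clipped feature should remain strongly correlated with the optimal one.
r = abs(np.corrcoef(feats_optimal[:, 0], feats_clipped[:, 0])[0, 1])
print(round(r, 2))
```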
A robust automatic phase correction method for signal dense spectra
NASA Astrophysics Data System (ADS)
Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang
2013-09-01
A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining 'coarse tuning' with 'fine tuning' is introduced to correct various spectra accurately. In the 'coarse tuning' procedure, a new robust baseline recognition method is proposed for determining the positions of the tail ends of the peaks, and the preliminary phased spectra are then obtained by minimizing an objective function based on the height difference of these tail ends. After the 'coarse tuning', the peaks in the preliminarily corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification result, a new custom negative penalty function used in the 'fine tuning' step is constructed, in which negative points belonging to the negative and distorted peaks are excluded from the penalty. Finally, the finely phased spectra are obtained by minimizing this custom negative penalty function. The method proves very robust: it is tolerant of low signal-to-noise ratios and large baseline distortion, and is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with overcrowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
Hyperspectral analysis of seagrass in Redfish Bay, Texas
NASA Astrophysics Data System (ADS)
Wood, John S.
Remote sensing using multi- and hyperspectral imaging and analysis has been used in resource management for quite some time, and for a variety of purposes. In the studies that follow, hyperspectral imagery of Redfish Bay is used to discriminate between species of seagrasses found below the water surface. Water attenuates and reflects light across the electromagnetic spectrum, and as a result, subsurface analysis can be more complex than that performed in the terrestrial world. In the following studies, an iterative process is developed using ENVI image processing software and ArcGIS software. Band selection was based on recommendations developed empirically in conjunction with ongoing research into depth corrections, which were applied to the imagery bands (a default depth of 65 cm was used). Polygons generated, classified, and aggregated within ENVI are reclassified in ArcGIS using field site data that was randomly selected for that purpose. After the first iteration, polygons that remain classified as 'Mixed' are subjected to another iteration of classification in ENVI, then brought into ArcGIS and reclassified. Finally, when that classification scheme is exhausted, a supervised classification is performed using a 'Maximum Likelihood' technique, which assigns the remaining polygons, by digital number value, to the classification most like the training polygons. Producer's Accuracy by classification ranged from 23.33% for the 'MixedMono' class to 66.67% for the 'Bare' class; User's Accuracy ranged from 22.58% for the 'MixedMono' class to 69.57% for the 'Bare' classification. An overall accuracy of 37.93% was achieved. Producer's and User's Accuracies for Halodule were 29% and 39%, respectively; for Thalassia, they were 46% and 40%. Cohen's Kappa Coefficient was calculated at 0.2988.
We then returned to the field and collected spectral signatures of monotypic stands of seagrass at varying depths and at three sensor levels: above the water surface, just below the air/water interface, and at the canopy position, when it differed from the subsurface position. Analysis of plots of these spectral curves, after applying depth corrections and Multiplicative Scatter Correction, indicates that there are detectable spectral differences between Halodule and Thalassia species at all three positions. Further analysis indicated that only above-surface spectral signals could reliably be used to discriminate between species, because there was an overlap of the standard deviations in the other two positions. A recommendation for wavelengths that would produce increased accuracy in hyperspectral image analysis was made, based on areas where there is a significant amount of difference between the mean spectral signatures and no overlap of the standard deviations in our samples. The original hyperspectral imagery was reprocessed using the bands recommended from the research above (approximately 535, 600, 620, 638, and 656 nm). A depth raster was developed from various available sources, which was resampled and reclassified to reflect values for water absorption and water scattering; these were then applied to each band using the depth correction algorithm. Processing followed the iterative classification methods described above. Accuracy for this round of processing improved; overall accuracy increased from 38% to 57%. Improvements were noted in Producer's Accuracy, with the 'Bare' classification increasing from 67% to 73%, Halodule increasing from 29% to 63%, Thalassia increasing slightly, from 46% to 50%, and 'MixedMono' improving from 23% to 42%. User's Accuracy also improved, with the 'Bare' class increasing from 69% to 70%, Halodule increasing from 39% to 67%, Thalassia changing from 40% to 7%, and 'MixedMono' increasing from 22.5% to 35%.
A very recent report shows the mean percent cover of seagrasses in Redfish Bay and Corpus Christi Bay combined for all species at 68.6%, and individually by species: Halodule 39.8%, Thalassia 23.7%, Syringodium 4%, Ruppia 1% and Halophila 0.1%. Our study classifies 15% as 'Bare', 23% Halodule, 18% Thalassia, and 2% Ruppia. In addition, we classify 5% as 'Mixed', 22% as 'MixedMono', 12% as 'Bare/Halodule Mix', and 3% 'Bare/Thalassia Mix'. Aggregating the 'Bare' and 'Bare/species' classes would equate to approximately 30%, very close to what this new study produces. Other classes are quite similar, when considering that their study includes no 'Mixed' classifications. This series of research studies illustrates the application and utility of hyperspectral imagery and associated processing to mapping shallow benthic habitats. It also demonstrates that the technology is rapidly changing and adapting, which will lead to even further increases in accuracy. Future studies with hyperspectral imaging should include extensive spectral field collection, and the application of a depth correction.
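The per-band depth correction can be sketched in Beer-Lambert form: observed subsurface reflectance is boosted by the two-way water-column attenuation exp(2·k·z). The recommended bands and the 65 cm default depth follow the text, but the attenuation coefficients per band are assumed values for illustration only.

```python
# Two-way water-column depth correction per band (illustrative sketch).
import numpy as np

def depth_correct(r_obs, k_band, depth_m):
    """Approximate bottom reflectance from observed subsurface reflectance."""
    return r_obs * np.exp(2.0 * k_band * depth_m)

# Recommended bands from the text; attenuation coefficients (1/m) assumed.
bands_nm = np.array([535, 600, 620, 638, 656])
k = np.array([0.08, 0.25, 0.30, 0.35, 0.40])

r_bottom_true = np.array([0.12, 0.10, 0.09, 0.08, 0.07])
depth = 0.65                                   # default depth of 65 cm
r_observed = r_bottom_true * np.exp(-2.0 * k * depth)  # simulate attenuation

r_recovered = depth_correct(r_observed, k, depth)
print(np.round(r_recovered, 3))
```

With a depth raster, `depth` becomes a per-pixel array and the same expression broadcasts over the image.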
Mayes, Susan Dickerson; Calhoun, Susan L; Murray, Michael J; Morrow, Jill D; Yurich, Kirsten K L; Cothren, Shiyoko; Purichia, Heather; Bouder, James N
2011-02-01
Little is known about the validity of Gilliam Asperger's Disorder Scale (GADS), although it is widely used. This study of 199 children with high functioning autism or Asperger's disorder, 195 with low functioning autism, and 83 with attention deficit hyperactivity disorder (ADHD) showed high classification accuracy (autism vs. ADHD) for clinicians' GADS Quotients (92%), and somewhat lower accuracy (77%) for parents' Quotients. Both children with high and low functioning autism had clinicians' Quotients (M=99 and 101, respectively) similar to the Asperger's Disorder mean of 100 for the GADS normative sample. Children with high functioning autism scored significantly higher on the cognitive patterns subscale than children with low functioning autism, and the latter had higher scores on the remaining subscales: social interaction, restricted patterns of behavior, and pragmatic skills. Using the clinicians' Quotient and Cognitive Patterns score, 70% of children were correctly identified as having high or low functioning autism or ADHD.
Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J
2009-01-01
Pressure ulcers are common problems for bedridden patients. Caregivers need to reposition the sleeping posture of a patient every two hours in order to reduce the risk of ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA), and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress, which can help caregivers make correct sleeping posture changes for the prevention of pressure ulcers.
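The pipeline named above can be sketched end to end: per-frame skewness and kurtosis of the pressure map are combined with PCA-reduced raw values and fed to an SVM. The synthetic 64-cell pressure "maps" and the two postures are illustrative stand-ins for real mattress data.

```python
# Skewness/kurtosis + PCA + SVM posture classification (synthetic sketch).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, n_cells = 40, 64

def simulate(posture):
    """Fake 64-cell pressure map: one posture spreads load, one concentrates it."""
    return rng.gamma(shape=2.0 if posture == 0 else 0.7, scale=1.0, size=n_cells)

X_raw = np.array([simulate(p) for p in [0] * n_per_class + [1] * n_per_class])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Shape statistics of each pressure distribution plus PCA of the raw map.
stats = np.column_stack([skew(X_raw, axis=1), kurtosis(X_raw, axis=1)])
pcs = PCA(n_components=5).fit_transform(X_raw)
X = np.hstack([stats, pcs])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(round(acc, 2))
```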
Deployment and Performance of the NASA D3R During the GPM OLYMPEx Field Campaign
NASA Technical Reports Server (NTRS)
Chandrasekar, V.; Beauchamp, Robert M.; Chen, Haonan; Vega, Manuel; Schwaller, Mathew; Willie, Delbert; Dabrowski, Aaron; Kumar, Mohit; Petersen, Walter; Wolff, David
2016-01-01
The NASA D3R was successfully deployed and operated throughout the NASA OLYMPEx field campaign. A differential phase based attenuation correction technique has been implemented for D3R observations. Hydrometeor classification has been demonstrated for five distinct classes using Ku-band observations of both convection and stratiform rain. The stratiform rain hydrometeor classification is compared against LDR observations and shows good agreement in identification of mixed-phase hydrometeors in the melting layer.
NASA Astrophysics Data System (ADS)
Pérez Rosas, Osvaldo G.; Rivera Martínez, José L.; Maldonado Cano, Luis A.; López Rodríguez, Mario; Amaya Reyes, Laura M.; Cano Martínez, Elizabeth; García Vázquez, Mireya S.; Ramírez Acosta, Alejandro A.
2017-09-01
The automatic identification and classification of musical genres, based on sound similarities that form musical textures, is a very active research area. In this context, recognition systems for musical genres have been created, consisting of time-frequency feature extraction methods and classification methods. The selection of these methods is important for good performance of the recognition system. In this article we propose the Mel-Frequency Cepstral Coefficients (MFCC) method as the feature extractor and Support Vector Machines (SVM) as the classifier for our system. The MFCC parameters established in the system through our time-frequency analysis represent the range of Mexican musical genres considered in this article. For a genre classification system to be precise, the descriptors must represent the correct spectrum of each genre; to achieve this, a correct parametrization of the MFCC, like the one presented here, is required. With the developed system we obtained satisfactory detection results: the lowest genre identification percentage was 66.67% and the highest was 100%.
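The MFCC extraction step can be sketched compactly: frame the signal, take the magnitude spectrum, pool it with a triangular mel filterbank, and apply a DCT to the log energies. The frame length, filterbank size, and synthetic tone below are illustrative choices, not the paper's exact parametrization.

```python
# Minimal MFCC feature extractor (illustrative parameters).
import numpy as np
from scipy.fft import dct

def mel(f):        # Hz -> mel
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):    # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=20, n_coeffs=13):
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    mag = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    # Triangular mel filterbank between 0 Hz and Nyquist.
    pts = mel_inv(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, mag.shape[1]))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logmel = np.log(mag @ fb.T + 1e-10)
    return dct(logmel, type=2, axis=1, norm="ortho")[:, :n_coeffs]

sr = 8000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sr)   # one second of a 440 Hz tone
print(feats.shape)
```

The resulting per-frame coefficient matrix is what would be summarized (e.g., by mean and variance) and passed to the SVM classifier.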
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
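The ECOC framing can be demonstrated with scikit-learn's `OutputCodeClassifier`, which assembles a multi-class decision from many binary classifiers via a randomly generated codebook. This stands in for the paper's weighted and probabilistic ECOC variants (an illustrative substitution), on the standard Iris dataset rather than gene-expression data.

```python
# Error-correcting output coding for multi-class classification.
from sklearn.datasets import load_iris
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# code_size=2.0 -> 2 * n_classes binary "bits" per class codeword; at
# prediction time the class with the nearest codeword wins, so a few
# binary errors can be "corrected" by the redundancy of the code.
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0)
acc = cross_val_score(ecoc, X, y, cv=5).mean()
print(round(acc, 2))
```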
McLeod, Adam; Bochniewicz, Elaine M; Lum, Peter S; Holley, Rahsaan J; Emmer, Geoff; Dromerick, Alexander W
2016-02-01
To improve measurement of upper extremity (UE) use in the community by evaluating the feasibility of using body-worn sensor data and machine learning models to distinguish productive prehensile and bimanual UE activity use from extraneous movements associated with walking. Comparison of machine learning classification models with criterion standard of manually scored videos of performance in UE prosthesis users. Rehabilitation hospital training apartment. Convenience sample of UE prosthesis users (n=5) and controls (n=13) similar in age and hand dominance (N=18). Participants were filmed executing a series of functional activities; a trained observer annotated each frame to indicate either UE movement directed at functional activity or walking. Synchronized data from an inertial sensor attached to the dominant wrist were similarly classified as indicating either a functional use or walking. These data were used to train 3 classification models to predict the functional versus walking state given the associated sensor information. Models were trained over 4 trials: on UE amputees and controls and both within subject and across subject. Model performance was also examined with and without preprocessing (centering) in the across-subject trials. Percent correct classification. With the exception of the amputee/across-subject trial, at least 1 model classified >95% of test data correctly for all trial types. The top performer in the amputee/across-subject trial classified 85% of test examples correctly. We have demonstrated that computationally lightweight classification models can use inertial data collected from wrist-worn sensors to reliably distinguish prosthetic UE movements during functional use from walking-associated movement. This approach has promise in objectively measuring real-world UE use of prosthetic limbs and may be helpful in clinical trials and in measuring response to treatment of other UE pathologies. 
Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Meltzer, H Y; Matsubara, S; Lee, J C
1989-10-01
The pKi values of 13 reference typical and 7 reference atypical antipsychotic drugs (APDs) for rat striatal dopamine D-1 and D-2 receptor binding sites and cortical serotonin (5-HT2) receptor binding sites were determined. The atypical antipsychotics had significantly lower pKi values for the D-2 but not 5-HT2 binding sites. There was a trend for a lower pKi value for the D-1 binding site for the atypical APD. The 5-HT2 and D-1 pKi values were correlated for the typical APD whereas the 5-HT2 and D-2 pKi values were correlated for the atypical APD. A stepwise discriminant function analysis to determine the independent contribution of each pKi value for a given binding site to the classification as a typical or atypical APD entered the D-2 pKi value first, followed by the 5-HT2 pKi value. The D-1 pKi value was not entered. A discriminant function analysis correctly classified 19 of 20 of these compounds plus 14 of 17 additional test compounds as typical or atypical APD for an overall correct classification rate of 89.2%. The major contributors to the discriminant function were the D-2 and 5-HT2 pKi values. A cluster analysis based only on the 5-HT2/D2 ratio grouped 15 of 17 atypical + one typical APD in one cluster and 19 of 20 typical + two atypical APDs in a second cluster, for an overall correct classification rate of 91.9%. When the stepwise discriminant function was repeated for all 37 compounds, only the D-2 and 5-HT2 pKi values were entered into the discriminant function.(ABSTRACT TRUNCATED AT 250 WORDS)
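The discriminant-function step can be sketched with linear discriminant analysis on D-2 and 5-HT2 pKi values. The values below are synthetic illustrations built only from the paper's qualitative finding (atypicals have lower D-2 pKi and similar 5-HT2 pKi), not the actual measured affinities.

```python
# LDA separation of typical vs. atypical APDs from two pKi values
# (synthetic data consistent with the reported trend).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n = 20
typical = np.column_stack([rng.normal(8.0, 0.4, n),    # D-2 pKi (higher)
                           rng.normal(7.0, 0.6, n)])   # 5-HT2 pKi
atypical = np.column_stack([rng.normal(6.5, 0.4, n),   # D-2 pKi (lower)
                            rng.normal(7.2, 0.6, n)])  # 5-HT2 pKi

X = np.vstack([typical, atypical])
y = np.array([0] * n + [1] * n)

lda = LinearDiscriminantAnalysis().fit(X, y)
rate = lda.score(X, y)            # fraction correctly classified
print(round(rate, 2))
```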
Yang, Jianji J; Cohen, Aaron M; McDonagh, Marian S
2008-11-06
Automatic document classification can be valuable in increasing the efficiency in updating systematic reviews (SR). In order for the machine learning process to work well, it is critical to create and maintain high-quality training datasets consisting of expert SR inclusion/exclusion decisions. This task can be laborious, especially when the number of topics is large and source data format is inconsistent.To approach this problem, we build an automated system to streamline the required steps, from initial notification of update in source annotation files to loading the data warehouse, along with a web interface to monitor the status of each topic. In our current collection of 26 SR topics, we were able to standardize almost all of the relevance judgments and recovered PMIDs for over 80% of all articles. Of those PMIDs, over 99% were correct in a manual random sample study. Our system performs an essential function in creating training and evaluation data sets for SR text mining research.
Multimethod latent class analysis
Nussbeck, Fridtjof W.; Eid, Michael
2015-01-01
Correct and, hence, valid classifications of individuals are of high importance in the social sciences as these classifications are the basis for diagnoses and/or the assignment to a treatment. The via regia to inspect the validity of psychological ratings is the multitrait-multimethod (MTMM) approach. First, a latent variable model for the analysis of rater agreement (latent rater agreement model) will be presented that allows for the analysis of convergent validity between different measurement approaches (e.g., raters). Models of rater agreement are transferred to the level of latent variables. Second, the latent rater agreement model will be extended to a more informative MTMM latent class model. This model allows for estimating (i) the convergence of ratings, (ii) method biases in terms of differential latent distributions of raters and differential associations of categorizations within raters (specific rater bias), and (iii) the distinguishability of categories indicating if categories are satisfyingly distinct from each other. Finally, an empirical application is presented to exemplify the interpretation of the MTMM latent class model. PMID:26441714
Villarreal, Diana; Laffargue, Andreina; Posada, Huver; Bertrand, Benoit; Lashermes, Philippe; Dussert, Stephane
2009-12-09
In a previous study, the effectiveness of chlorogenic acids, fatty acids (FA), and elements was compared for the discrimination of Arabica varieties and growing terroirs. Since FA provided the best results, the aim of the present work was to validate their discrimination ability using an extended experimental design, including twice the number of location x variety combinations and 2 years of study. It also aimed at understanding how the environment influences FA composition through correlation analysis using different climatic parameters. Percentages of correct classification of known samples remained very high, independent of the classification criterion. However, cross-validation tests across years indicated that prediction of unknown locations was less efficient than that of unknown genotypes. Environmental temperature during the development of coffee beans had a dramatic influence on their FA composition. Analysis of climate patterns over years enabled us to understand the efficient location discrimination within a single year but only moderate efficiency across years.
QRS slopes for assessment of myocardial damage in chronic chagasic patients
NASA Astrophysics Data System (ADS)
Pueyo, E.; Laciar, E.; Anzuola, E.; Laguna, P.; Jané, R.
2007-11-01
In this study the slopes of the QRS complex are evaluated for determining the degree of myocardial damage in chronic chagasic patients. Previous studies have demonstrated the ability of the slope indices to reflect alterations in the conduction velocity of the cardiac impulse. Results obtained in the present study show that chronic chagasic patients have significantly flatter QRS slopes than healthy subjects. Moreover, the extent of slope flattening turns out to be proportional to the degree of myocardial damage caused by the disease. Additionally, when the slope indices are incorporated into a classification analysis together with other indices indicative of the presence of ventricular late potentials obtained from high-resolution electrocardiography, the percentage of correct classification increases up to 62.5%, eight points above the percentage obtained before incorporating the slope indices. It can be concluded that QRS slopes have great potential for assessing the degree of severity associated with Chagas' disease.
Kelly, J F Daniel; Downey, Gerard
2005-05-04
Fourier transform infrared spectroscopy and attenuated total reflection sampling have been used to detect adulteration of single strength apple juice samples. The sample set comprised 224 authentic apple juices and 480 adulterated samples. Adulterants used included partially inverted cane syrup (PICS), beet sucrose (BS), high fructose corn syrup (HFCS), and a synthetic solution of fructose, glucose, and sucrose (FGS). Adulteration was carried out on individual apple juice samples at levels of 10, 20, 30, and 40% w/w. Spectral data were compressed by principal component analysis and analyzed using k-nearest neighbors and partial least squares regression techniques. Prediction results for the best classification models achieved an overall (authentic plus adulterated) correct classification rate of 96.5, 93.9, 92.2, and 82.4% for PICS, BS, HFCS, and FGS adulterants, respectively. This method shows promise as a rapid screening technique for the detection of a broad range of potential adulterants in apple juice.
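The processing chain described here, spectral compression by principal component analysis followed by k-nearest-neighbour classification, can be sketched as follows; the synthetic data and parameter choices are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the mean-centred data matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, components):
    return (X - mu) @ components.T

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority vote among the k nearest training spectra in score space."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        votes = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(votes).argmax())
    return np.array(preds)
```

In the paper's setting, rows of `X` would be ATR-FTIR spectra and the labels would distinguish authentic from adulterated juices.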
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paudel, M. R.; Mackenzie, M.; Rathee, S.
2013-08-15
Purpose: To evaluate the metal artifacts in kilovoltage computed tomography (kVCT) images that are corrected using a normalized metal artifact reduction (NMAR) method with megavoltage CT (MVCT) prior images. Methods: Tissue characterization phantoms containing bilateral steel inserts are used in all experiments. Two MVCT images, one without any metal artifact corrections and the other corrected using a modified iterative maximum likelihood polychromatic algorithm for CT (IMPACT), are translated to pseudo-kVCT images. These are then used as prior images without tissue classification in an NMAR technique for correcting the experimental kVCT image. The IMPACT method in MVCT included an additional model for the pair/triplet production process and the energy dependent response of the MVCT detectors. An experimental kVCT image, without the metal inserts and reconstructed using the filtered back projection (FBP) method, is artificially patched with the known steel inserts to get a reference image. The regular NMAR image containing the steel inserts that uses a tissue-classified kVCT prior and the NMAR images reconstructed using MVCT priors are compared with the reference image for metal artifact reduction. The Eclipse treatment planning system is used to calculate radiotherapy dose distributions on the corrected images and on the reference image using the Anisotropic Analytical Algorithm with 6 MV parallel opposed 5 × 10 cm² fields passing through the bilateral steel inserts, and the results are compared. Gafchromic film is used to measure the actual dose delivered in a plane perpendicular to the beams at the isocenter. Results: The streaking and shading in the NMAR image using tissue classifications are significantly reduced. However, the structures, including metal, are deformed. Some uniform regions appear to have eroded from one side. There is a large variation of attenuation values inside the metal inserts. 
Similar results are seen in the commercially corrected image. Use of MVCT prior images without tissue classification in NMAR significantly reduces these problems. The radiation dose calculated on the reference image is close to the dose measured using the film. Compared to the reference image, the calculated dose difference in the conventional NMAR image, the corrected images using the uncorrected MVCT image, and the IMPACT-corrected MVCT image as priors is ∼15.5%, ∼5%, and ∼2.7%, respectively, at the isocenter. Conclusions: The deformation and erosion of the structures present in regular NMAR-corrected images can be largely reduced by using MVCT priors without tissue segmentation. Because the attenuation value of the metal is incorrect, large dose differences relative to the true value can result when using the conventional NMAR image. This difference can be significantly reduced if MVCT images are used as priors. Reduced tissue deformation, better tissue visualization, and correct information about the electron density of the tissues and metals in the artifact-corrected images could help delineate the structures better, as well as calculate radiation dose more correctly, thus enhancing the quality of the radiotherapy treatment planning.
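The NMAR scheme discussed above rests on one core step: divide the measured sinogram by the forward projection of a prior image, interpolate across the metal trace in this flattened domain, then multiply the prior back in. A minimal numpy sketch, assuming precomputed sinograms and a known metal trace (no Radon transform, beam model, or tissue classification is modelled):

```python
import numpy as np

def nmar_correct(sino, prior_sino, metal_mask, eps=1e-6):
    """Normalized metal artifact reduction, core step only.
    sino, prior_sino : (angles, bins) arrays; metal_mask flags detector
    bins shadowed by metal at each angle.  Normalizing by the prior
    flattens the data so linear interpolation across the metal trace
    introduces far less error than interpolating the raw sinogram."""
    norm = sino / (prior_sino + eps)
    out = norm.copy()
    bins = np.arange(sino.shape[1])
    for a in range(sino.shape[0]):
        bad = metal_mask[a]
        if bad.any() and (~bad).any():
            out[a, bad] = np.interp(bins[bad], bins[~bad], norm[a, ~bad])
    return out * (prior_sino + eps)  # de-normalize
```

The paper's contribution sits in the choice of `prior_sino`: an MVCT-derived prior without tissue classification instead of the conventional tissue-classified kVCT prior.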
Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.
2008-01-01
Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © Copyright by the American Fisheries Society 2008.
Analysis of thematic mapper simulator data collected over eastern North Dakota
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1982-01-01
The results of the analysis of aircraft-acquired thematic mapper simulator (TMS) data, collected to investigate the utility of thematic mapper data in crop area and land cover estimates, are discussed. Results of the analysis indicate that the seven-channel TMS data are capable of delineating the 13 crop types included in the study to an overall pixel classification accuracy of 80.97% correct, with relative efficiencies for four crop types examined between 1.62 and 26.61. Both supervised and unsupervised spectral signature development techniques were evaluated. The unsupervised methods proved to be inferior (based on analysis of variance) for the majority of crop types considered. Given the ground truth data set used for spectral signature development as well as evaluation of performance, it is possible to demonstrate which signature development technique would produce the highest percent correct classification for each crop type.
Quantifying color variation: Improved formulas for calculating hue with segment classification.
Smith, Stacey D
2014-03-01
Differences in color form a major component of biological variation, and quantifying these differences is the first step to understanding their evolutionary and ecological importance. One common method for measuring color variation is segment classification, which uses three variables (chroma, hue, and brightness) to describe the height and shape of reflectance curves. This study provides new formulas for calculating hue (the variable that describes the "type" of color) to give correct values in all regions of color space. • Reflectance spectra were obtained from the literature, and chroma, hue, and brightness were computed for each spectrum using the original formulas as well as the new formulas. Only the new formulas result in correct values in the blue-green portion of color space. • Use of the new formulas for calculating hue will result in more accurate color quantification for a broad range of biological applications.
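A sketch of segment classification with a quadrant-safe hue follows. The segment ordering and the (LM, MS) convention are assumptions for illustration; the essential fix the abstract reports is computing hue with a two-argument arctangent so that values in the blue-green region of color space no longer land in the wrong quadrant:

```python
import numpy as np

def segment_metrics(reflectance):
    """Segment classification (Endler-style sketch).  Assumes reflectance
    sampled on an evenly spaced grid spanning the visible range, split
    into four equal segments Q1..Q4.  Brightness is total reflectance,
    chroma the length of the (LM, MS) vector, hue its angle.  The
    segment labels below are an illustrative convention."""
    Q = np.array([seg.sum() for seg in np.array_split(reflectance, 4)])
    B = Q.sum()
    LM = (Q[3] - Q[1]) / B          # "red minus green" component
    MS = (Q[2] - Q[1 - 1]) / B      # "yellow minus blue" component
    chroma = np.hypot(LM, MS)
    hue = np.arctan2(MS, LM)        # two-argument form: correct in all quadrants
    return B, chroma, hue
```

A single-argument `arctan(MS / LM)` collapses opposite quadrants onto each other (and fails when LM = 0), which is exactly the kind of error the corrected formulas eliminate.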
Eloqayli, Haytham; Al-Yousef, Ali; Jaradat, Raid
2018-02-15
Despite the high prevalence of chronic neck pain, there is limited consensus about the primary etiology, risk factors, diagnostic criteria and therapeutic outcome. Here, we aimed to determine whether ferritin and vitamin D are modifiable risk factors for chronic neck pain using standard statistics and an artificial neural network (ANN). Fifty-four patients with chronic neck pain treated between February 2016 and August 2016 in King Abdullah University Hospital and 54 age-matched controls undergoing outpatient or minor procedures were enrolled. Demographic parameters, height, weight and a single measurement of serum vitamin D, vitamin B12, ferritin, calcium, phosphorus and zinc were obtained for patients and controls. An ANN prediction model was developed. The statistical analysis reveals that patients with chronic neck pain have significantly lower serum vitamin D and ferritin (p-value <.05). 90% of patients with chronic neck pain were female. A multilayer feed-forward neural network with back propagation (MFFNN) prediction model was developed with vitamin D and ferritin as input variables and chronic neck pain as output. The ANN model results show that 92 of 108 samples were correctly classified, an 85% classification accuracy. Although iron and vitamin D deficiency cannot be isolated as the sole risk factors for chronic neck pain, they should be considered as two modifiable risks. The high prevalence of chronic neck pain, hypovitaminosis D and low ferritin amongst women is of concern. Bioinformatics predictions with artificial neural networks can be of future benefit in classification and prediction models for chronic neck pain. We hope this initial work will encourage a future larger cohort study addressing vitamin D and iron correction as modifiable factors and the application of artificial intelligence models in clinical practice.
Can early hepatic fibrosis stages be discriminated by combining ultrasonic parameters?
Bouzitoune, Razika; Meziri, Mahmoud; Machado, Christiano Bittencourt; Padilla, Frédéric; Pereira, Wagner Coelho de Albuquerque
2016-05-01
In this study, we put forward a new approach to classify early stages of fibrosis based on a multiparametric characterization using backscattered ultrasonic signals. Ultrasonic parameters, such as backscatter coefficient (Bc), speed of sound (SoS), attenuation coefficient (Ac), mean scatterer spacing (MSS), and spectral slope (SS), have shown their potential to differentiate between healthy and pathologic samples in different organs (eye, breast, prostate, liver). Recently, our group looked into the characterization of stages of hepatic fibrosis using the parameters cited above. The results showed that none of them could individually distinguish between the different stages. Therefore, we explored a multiparametric approach by combining these parameters in pairs and triplets, to test their potential to discriminate between the stages of liver fibrosis: F0 (normal), F1, F3, with or without F4 (cirrhosis), according to the METAVIR score. Discriminant analysis showed that the most relevant individual parameter was Bc, followed by SoS, SS, MSS, and Ac. The combination of (Bc, SoS) along with the four stages was the best in differentiating between the stages of fibrosis and correctly classified 85% of the liver samples with a high level of significance (p<0.0001). Nevertheless, when taking into account only stages F0, F1, and F3, the discriminant analysis showed that the parameters (Bc, SoS) and (Bc, Ac) had a better classification (93%) with a high level of significance (p<0.0001). The combination of the three parameters (Bc, SoS, and Ac) led to a 100% correct classification. In conclusion, the current findings show that the multiparametric approach has great potential in differentiating between the stages of fibrosis, and thus could play an important role in the diagnosis and follow-up of hepatic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Gan, Heng-Hui; Soukoulis, Christos; Fisk, Ian
2014-03-01
In the present work, we have evaluated for the first time the feasibility of APCI-MS volatile compound fingerprinting in conjunction with chemometrics (PLS-DA) as a new strategy for rapid and non-destructive food classification. For this purpose, 202 clarified monovarietal juices extracted from apples differing in their botanical and geographical origin were used to evaluate the performance of APCI-MS as a classification tool. For an independent test set, PLS-DA analyses of pre-treated spectral data gave 100% and 94.2% correct classification rates for classification by cultivar and geographical origin, respectively. Moreover, PLS-DA analysis of APCI-MS in conjunction with GC-MS data revealed that masses within the APCI-MS spectral data set were related to parent ions or fragments of alkyl esters, carbonyl compounds (hexanal, trans-2-hexenal) and alcohols (1-hexanol, 1-butanol, cis-3-hexenol) and had significant discriminating power both in terms of cultivar and geographical origin. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Hripcsak, George; Knirsch, Charles; Zhou, Li; Wilcox, Adam; Melton, Genevieve B
2007-03-01
Data mining in electronic medical records may facilitate clinical research, but much of the structured data may be miscoded, incomplete, or non-specific. The exploitation of narrative data using natural language processing may help, although nesting, varying granularity, and repetition remain challenges. In a study of community-acquired pneumonia using electronic records, these issues led to poor classification. Limiting queries to accurate, complete records led to vastly reduced, possibly biased samples. We exploited knowledge latent in the electronic records to improve classification. A similarity metric was used to cluster cases. We defined discordance as the degree to which cases within a cluster give different answers for some query that addresses a classification task of interest. Cases with higher discordance are more likely to be incorrectly classified, and can be reviewed manually to adjust the classification, improve the query, or estimate the likely accuracy of the query. In a study of pneumonia--in which the ICD9-CM coding was found to be very poor--the discordance measure was statistically significantly correlated with classification correctness (.45; 95% CI .15-.62).
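The discordance measure defined above can be instantiated very simply: for a cluster of similar cases and a query, score how far the cluster is from unanimity. A minimal majority-vote sketch (the authors' exact formulation may differ):

```python
from collections import Counter

def discordance(cluster_answers):
    """Discordance of one cluster: fraction of cases whose query answer
    differs from the cluster's majority answer.  High values flag
    clusters whose members are likely misclassified by the query."""
    counts = Counter(cluster_answers)
    majority = counts.most_common(1)[0][1]
    return 1.0 - majority / len(cluster_answers)
```

As the abstract suggests, clusters with high discordance are the ones worth reviewing manually, refining the query against, or using to estimate the query's likely accuracy.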
Blind identification of image manipulation type using mixed statistical moments
NASA Astrophysics Data System (ADS)
Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu
2015-01-01
We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
Adaptive sleep-wake discrimination for wearable devices.
Karlen, Walter; Floreano, Dario
2011-04-01
Sleep/wake classification systems that rely on physiological signals suffer from intersubject differences that make accurate classification with a single, subject-independent model difficult. To overcome the limitations of intersubject variability, we suggest a novel online adaptation technique that updates the sleep/wake classifier in real time. The objective of the present study was to evaluate the performance of a newly developed adaptive classification algorithm that was embedded on a wearable sleep/wake classification system called SleePic. The algorithm processed ECG and respiratory effort signals for the classification task and applied behavioral measurements (obtained from accelerometer and press-button data) for the automatic adaptation task. When trained as a subject-independent classifier algorithm, the SleePic device was only able to correctly classify 74.94 ± 6.76% of the human-rated sleep/wake data. By using the suggested automatic adaptation method, the mean classification accuracy could be significantly improved to 92.98 ± 3.19%. A subject-independent classifier based on activity data only showed a comparable accuracy of 90.44 ± 3.57%. We demonstrated that subject-independent models used for online sleep-wake classification can successfully be adapted to previously unseen subjects without the intervention of human experts or off-line calibration.
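The adaptation idea, start from a subject-independent model and update it online whenever a behavioural label (accelerometer or press-button evidence) arrives, can be illustrated with a plain perceptron. This is a stand-in for, not a reproduction of, the SleePic algorithm:

```python
import numpy as np

class AdaptiveClassifier:
    """Linear sleep/wake classifier that starts from generic weights and
    keeps learning online from behavioural labels -- a much simplified
    sketch of the automatic adaptation scheme described above."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features + 1)  # last entry is the bias
        self.lr = lr

    def predict(self, x):
        """1 = wake, 0 = sleep (threshold on the linear score)."""
        return 1 if np.dot(self.w[:-1], x) + self.w[-1] > 0 else 0

    def update(self, x, label):
        """Perceptron-style correction when a behavioural label arrives."""
        err = label - self.predict(x)
        self.w[:-1] += self.lr * err * np.asarray(x)
        self.w[-1] += self.lr * err
```

The key property mirrored here is that adaptation needs no human expert or off-line calibration: the model simply corrects itself on whatever labelled samples the wearable produces.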
Contribution of non-negative matrix factorization to the classification of remote sensing images
NASA Astrophysics Data System (ADS)
Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.
2008-10-01
Remote sensing has become an unavoidable tool for better managing our environment, generally by producing land-cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is principal component analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using blind source separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using non-negative matrix factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e., non-separated) images. These results show the contribution of NMF as an attractive pre-processing step for classification of multispectral remote sensing imagery.
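NMF itself can be sketched with the classic Lee-Seung multiplicative updates (without the sparse-coding term the authors add): factor the non-negative pixel-by-band matrix V into per-pixel abundances W and endmember spectra H.

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Non-negative matrix factorization V ~ W @ H by Lee-Seung
    multiplicative updates.  For pixel unmixing, rows of V are pixel
    spectra, H holds the pure-material spectra and W the abundances.
    Updates keep both factors non-negative and never increase the
    Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1   # strictly positive init
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                     # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the paper's pipeline the columns of W (reshaped back into image planes) would be the "partly separated" images fed to the supervised classifier.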
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. The features are extracted from backscattered signals, and then principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results demonstrate that the proposed system architecture and statistical methods allow for reliable discrimination of various road surfaces in real conditions.
Stein, Dan J; Phillips, Katharine A
2013-05-17
The revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM) provides a useful opportunity to revisit debates about the nature of psychiatric classification. An important debate concerns the involvement of mental health consumers in revisions of the classification. One perspective argues that psychiatric classification is a scientific process undertaken by scientific experts and that including consumers in the revision process is merely pandering to political correctness. A contrasting perspective is that psychiatric classification is a process driven by a range of different values and that the involvement of patients and patient advocates would enhance this process. Here we draw on our experiences with input from the public during the deliberations of the Obsessive Compulsive-Spectrum Disorders subworkgroup of DSM-5, to help make the argument that psychiatric classification does require reasoned debate on a range of different facts and values, and that it is appropriate for scientist experts to review their nosological recommendations in the light of rigorous consideration of patient experience and feedback.
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). 
Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
Promoting consistent use of the communication function classification system (CFCS).
Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley
2016-01-01
We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario, Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP. The intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. 
The intervention effectively increased clinicians' understanding of the methods for using the CFCS, ability to make correct classifications, and intention to use the tool in the standardized way in the future.
Classification of asteroid spectra using a neural network
NASA Technical Reports Server (NTRS)
Howell, E. S.; Merenyi, E.; Lebofsky, L. A.
1994-01-01
The 52-color asteroid survey (Bell et al., 1988) together with the 8-color asteroid survey (Zellner et al., 1985) provide a data set of asteroid spectra spanning 0.3-2.5 micrometers. An artificial neural network clusters these asteroid spectra based on their similarity to each other. We have also trained the neural network with a categorization learning output layer in a supervised mode to associate the established clusters with taxonomic classes. Results of our classification agree with Tholen's classification based on the 8-color data alone. When extending the spectral range using the 52-color survey data, we find that some modification of the Tholen classes is indicated to produce a cleaner, self-consistent set of taxonomic classes. After supervised training using our modified classes, the network correctly classifies both the training examples, and additional spectra into the correct class with an average of 90% accuracy. Our classification supports the separation of the K class from the S class, as suggested by Bell et al. (1987), based on the near-infrared spectrum. We define two end-member subclasses which seem to have compositional significance within the S class: the So class, which is olivine-rich and red, and the Sp class, which is pyroxene-rich and less red. The remaining S-class asteroids have intermediate compositions of both olivine and pyroxene and moderately red continua. The network clustering suggests some additional structure within the E-, M-, and P-class asteroids, even in the absence of albedo information, which is the only discriminant between these in the Tholen classification. New relationships are seen between the C class and related G, B, and F classes. However, in both cases, the number of spectra is too small to interpret or determine the significance of these separations.
A likelihood ratio model for the determination of the geographical origin of olive oil.
Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz
2015-01-01
Food fraud or food adulteration may be of forensic interest for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages in contrast to typical chemometric methods because the LR model takes into account the information about the sample rarity in a relevant population. Such properties are of particular interest to forensic scientists and therefore it has been the aim of this study to examine the issue of olive oil classification with the use of different LR models and their pertinence under selected data pre-processing methods (logarithm based data transformations) and feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) have been considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as a measure of performance of each model. The application of LR models in determining the geographical origin of olive oil has proven to be satisfactorily useful for the considered issues analysed in terms of many variants of data pre-processing since the rates of correct classifications were close to 100% and considerable reduction of information loss was observed. The work also presents a comparative study of the performance of the linear discriminant analysis in considered classification problems. An approach to the choice of the value of the smoothing parameter is highlighted for the kernel density estimation based LR models as well. Copyright © 2014 Elsevier B.V. All rights reserved.
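A likelihood ratio model of the kind described can be sketched with kernel density estimates: the numerator models the measurement under the proposed origin, the denominator its rarity in the relevant population. A one-dimensional Gaussian-kernel version with illustrative data and bandwidth (the paper's models are multivariate and include pre-processing):

```python
import numpy as np

def gaussian_kde_pdf(x, data, bandwidth):
    """1-D Gaussian kernel density estimate evaluated at x."""
    z = (x - np.asarray(data)) / bandwidth
    return np.mean(np.exp(-0.5 * z ** 2)) / (bandwidth * np.sqrt(2 * np.pi))

def likelihood_ratio(x, same_source, population, bandwidth=0.5):
    """Score-based LR sketch: how much more probable is measurement x
    under the proposed origin than in the relevant population?
    LR > 1 supports the proposed origin; LR < 1 the alternative."""
    num = gaussian_kde_pdf(x, same_source, bandwidth)
    den = gaussian_kde_pdf(x, population, bandwidth)
    return num / den
```

This is where the LR framework differs from typical chemometric classifiers: the denominator explicitly accounts for how rare the measured fatty acid profile is in the background population, which is the forensic property the abstract highlights.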
Semantic Building Façade Segmentation from Airborne Oblique Images
NASA Astrophysics Data System (ADS)
Lin, Y.; Nex, F.; Yang, M. Y.
2018-05-01
With the introduction of airborne oblique camera systems and the improvement of photogrammetric techniques, high-resolution 2D and 3D data can be acquired in urban areas. This high-resolution data allows us to perform detailed investigations on building roofs and façades, which can contribute to LoD3 city modeling. Normally, façade segmentation is achieved from terrestrial views. In this paper, we address the problem from aerial views by using high-resolution oblique aerial images as the data source in urban areas. In addition to traditional image features, such as RGB and SIFT, normal vectors and planarity are also extracted from dense matching point clouds. Then, these 3D geometrical features are projected back to 2D space to assist façade interpretation. A random forest is trained and applied to label façade pixels. A fully connected conditional random field (CRF), capturing long-range spatial interactions, is used as a post-processing step to refine our classification results. Its pairwise potential is defined by a linear combination of Gaussian kernels, and the CRF model is efficiently solved by mean field approximation. Experiments show that 3D features can significantly improve classification results. The fully connected CRF also performs well in correcting noisy pixels.
The history of female genital tract malformation classifications and proposal of an updated system.
Acién, Pedro; Acién, Maribel I
2011-01-01
A correct classification of malformations of the female genital tract is essential to prevent unnecessary and inadequate surgical operations and to compare reproductive results. An ideal classification system should be based on aetiopathogenesis and should suggest the appropriate therapeutic strategy. We conducted a systematic review of relevant articles found in PubMed, Scopus, Scirus and ISI webknowledge, and analysis of historical collections of 'female genital malformations' and 'classifications'. Of 124 full-text articles assessed for eligibility, 64 were included because they contained original general, partial or modified classifications. All the existing classifications were analysed and grouped. The unification of terms and concepts was also analysed. Traditionally, malformations of the female genital tract have been catalogued and classified as Müllerian malformations due to agenesis, lack of fusion, the absence of resorption and lack of posterior development of the Müllerian ducts. The American Fertility Society classification of the late 1980s included seven basic groups of malformations also considering the Müllerian development and the relationship of the malformations to fertility. Other classifications are based on different aspects: functional, defects in vertical fusion, embryological or anatomical (Vagina, Cervix, Uterus, Adnex and Associated Malformation: VCUAM classification). However, an embryological-clinical classification system seems to be the most appropriate. Accepting the need for a new classification system of genitourinary malformations that considers the experience gained from the application of the current classification systems, the aetiopathogenesis and that also suggests the appropriate treatment, we proposed an update of our embryological-clinical classification as a new system with six groups of female genitourinary anomalies.
Ryan, D; Shephard, S; Kelly, F L
2016-09-01
This study investigates temporal stability in the scale microchemistry of brown trout Salmo trutta in feeder streams of a large heterogeneous lake catchment and rates of change after migration into the lake. Laser-ablation inductively coupled plasma mass spectrometry was used to quantify the elemental concentrations of Na, Mg, Mn, Cu, Zn, Ba and Sr in archived (1997-2002) scales of juvenile S. trutta collected from six major feeder streams of Lough Mask, County Mayo, Ireland. Water element:Ca ratios within these streams were determined for the fish sampling period and for a later period (2013-2015). Salmo trutta scale Sr and Ba concentrations were significantly (P < 0·05) correlated with stream water sample Sr:Ca and Ba:Ca ratios respectively from both periods, indicating multi-annual stability in scale and water-elemental signatures. Discriminant analysis of scale chemistries correctly classified 91% of sampled juvenile S. trutta to their stream of origin using a cross-validated classification model. This model was used to test whether assumed post-depositional change in scale element concentrations reduced correct natal stream classification of S. trutta in successive years after migration into Lough Mask. Fish residing in the lake for 1-3 years could be reliably classified to their most likely natal stream, but the probability of correct classification diminished strongly with longer lake residence. Use of scale chemistry to identify natal streams of lake S. trutta should focus on recent migrants, but may not require contemporary water chemistry data. © 2016 The Fisheries Society of the British Isles.
A multitemporal (1979-2009) land-use/land-cover dataset of the binational Santa Cruz Watershed
2011-01-01
Trends derived from multitemporal land-cover data can be used to make informed land management decisions and to help managers model future change scenarios. We developed a multitemporal land-use/land-cover dataset for the binational Santa Cruz watershed of southern Arizona, United States, and northern Sonora, Mexico by creating a series of land-cover maps at decadal intervals (1979, 1989, 1999, and 2009) using Landsat Multispectral Scanner and Thematic Mapper data and a classification and regression tree classifier. The classification model exploited phenological changes of different land-cover spectral signatures through the use of biseasonal imagery collected during the (dry) early summer and (wet) late summer following rains from the North American monsoon. Landsat images were corrected to remove atmospheric influences, and the data were converted from raw digital numbers to surface reflectance values. The 14-class land-cover classification scheme is based on the 2001 National Land Cover Database with a focus on "Developed" land-use classes and riverine "Forest" and "Wetlands" cover classes required for specific watershed models. The classification procedure included the creation of several image-derived and topographic variables, including digital elevation model derivatives, image variance, and multitemporal Kauth-Thomas transformations. The accuracy of the land-cover maps was assessed using a random-stratified sampling design, reference aerial photography, and digital imagery. This showed high accuracy results, with kappa values (the statistical measure of agreement between map and reference data) ranging from 0.80 to 0.85.
Shubham, Divya; Kawthalkar, Anjali S
2018-05-01
To assess the feasibility of the PALM-COEIN system for the classification of abnormal uterine bleeding (AUB) in low-resource settings and to suggest modifications. A prospective study was conducted among women with AUB who were admitted to the gynecology ward of a tertiary care hospital and research center in central India between November 2014 and October 2016. All patients were managed as per department protocols. The causes of AUB were classified before treatment using the PALM-COEIN system (classification I) and on the basis of the histopathology reports of the hysterectomy specimens (classification II); the results were compared using classification II as the gold standard. The study included 200 women with AUB; hysterectomy was performed in 174 women. Preoperative classification of AUB per the PALM-COEIN system was correct in 130 (65.0%) women. Adenomyosis (evaluated by transvaginal ultrasonography) and endometrial hyperplasia (evaluated by endometrial curettage) were underdiagnosed. The PALM-COEIN classification system helps in deciding the best treatment modality for women with AUB on a case-by-case basis. The incorporation of suggested modifications will further strengthen its utility as a pretreatment classification system in low-resource settings. © 2017 International Federation of Gynecology and Obstetrics.
Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland
2018-05-30
Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) the 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI model. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.
Effect of foot shape on the three-dimensional position of foot bones.
Ledoux, William R; Rohr, Eric S; Ching, Randal P; Sangeorzan, Bruce J
2006-12-01
To eliminate some of the ambiguity in describing foot shape, we developed three-dimensional (3D), objective measures of foot type based on computerized tomography (CT) scans. Feet were classified via clinical examination as pes cavus (high arch), neutrally aligned (normal arch), asymptomatic pes planus (flat arch with no pain), or symptomatic pes planus (flat arch with pain). We enrolled 10 subjects of each foot type; if both feet were of the same foot type, then each foot was scanned (n=65 total). Partial weightbearing (20% body weight) CT scans were performed. We generated embedded coordinate systems for each foot bone by assuming uniform density and calculating the inertial matrix. Cardan angles were used to describe five bone-to-bone relationships, resulting in 15 angular measurements. Significant differences were found among foot types for 12 of the angles. The angles were also used to develop a classification tree analysis, which determined the correct foot type for 64 of the 65 feet. Our measure provides insight into how foot bone architecture differs between foot types. The classification tree analysis demonstrated that objective measures can be used to discriminate between feet with high, normal, and low arches. Copyright (c) 2006 Orthopaedic Research Society.
Husak, G.J.; Marshall, M. T.; Michaelsen, J.; Pedreros, Diego; Funk, Christopher C.; Galu, G.
2008-01-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country-level. Bias-corrected, texture based classifications show little deviation from actual crop inventories, when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images, with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated, in order to automate classification of medium resolution imagery for routine CA estimation in the future.
Periodic activation function and a modified learning algorithm for the multivalued neuron.
Aizenberg, Igor
2010-12-01
In this paper, we consider a new periodic activation function for the multivalued neuron (MVN). The MVN is a neuron with complex-valued weights and inputs/output, which are located on the unit circle. Although the MVN outperforms many other neurons and MVN-based neural networks have shown their high potential, the MVN still has a limited capability of learning highly nonlinear functions. A periodic activation function, which is introduced in this paper, makes it possible to learn nonlinearly separable problems and non-threshold multiple-valued functions using a single multivalued neuron. We call this neuron a multivalued neuron with a periodic activation function (MVN-P). The MVN-P's functionality is much higher than that of the regular MVN. The MVN-P is more efficient in solving various classification problems. A learning algorithm based on the error-correction rule for the MVN-P is also presented. It is shown that a single MVN-P can easily learn and solve those benchmark classification problems that were considered unsolvable using a single neuron. It is also shown that a universal binary neuron, which can learn nonlinearly separable Boolean functions, and a regular MVN are particular cases of the MVN-P.
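A minimal numeric sketch of the two activations described in this abstract (function names and the use of parameters k and l are illustrative, not taken from the paper): the regular MVN maps the weighted sum onto one of k unit-circle values, while the periodic MVN-P divides the circle into k·l sectors and wraps the sector index modulo k, so each output value recurs l times around the circle.

```python
import cmath
import math

def mvn_activation(z, k):
    """Discrete MVN activation: map the complex weighted sum z onto the
    k-th root of unity whose sector contains arg(z)."""
    ang = cmath.phase(z) % (2 * math.pi)          # angle in [0, 2*pi)
    j = int(ang / (2 * math.pi / k))              # sector index 0..k-1
    return cmath.exp(1j * 2 * math.pi * j / k)

def mvn_p_activation(z, k, l):
    """Periodic variant (MVN-P sketch): split the circle into k*l sectors
    and wrap the sector index modulo k, making the output l-periodic."""
    ang = cmath.phase(z) % (2 * math.pi)
    j = int(ang / (2 * math.pi / (k * l)))        # sector index 0..k*l-1
    return cmath.exp(1j * 2 * math.pi * (j % k) / k)
```

With k = 2 and l = 2 the circle holds four sectors but only two output values, alternating around the circle; this is the wrap-around that lets one neuron realize nonlinearly separable mappings.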
McEntire, John E.; Kuo, Kenneth C.; Smith, Mark E.; Stalling, David L.; Richens, Jack W.; Zumwalt, Robert W.; Gehrke, Charles W.; Papermaster, Ben W.
1989-01-01
A wide spectrum of modified nucleosides has been quantified by high-performance liquid chromatography in serum of 49 male lung cancer patients, 35 patients with other cancers, and 48 patients hospitalized for nonneoplastic diseases. Data for 29 modified nucleoside peaks were normalized to an internal standard and analyzed by discriminant analysis and stepwise discriminant analysis. A model based on peaks selected by a stepwise discriminant procedure correctly classified 79% of the cancer and 75% of the noncancer subjects. It also demonstrated 84% sensitivity and 79% specificity when comparing lung cancer to noncancer subjects, and 80% sensitivity and 55% specificity in comparing lung cancer to other cancers. The nucleoside peaks having the greatest influence on the models varied depending on the subgroups compared, confirming the importance of quantifying a wide array of nucleosides. These data support and expand previous studies which reported the utility of measuring modified nucleoside levels in serum and show that precise measurement of an array of 29 modified nucleosides in serum by high-performance liquid chromatography with UV scanning with subsequent data modeling may provide a clinically useful approach to patient classification in diagnosis and subsequent therapeutic monitoring.
Tamboer, P.; Vorst, H.C.M.; Ghebreab, S.; Scholte, H.S.
2016-01-01
Meta-analytic studies suggest that dyslexia is characterized by subtle and spatially distributed variations in brain anatomy, although many variations failed to be significant after corrections for multiple comparisons. To circumvent issues of significance which are characteristic of conventional analysis techniques, and to provide predictive value, we applied a machine learning technique – support vector machine – to differentiate between subjects with and without dyslexia. In a sample of 22 students with dyslexia (20 women) and 27 students without dyslexia (25 women) (18–21 years), a classification performance of 80% (p < 0.001; d-prime = 1.67) was achieved on the basis of differences in gray matter (sensitivity 82%, specificity 78%). The voxels that were most reliable for classification were found in the left occipital fusiform gyrus (LOFG), in the right occipital fusiform gyrus (ROFG), and in the left inferior parietal lobule (LIPL). Additionally, we found that classification certainty (i.e. the percentage of times a subject was correctly classified) correlated with severity of dyslexia (r = 0.47). Furthermore, various significant correlations were found between the three anatomical regions and behavioural measures of spelling, phonology and whole-word-reading. No correlations were found with behavioural measures of short-term memory and visual/attentional confusion. These data indicate that the LOFG, ROFG and the LIPL are neuro-endophenotypes and potential biomarkers for types of dyslexia related to reading, spelling and phonology. In a second and independent sample of 876 young adults of a general population, the trained classifier of the first sample was tested, resulting in a classification performance of 59% (p = 0.07; d-prime = 0.65). This decline in classification performance resulted from a large percentage of false alarms. This study provided support for the use of machine learning in anatomical brain imaging. PMID:27114899
Kotsianos, D; Rock, C; Wirth, S; Linsenmaier, U; Brandl, R; Fischer, T; Euler, E; Mutschler, W; Pfeifer, K J; Reiser, M
2002-01-01
To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Human knee specimens (n = 22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190 degrees orbital rotation, computing a 119 mm data cube. From these 3D data sets MP reconstructions were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Müller AO classification. To confirm the results, the specimens were finally surgically dissected. 97% of the tibial condylar fractures were easily seen and correctly classified according to the Müller AO classification on MP reconstruction of the ISO-C-3D. There is no significant difference between the ISO-C and CT in the detection and correct classification of fractures, but the ISO-C-3D is significantly better than CR. The evaluation of fractures with the ISO-C is better than with plain films alone and comparable to CT scans. The three-dimensional reconstruction of the ISO-C can provide important information which cannot be obtained from plain films. The ISO-C-3D may be useful in planning operative reconstructions and evaluating surgical results in orthopaedic surgery of the limbs.
NASA Astrophysics Data System (ADS)
Siregar, V. P.; Agus, S. B.; Subarno, T.; Prabowo, N. W.
2018-05-01
The availability of satellite imagery with a variety of spatial resolutions, both free-access and commercial, provides options for utilizing remote sensing technology. Variability of the water column is one of the factors affecting the interpretation results when mapping shallow marine waters. This study aimed to evaluate the influence of water column correction (depth-invariant index) on the accuracy of shallow water habitat classification results using OBIA. This study was conducted in the north of Kepulauan Seribu, precisely in Harapan Island and its surrounding areas. Habitat class schemes were based on field observations, which were then used to build habitat classes on satellite imagery. The water column correction was applied to the three pairs of SPOT-7 multispectral bands, which were subsequently used in object-based classification. Satellite image classification was performed with four different inputs, namely (i) DII-transformed bands from a single band pair (B1B2), (ii) multiple band pairs (B1B2, B1B3, and B2B3), (iii) a combination of the multiple band pairs and the initial bands, and (iv) only the initial bands. The accuracy test results of the four inputs show Overall Accuracy and Kappa Statistic values of, respectively, 55.84 and 0.48; 68.53 and 0.64; 78.68 and 0.76; 77.66 and 0.74. This shows that the combination of DII-transformed and initial bands gave the best results for shallow-water benthic classification at this study site.
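Depth-invariant indices of this kind are commonly computed following Lyzenga's method; a rough sketch under that assumption (function names are illustrative), estimating the band attenuation-coefficient ratio from log-transformed radiances of a uniform bottom type observed at varying depth, then combining a band pair into a single depth-invariant value:

```python
import math

def attenuation_ratio(log_bi, log_bj):
    """Estimate k_i/k_j from log radiances of the same bottom type at
    varying depths: a = (var_i - var_j) / (2*cov), ratio = a + sqrt(a^2+1)."""
    n = len(log_bi)
    mi, mj = sum(log_bi) / n, sum(log_bj) / n
    var_i = sum((x - mi) ** 2 for x in log_bi) / n
    var_j = sum((x - mj) ** 2 for x in log_bj) / n
    cov = sum((x - mi) * (y - mj) for x, y in zip(log_bi, log_bj)) / n
    a = (var_i - var_j) / (2 * cov)
    return a + math.sqrt(a * a + 1)

def depth_invariant_index(li, lj, k_ratio):
    """DII for one pixel of a band pair (radiances already deep-water
    corrected): ln(L_i) - (k_i/k_j) * ln(L_j)."""
    return math.log(li) - k_ratio * math.log(lj)
```

Each of the three SPOT-7 band pairs would yield one such DII layer, which is what feeds the object-based classification.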
12 CFR 702.103 - Applicability of risk-based net worth requirement.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.103 Applicability of risk-based net worth requirement. For purposes of § 702.102, a credit union is defined as “complex” and a...
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple, yet effective tool for the identification and quantification of various analytes. Colorimetric arrays utilize colorimetric data from many colorimetric sensors, with the multidimensional nature of the resulting data necessitating the use of chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze select acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN could be used to correctly identify the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
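Of the chemometric classifiers compared here, KNN is the simplest to sketch; a toy version on hypothetical sensor-response features (the feature vectors, labels, and function name below are made up for illustration, not data from the study):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour classification by Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]          # majority label among the k

# Hypothetical per-sample colour-change features from an 8-spot array,
# collapsed to 3 dimensions for readability.
train = [
    ([0.9, 0.1, 0.2], "HCl"),  ([0.8, 0.2, 0.1], "HCl"),
    ([0.1, 0.9, 0.8], "NaOH"), ([0.2, 0.8, 0.9], "NaOH"),
]
```

A query vector near the acid cluster, e.g. `knn_classify(train, [0.85, 0.15, 0.15])`, votes with its three nearest training samples; the same scheme extends to concentration classes by labelling samples with (analyte, molarity) pairs.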
Pani, Danilo; Barabino, Gianluca; Citi, Luca; Meloni, Paolo; Raspopovic, Stanisa; Micera, Silvestro; Raffo, Luigi
2016-09-01
The control of upper limb neuroprostheses through the peripheral nervous system (PNS) can allow restoring motor functions in amputees. At present, the important aspect of the real-time implementation of neural decoding algorithms on embedded systems has often been overlooked, notwithstanding the impact that limited hardware resources have on the efficiency/effectiveness of any given algorithm. The present study addresses the optimization of a template-matching-based algorithm for PNS signal decoding, a milestone toward its full real-time implementation on a floating-point digital signal processor (DSP). The proposed optimized real-time algorithm achieves up to 96% correct classification on real PNS signals acquired through LIFE electrodes in animals, and can correctly sort spikes of a synthetic cortical dataset with sufficiently uncorrelated spike morphologies (93% average correct classification), comparable to the results obtained with a top spike sorter (94% on average on the same dataset). The power consumption enables more than 24 h of processing at maximum load, and a latency model has been derived to enable a fair performance assessment. The final embodiment demonstrates real-time performance on a low-power off-the-shelf DSP, opening the way to experiments exploiting the efferent signals to control a motor neuroprosthesis.
[Analgesia in intensive care medicine].
Ortlepp, J R; Luethje, F; Walz, R
2016-02-01
The administration of sedatives and analgesics on the intensive care unit (ICU) is routine daily practice. The correct discrimination between delirium, pain and anxiety or confusion is essential for the strategy and selection of medication. The correct pain therapy and sedation are essential for patient quality of life on the ICU and for the prognosis. The aim of this article is to present state of the art recommendations on the classification of pain and pain therapy on the ICU. An online search was carried out in PubMed for publications on the topics of "pain" and "ICU". Critical care patients are frequently subjected to many procedures and situations which can cause pain. The perception of pain is, among other things, influenced by the degree of orientation, anxiety and the degree of sedation. The administration of analgesics and non-pharmacological approaches are effective in reducing the stress perceived by patients. The main aim is improvement in the awareness of nursing and medical personnel for pain inducers and pain perception in ICU patients. The classification of pain must be made objectively. Therapeutic targets must be defined and in addition to the correct selection of pain medication, non-pharmacological approaches must also be consistently implemented.
NASA Astrophysics Data System (ADS)
Dondurur, Mehmet
The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. 
Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired of areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
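The two accuracy indices used throughout this analysis, percent correctly classified and Cohen's Kappa, both come straight from the confusion matrix; a small sketch (the function name is illustrative):

```python
def accuracy_indices(confusion):
    """PCC and Cohen's Kappa from a square confusion matrix
    (rows = reference classes, columns = classified classes)."""
    m = len(confusion)
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(m)) / n      # observed agreement
    pe = sum(                                            # chance agreement
        sum(confusion[j]) * sum(row[j] for row in confusion) / n ** 2
        for j in range(m)
    )
    kappa = (po - pe) / (1 - pe)
    return 100 * po, kappa
```

For a balanced two-class matrix such as `[[45, 5], [5, 45]]` this gives PCC = 90% but Kappa = 0.8, showing why Kappa is the stricter index: it discounts the agreement expected by chance.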
NASA Astrophysics Data System (ADS)
Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona
2018-01-01
Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance in the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach to account for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) using a self-organizing map classification. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
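Conventional quantile mapping, the baseline method in this comparison, can be sketched as an empirical CDF lookup (a simplified illustration with made-up function names; the SP-conditioned variant would apply the same mapping separately within each synoptic-pattern class):

```python
import bisect

def quantile_map(value, model_sample, obs_sample):
    """Empirical quantile mapping: find the non-exceedance quantile of
    `value` in the model sample, return that quantile of the observations."""
    m = sorted(model_sample)
    o = sorted(obs_sample)
    q = bisect.bisect_right(m, value) / len(m)   # model CDF at `value`
    idx = min(int(q * len(o)), len(o) - 1)       # matching observed rank
    return o[idx]
```

For example, a model value sitting at the model median is replaced by the observed median: `quantile_map(5, list(range(1, 11)), list(range(11, 21)))` returns 16. The study's point is that applying one such mapping to all rainfall ignores shifts between event types; conditioning the (model_sample, obs_sample) pair on the SP class is the proposed refinement.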
Shadow detection and removal in RGB VHR images for land use unsupervised classification
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2016-09-01
Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows actually limit benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to the CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so called "anisotropic Procrustes" and the "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps, have been compared to each other and with a supervised clustering procedure.
Comparison of four approaches to a rock facies classification problem
Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.
2007-01-01
In this study, seven classifiers based on four different approaches were tested in a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward back-propagating artificial neural network. Determining the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field, in Southwest Kansas, was the objective. Study data include 3600 samples with known rock facies class (from core), with each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated), and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.
Weight-elimination neural networks applied to coronary surgery mortality prediction.
Ennett, Colleen M; Frize, Monique
2003-06-01
The objective was to assess the effectiveness of the weight-elimination cost function in improving the classification performance of artificial neural networks (ANNs) and to observe how changing the a priori distribution of the training set affects network performance. Backpropagation feedforward ANNs with and without weight-elimination estimated mortality for coronary artery surgery patients. The ANNs were trained and tested on cases with 32 input variables describing the patient's medical history; the output variable was in-hospital mortality (mortality rates: training 3.7%, test 3.8%). Artificial training sets with mortality rates of 20%, 50%, and 80% were created to observe the impact of training with a higher-than-normal prevalence. When the results were averaged, weight-elimination networks achieved higher sensitivity rates than those without weight-elimination. Networks trained on higher-than-normal prevalence achieved higher sensitivity rates at the cost of lower specificity and correct classification. The weight-elimination cost function can improve the classification performance when the network is trained with a higher-than-normal prevalence. A network trained with a moderately high artificial mortality rate (20%) can improve the sensitivity of the model without significantly affecting other aspects of the model's performance. The ANN mortality model achieved performance comparable to additive and statistical models for coronary surgery mortality estimation in the literature.
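The weight-elimination cost function, commonly attributed to Weigend et al., adds to the training error a penalty that saturates for large weights, so it prunes small weights toward zero without strongly shrinking the weights the network relies on. A sketch of the penalty and its per-weight gradient contribution (the default `lam` and `w0` values are illustrative, not from this study):

```python
def weight_elimination_penalty(weights, lam=0.01, w0=1.0):
    """Penalty added to the error: lam * sum (w/w0)^2 / (1 + (w/w0)^2).
    Each term grows like w^2 for |w| << w0 but approaches lam for |w| >> w0."""
    return lam * sum((w / w0) ** 2 / (1 + (w / w0) ** 2) for w in weights)

def penalty_gradient(w, lam=0.01, w0=1.0):
    """d(penalty)/dw for a single weight, added to the backprop gradient."""
    r = (w / w0) ** 2
    return lam * (2 * w / w0 ** 2) / (1 + r) ** 2
```

Because the gradient vanishes for both very small and very large weights, only mid-sized weights feel strong pressure, which is the mechanism behind the pruning effect the abstract credits for the improved sensitivity.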
NASA Astrophysics Data System (ADS)
Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong
2017-03-01
We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) with 11C-Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid-β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of the GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of the SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio of the image without degrading tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
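The ratio-based voxelwise PVC step described above (original PET scaled by the GTM-corrected image over its smoothed version) can be sketched in one dimension. The moving-average smoother below stands in for the paper's 6-mm Gaussian kernel, and all names are illustrative.

```python
import numpy as np

def smooth(img, width=3):
    """Simple moving-average smoother (stand-in for Gaussian smoothing)."""
    kernel = np.ones(width) / width
    return np.convolve(img, kernel, mode="same")

def voxelwise_pvc(pet, gtm_pvc, width=3):
    """corrected = pet * gtm_pvc / smooth(gtm_pvc), guarding against
    division by zero outside the brain mask."""
    denom = smooth(gtm_pvc, width)
    ratio = np.divide(gtm_pvc, denom, out=np.ones_like(denom),
                      where=denom > 0)
    return pet * ratio
```

Where the GTM image is locally uniform the ratio is ~1 and the PET is unchanged; near parcel boundaries the ratio redistributes counts, which is the intended voxelwise refinement of the region-based correction.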
Data preprocessing methods of FT-NIR spectral data for the classification of cooking oil
NASA Astrophysics Data System (ADS)
Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli
2014-12-01
This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling, and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method. The number of samples in each class was kept equal to 2/3 of the class with the minimum number of samples. A t-statistic was then employed as a variable selection method to determine which variables are significant for the classification models. The pre-processing strategies were evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance. The effects of the pre-processing methods, i.e. row scaling, column scaling, and single scaling with Standard Normal Variate, are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Discriminant Analysis.
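The Standard Normal Variate (SNV) transform named above is a simple row-wise operation: each spectrum is centred by its own mean and scaled by its own standard deviation. A minimal sketch (the samples-by-wavelengths array layout is an assumption):

```python
import numpy as np

def snv(spectra):
    """Apply SNV to each row of a 2-D array (samples x wavelengths).

    Each spectrum is corrected independently, which removes additive
    baseline shifts and multiplicative scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std
```

After SNV, every row has zero mean and unit sample standard deviation, so downstream PCA models compare spectral shape rather than overall intensity.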
NASA Astrophysics Data System (ADS)
Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.
2016-10-01
Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used classification algorithm Adaptive Coherence Estimator (ACE) is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations.
Additionally, generally useful wavelength ranges are determined and the optimal number of principal components is analyzed.
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
Ice/water Classification of Sentinel-1 Images
NASA Astrophysics Data System (ADS)
Korosov, Anton; Zakhvatkina, Natalia; Muckenhuber, Stefan
2015-04-01
Sea ice monitoring and classification relies heavily on synthetic aperture radar (SAR) imagery. These sensors record data at horizontal polarization only (RADARSAT-1), vertical polarization only (ERS-1 and ERS-2), or dual polarization (Radarsat-2, Sentinel-1). Many algorithms have been developed to discriminate sea ice types and open water using single-polarization images. Ice type classification, however, is still ambiguous in some cases. Sea ice classification in single-polarization SAR images has been attempted using various methods since the beginning of the ERS programme, but robust classification schemes that provide useful results across varying sea ice types and open water conditions using only SAR images tend not to be generally applicable in an operational regime. The new generation of SAR satellites can deliver images in several polarizations, which improves the prospects for developing sea ice classification algorithms. In this study we use dual-polarization data from Sentinel-1, i.e. HH (horizontally transmitted, horizontally received) and HV (horizontally transmitted, vertically received). This mode assembles a wide SAR image from several narrower SAR beams, resulting in a 500 x 500 km image with 50 m resolution. A non-linear scheme for classification of Sentinel-1 data has been developed. The processing identifies three classes, ice, calm water, and rough water, at 1 km spatial resolution. The raw sigma0 data in HH and HV polarization are first corrected for thermal and random noise by subtracting the background thermal noise level and smoothing the image with several filters. In the next step, texture characteristics are computed in a moving window using a Gray Level Co-occurrence Matrix (GLCM). A neural network is applied in the last step to process an array of the most informative texture characteristics and perform ice/water classification.
The main results are: * the most informative texture characteristics for sea ice classification were identified; * the best set of parameters, including the window size, the number of quantization levels of sigma0 values, and the co-occurrence distance, was found; * a support vector machine (SVM) was trained on the results of visual classification of 30 Sentinel-1 images. Despite the generally high accuracy of the neural network (95% true positive classification), problems with classification of newly formed young ice and rough water arise due to their similar average backscatter and texture. Other methods of smoothing and of computing texture characteristics (e.g. computing the GLCM from a variable-size window) are assessed. The developed scheme will be utilized in NRT processing of Sentinel-1 data at NERSC within the MyOcean2 project.
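The GLCM texture step used in this scheme can be illustrated with a minimal co-occurrence matrix and one Haralick-style feature (contrast). The moving-window handling and sigma0 quantization of the actual pipeline are omitted; this is only a sketch of the core computation.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0, symmetric=True):
    """Normalized co-occurrence matrix of quantized gray levels at
    pixel offset (dy, dx)."""
    image = np.asarray(image)
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            i, j = image[r, c], image[r + dy, c + dx]
            P[i, j] += 1
            if symmetric:
                P[j, i] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast feature: sum_ij (i - j)^2 * P[i, j]."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

In the described scheme such features, computed per moving window, form the input vector passed to the neural network classifier.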
Effect of various hallux valgus reconstruction on sesamoid location: a radiographic study.
Huang, Eddie H; Charlton, Timothy P; Ajayi, Samuel; Thordarson, David B
2013-01-01
The correction of sesamoid subluxation is an important component of hallux valgus reconstruction, with some surgeons feeling that the sesamoids can be pulled back under the first metatarsal head when imbricating the medial capsule during surgery. The purpose of this study was to radiographically assess the effect of an osteotomy on sesamoid location relative to the second metatarsal. This is a retrospective radiographic review of 165 patients with hallux valgus treated with reconstructive osteotomies. Patients were included if they underwent a scarf or basilar osteotomy for hallux valgus but were excluded if they had inflammatory arthropathy or lesser metatarsal osteotomy. A modified McBride soft tissue procedure was performed in conjunction with the basilar and scarf osteotomies. Each patient's preoperative and postoperative radiographs were evaluated for hallux valgus angle, intermetatarsal 1-2 angle, tibial sesamoid classification, and lateral sesamoid location relative to the second metatarsal. The greatest correction of both the hallux valgus and intermetatarsal 1-2 angles was achieved in basilar osteotomies (20.6 degrees and 9.7 degrees, respectively), then scarf osteotomies (14.4 degrees and 8.7 degrees, respectively). Basilar and scarf osteotomies both corrected medial sesamoid subluxation relative to the first metatarsal head an average of 2-3 classification stages. All osteotomies had minimal lateral sesamoid location change relative to the second metatarsal. The majority of sesamoid correction correlated with the intermetatarsal 1-2 correction. The concept that medial capsular plication pulls the sesamoids beneath the first metatarsal (ie, changes the location of the sesamoids relative to the second metatarsal) was not supported by our results. Level III, retrospective case series.
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.
Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan
2015-01-01
The Markov random field (MRF) model is an effective method for brain tissue classification and has been applied to MR image segmentation for decades. However, it falls short of the expected classification in MR images with intensity inhomogeneity because the bias field is not considered in the formulation. In this paper, we propose an interleaved method joining a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimate is based on the k-means algorithm in view of prior information on MRI. The proposed method has the salient advantage of overcoming the misclassifications of the non-interleaved MRF classification for MR images with intensity inhomogeneity. Experimental results on real and synthetic MR images have also demonstrated the effectiveness and advantages of our algorithm in contrast to other baseline methods.
Tan, Jin; Li, Rong; Jiang, Zi-Tao
2015-10-01
We report an application of data fusion for the chemometric classification of 135 canned samples of Chinese lager beers by manufacturer, based on the combination of fluorescence, UV, and visible spectroscopies. Right-angle synchronous fluorescence spectra (SFS) at three wavelength differences (Δλ=30, 60 and 80 nm) and visible spectra in the range 380-700 nm of undiluted beers were recorded. UV spectra in the range 240-400 nm of diluted beers were measured. A classification model was built using principal component analysis (PCA) and linear discriminant analysis (LDA). LDA with cross-validation showed that data fusion could achieve 78.5-86.7% correct classification (sensitivity), while the rates using individual spectroscopies ranged from 42.2% to 70.4%. The results demonstrated that the fluorescence, UV and visible spectroscopies complement each other, yielding a higher synergic effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
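The combination of spectroscopies described above amounts to low-level data fusion: autoscale each spectral block, then concatenate the blocks into one feature matrix before PCA/LDA modelling. A minimal sketch (block shapes and the scaling choice are assumptions, not the paper's exact protocol):

```python
import numpy as np

def autoscale(block):
    """Column-wise mean-centre and unit-variance scale one data block,
    so no single spectroscopy dominates by sheer intensity."""
    block = np.asarray(block, dtype=float)
    return (block - block.mean(axis=0)) / block.std(axis=0, ddof=1)

def fuse(blocks):
    """Concatenate autoscaled blocks (e.g. SFS, UV, visible) column-wise."""
    return np.hstack([autoscale(b) for b in blocks])
```

The fused matrix is what a PCA or LDA model would then be fitted on; scaling each block before concatenation is what lets the complementary techniques contribute comparably.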
Robust tissue classification for reproducible wound assessment in telemedicine environments
NASA Astrophysics Data System (ADS)
Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves
2010-04-01
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
LAPR: An experimental aircraft pushbroom scanner
NASA Technical Reports Server (NTRS)
Wharton, S. W.; Irons, J. I.; Heugel, F.
1980-01-01
A three-band Linear Array Pushbroom Radiometer (LAPR) was built and flown on an experimental basis by NASA at the Goddard Space Flight Center. The functional characteristics of the instrument and the methods used to preprocess the data, including radiometric correction, are described. The radiometric sensitivity of the instrument was tested and compared to that of the Thematic Mapper and the Multispectral Scanner. The radiometric correction procedure was evaluated quantitatively, using laboratory testing, and qualitatively, via visual examination of the LAPR test flight imagery. Although effective radiometric correction could not yet be demonstrated via laboratory testing, radiometric distortion did not preclude the visual interpretation or parallelepiped classification of the test imagery.
Effects of EPI distortion correction pipelines on the connectome in Parkinson's Disease
NASA Astrophysics Data System (ADS)
Galvis, Justin; Mezher, Adam F.; Ragothaman, Anjanibhargavi; Villalon-Reina, Julio E.; Fletcher, P. Thomas; Thompson, Paul M.; Prasad, Gautam
2016-03-01
Echo-planar imaging (EPI) is commonly used for diffusion-weighted imaging (DWI) but is susceptible to nonlinear geometric distortions arising from inhomogeneities in the static magnetic field. These inhomogeneities can be measured and corrected using a fieldmap image acquired during the scanning process. In studies where the fieldmap image is not collected, these distortions can be corrected, to some extent, by nonlinearly registering the diffusion image to a corresponding anatomical image, either a T1- or T2-weighted image. Here we compared two EPI distortion correction pipelines, both based on nonlinear registration, which were optimized for the particular weighting of the structural image registration target. The first pipeline used a 3D nonlinear registration to a T1-weighted target, while the second pipeline used a 1D nonlinear registration to a T2-weighted target. We assessed each pipeline in its ability to characterize high-level measures of brain connectivity in Parkinson's disease (PD) in 189 individuals (58 healthy controls, 131 people with PD) from the Parkinson's Progression Markers Initiative (PPMI) dataset. We computed a structural connectome (connectivity map) for each participant using regions of interest from a cortical parcellation combined with DWI-based whole-brain tractography. We evaluated test-retest reliability of the connectome for each EPI distortion correction pipeline using a second diffusion scan acquired directly after each participant's first. Finally, we used support vector machine (SVM) classification to assess how accurately each pipeline classified PD versus healthy controls using each participant's structural connectome.
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
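The reduction of decoding to classification described above can be illustrated on a toy example: a 3-bit repetition code whose syndrome-to-correction map is "learned" from labelled error samples. A lookup table stands in here for the paper's feedforward neural network; this is an analogy, not the surface-code decoder itself.

```python
from collections import Counter

def syndrome(codeword):
    """Parity checks z1 = b0 xor b1, z2 = b1 xor b2 for the 3-bit
    repetition code."""
    b = codeword
    return (b[0] ^ b[1], b[1] ^ b[2])

def train_decoder(samples):
    """Learn the most frequent error class for each observed syndrome,
    mimicking how a classifier generalizes from training data."""
    votes = {}
    for error, true_class in samples:
        votes.setdefault(syndrome(error), Counter())[true_class] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

# Single-bit-flip errors on the all-zero codeword, labelled by flipped bit.
samples = [((1, 0, 0), 0), ((0, 1, 0), 1), ((0, 0, 1), 2)]
decoder = train_decoder(samples)
```

The decoding problem becomes: given a syndrome (the classifier's input), output the most likely correction class; a neural network replaces the table when the syndrome space is too large to enumerate.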
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
An Ultrasonographic Periodontal Probe
NASA Astrophysics Data System (ADS)
Bertoncini, C. A.; Hinders, M. K.
2010-02-01
Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2014-01-01
The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial setting of the parameters for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for tested sentences uttered using three configurations of orthodontic appliances.
78 FR 33744 - Sedaxane; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
.... The following list of North American Industrial Classification System (NAICS) codes is not intended to... the data supporting the petition, EPA has corrected commodity definitions and recommended additional... exposure through drinking water and in residential settings, but does not include occupational exposure...
Effects of Weathering on TIR Spectra and Rock Classification
NASA Astrophysics Data System (ADS)
McDowell, M. L.; Hamilton, V. E.; Riley, D.
2006-03-01
Changes in mineralogy due to weathering are detectable in the TIR and cause misclassification of rock types. We survey samples over a range of lithologies and attempt to provide a method of correction for rock identification from weathered spectra.
Portable bench tester for piezo weigh-in-motion equipment : final report, June 2006.
DOT National Transportation Integrated Search
2006-06-01
The Ohio Department of Transportation's (ODOT) piezo weigh-in-motion (WIM) equipment must be tested for initial working operation and to ensure continued correct operation. Currently, the only available method to verify the vehicle classification par...
Johnson, LeeAnn K; Brown, Mary B; Carruthers, Ethan A; Ferguson, John A; Dombek, Priscilla E; Sadowsky, Michael J
2004-08-01
A horizontal, fluorophore-enhanced, repetitive extragenic palindromic-PCR (rep-PCR) DNA fingerprinting technique (HFERP) was developed and evaluated as a means to differentiate human from animal sources of Escherichia coli. Box A1R primers and PCR were used to generate 2,466 rep-PCR and 1,531 HFERP DNA fingerprints from E. coli strains isolated from fecal material from known human and 12 animal sources: dogs, cats, horses, deer, geese, ducks, chickens, turkeys, cows, pigs, goats, and sheep. HFERP DNA fingerprinting reduced within-gel grouping of DNA fingerprints and improved alignment of DNA fingerprints between gels, relative to that achieved using rep-PCR DNA fingerprinting. Jackknife analysis of the complete rep-PCR DNA fingerprint library, done using Pearson's product-moment correlation coefficient, indicated that animal and human isolates were assigned to the correct source groups with an 82.2% average rate of correct classification. However, when only unique isolates, those from a single animal having a unique DNA fingerprint, were examined, Jackknife analysis showed that isolates were assigned to the correct source groups with a 60.5% average rate of correct classification. The percentages of correctly classified isolates were about 15 and 17% greater for rep-PCR and HFERP, respectively, when analyses were done using the curve-based Pearson's product-moment correlation coefficient rather than the band-based Jaccard algorithm. Rarefaction analysis indicated that, despite the relatively large size of the known-source database, genetic diversity in E. coli is very great and most likely accounts for our inability to correctly classify many environmental E. coli isolates. Our data indicate that removal of duplicate genotypes within DNA fingerprint libraries, increased database size, proper methods of statistical analysis, and correct alignment of band data within and between gels improve the accuracy of microbial source tracking methods.
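The band-based Jaccard and curve-based Pearson similarities compared in this study can be sketched for two fingerprint intensity profiles (the values and the presence threshold are illustrative, not real rep-PCR data):

```python
def jaccard(a, b, threshold=0.0):
    """Band-based similarity: fraction of shared bands among all bands
    present in either profile (ignores intensity)."""
    bands_a = {i for i, v in enumerate(a) if v > threshold}
    bands_b = {i for i, v in enumerate(b) if v > threshold}
    return len(bands_a & bands_b) / len(bands_a | bands_b)

def pearson(a, b):
    """Curve-based similarity: product-moment correlation of the full
    intensity curves (uses band intensities, not just presence)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)
```

Because Pearson uses the whole densitometric curve rather than a binary band call, it is less sensitive to band-calling thresholds, which is consistent with the higher classification rates the study reports for the curve-based measure.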
Calès, Paul; Halfon, Philippe; Batisse, Dominique; Carrat, Fabrice; Perré, Philippe; Penaranda, Guillaume; Guyader, Dominique; d'Alteroche, Louis; Fouchard-Hubert, Isabelle; Michelet, Christian; Veillon, Pascal; Lambert, Jérôme; Weiss, Laurence; Salmon, Dominique; Cacoub, Patrice
2010-08-01
We compared 5 non-specific and 2 specific blood tests for liver fibrosis in HCV/HIV co-infection. Four hundred and sixty-seven patients were included in the derivation (n=183) or validation (n=284) populations. Within these populations, the diagnostic target, significant fibrosis (Metavir F ≥ 2), was found in 66% and 72% of the patients, respectively. Two new fibrosis tests, FibroMeter HICV and HICV test, were constructed in the derivation population. Unadjusted AUROCs in the derivation population were: APRI: 0.716, Fib-4: 0.722, Fibrotest: 0.778, Hepascore: 0.779, FibroMeter: 0.783, HICV test: 0.822, FibroMeter HICV: 0.828. AUROCs adjusted on classification and distribution of fibrosis stages in a reference population showed similar values in both populations. FibroMeter, FibroMeter HICV and HICV test had the highest correct classification rates in F0/1 and F3/4 (which account for high predictive values): 77-79% vs. 70-72% in the other tests (p=0.002). Reliable individual diagnosis based on predictive values ≥ 90% distinguished three test categories: poorly reliable: Fib-4 (2.4% of patients), APRI (8.9%); moderately reliable: Fibrotest (25.4%), FibroMeter (26.6%), Hepascore (30.2%); acceptably reliable: HICV test (40.2%), FibroMeter HICV (45.6%) (p<10⁻³ between tests). FibroMeter HICV classified all patients into four reliable diagnosis intervals (≤F1, F1±1, ≥F1, ≥F2) with an overall accuracy of 93% vs. 79% (p<10⁻³) for a binary diagnosis of significant fibrosis. Tests designed for HCV infections are less effective in HIV/HCV infections. A specific test, like FibroMeter HICV, was the most interesting test for diagnostic accuracy, correct classification profile, and a reliable diagnosis. With reliable diagnosis intervals, liver biopsy can therefore be avoided in all patients. Copyright 2010 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as detection method, and an industrial NIR scanner was applied and tested for large scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) Linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein was made based on NIR spectra and the estimated concentrations of protein were used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5–100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks where a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalence of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets to different product categories. Manual laborious grading can be avoided. Incidences of WB from different farms and flocks can be tracked and information can be used to understand and point out main causes for WB in the chicken production. This knowledge can be used to improve the production procedures and reduce today’s extensive occurrence of WB. PMID:28278170
Laser Raman detection for oral cancer based on a Gaussian process classification method
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-06-01
Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. Its incidence accounts for 80% of total oral cancer and has shown an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is often delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data, consisting of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal subjects, were collected by utilizing an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed, and the experimental results showed that the data obeyed a normal distribution. Moreover, the distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal subjects and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal subjects could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed, while the remaining ones were diagnosed as having oral gland carcinoma. For the classification of oral gland carcinoma versus oral squamous cell carcinoma, the correct ratio was 66.67% and the erroneously diagnosed percentage was 33.33%. The total sensitivity was 80% and the specificity was 100%, with a Matthews correlation coefficient (MCC) of 0.447. Considering the numerical results above, the technique shows promising application prospects and clinical value.
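A rough flavour of GP-based classification can be given with a kernel sketch. Full GP classification requires a non-Gaussian likelihood and an approximation such as Laplace's method; the shortcut below regresses on ±1 labels and thresholds the posterior mean at zero, which is a common simplification and not the paper's exact model. All data are synthetic one-dimensional "spectral features".

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "spectral feature" per sample (think: one Raman peak intensity).
# The two tissue classes overlap slightly; all data here are synthetic.
x_neg = rng.normal(-1.0, 0.5, size=40)   # label -1 (e.g. normal tissue)
x_pos = rng.normal(+1.0, 0.5, size=40)   # label +1 (e.g. carcinoma)
x_train = np.concatenate([x_neg, x_pos])
y_train = np.concatenate([-np.ones(40), np.ones(40)])

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential kernel between two 1-D arrays of scalar inputs."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length_scale ** 2))

# GP-style predictor: solve (K + sigma^2 I) alpha = y and use the sign of
# the posterior mean as the class decision ("label regression" shortcut,
# standing in for full GP classification with a Laplace approximation).
K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(y_train)), y_train)

def predict(x_new):
    """Posterior mean at new points; its sign gives the predicted class."""
    k_star = rbf_kernel(np.atleast_1d(np.asarray(x_new, dtype=float)), x_train)
    return k_star @ alpha

train_acc = np.mean(np.sign(predict(x_train)) == y_train)
```

The length scale and noise variance (0.5 and 0.1 here) are arbitrary choices; in a real GP pipeline they would be fitted by maximizing the marginal likelihood.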
A random forest model based classification scheme for neonatal amplitude-integrated EEG.
Chen, Weiting; Wang, Yu; Cao, Guitao; Chen, Guoqiang; Gu, Qiufang
2014-01-01
Modern medical advances have greatly increased the survival rate of infants, yet they remain in the higher-risk group for neurological problems later in life. For infants with encephalopathy or seizures, identifying the extent of brain injury is clinically challenging. Continuous amplitude-integrated electroencephalography (aEEG) monitoring offers a possibility to directly monitor the brain functional state of newborns over hours, and has seen increasing application in neonatal intensive care units (NICUs). This paper presents a novel combined feature set for aEEG and applies the random forest (RF) method to classify aEEG tracings. To that end, a series of experiments was conducted on 282 aEEG tracing cases (209 normal and 73 abnormal). Basic features, statistic features and segmentation features were extracted from both the tracing as a whole and the segmented recordings to form a combined feature set, which was then sent to a classifier. Feature significance, data segmentation, the optimization of RF parameters, and the problem of imbalanced datasets were examined through experiments. Experiments were also conducted to evaluate the performance of RF on aEEG signal classification, compared with several other widely used classifiers including SVM-Linear, SVM-RBF, ANN, Decision Tree (DT), Logistic Regression (LR), ML, and LDA. The combined feature set characterizes aEEG signals better than the basic, statistic or segmentation features alone. With the combined feature set, the proposed RF-based aEEG classification system achieved a correct rate of 92.52% and a high F1-score of 95.26%. Among the seven classifiers examined in our work, the RF method obtained the highest correct rate, sensitivity, specificity, and F1-score, meaning that RF outperforms all of the other classifiers considered here.
The results show that the proposed RF-based aEEG classification system with the combined feature set is efficient and helpful to better detect the brain disorders in newborns.
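The "combined feature set" idea above (whole-tracing statistics joined with per-segment statistics) can be sketched in a few lines. The tracing below is synthetic, and the specific statistics chosen are illustrative stand-ins for the paper's basic, statistic and segmentation features.

```python
import numpy as np

rng = np.random.default_rng(2)
# A synthetic stand-in for one aEEG amplitude tracing (values in µV); the
# distribution is arbitrary and only serves to exercise the feature code.
tracing = rng.normal(10.0, 2.0, size=3000)

def combined_features(x, n_segments=6):
    """Concatenate whole-tracing statistics with per-segment statistics,
    mirroring the idea of joining basic, statistic and segmentation features."""
    whole = [x.mean(), x.std(), x.min(), x.max(), np.median(x)]
    per_segment = []
    for seg in np.array_split(x, n_segments):
        per_segment.extend([seg.mean(), seg.std()])
    return np.array(whole + per_segment)

features = combined_features(tracing)   # 5 whole-tracing + 2*6 segment values
```

A feature vector like this would then be fed to the random forest (or any of the compared classifiers); the segmentation count is a tunable choice, as the paper's experiments on data segmentation suggest.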
NASA Astrophysics Data System (ADS)
Akay, S. S.; Sertel, E.
2016-06-01
Urban land cover/use changes such as urbanization and urban sprawl have been impacting urban ecosystems significantly; therefore, determining urban land cover/use changes is an important task for understanding trends and the status of urban ecosystems, supporting urban planning, and aiding decision-making for urban-based projects. High-resolution satellite images can be used to accurately, periodically and quickly map urban land cover/use and its changes over time. This paper aims to determine urban land cover/use changes in the Gaziantep city centre between 2010 and 2015 using object-based image analysis and high-resolution SPOT 5 and SPOT 6 images. A 2.5 m SPOT 5 image acquired on 5 June 2010 and a 1.5 m SPOT 6 image acquired on 7 July 2015 were used in this research to precisely determine land changes over the five-year period. In addition to the satellite images, various ancillary data, namely Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) maps, cadastral maps, OpenStreetMap data, road maps and land cover maps, were integrated into the classification process to produce high-accuracy urban land cover/use maps for these two years. Both images were geometrically corrected to fulfil the 1/10,000-scale geometric accuracy requirement. Decision-tree-based object-oriented classification was applied to identify twenty different urban land cover/use classes defined in the European Urban Atlas project. Not only the satellite images and image-derived indices but also the thematic maps were integrated into the decision-tree analysis to create rule sets for accurate mapping of each class. The rule sets for each satellite image involve spectral, spatial and geometric parameters to automatically produce an urban map of the city-centre region. The total area of each class in each year and the changes over the five-year period were determined, and change trends in terms of class transformations were presented. 
Classification accuracy assessment was conducted by creating a confusion matrix to illustrate the thematic accuracy of each class.
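The rule-set idea and the confusion-matrix assessment can both be illustrated with a toy sketch. The thresholds and class list below are invented for illustration; the actual rule sets involve many more spectral, spatial and geometric parameters and twenty Urban Atlas classes.

```python
import numpy as np

def classify_object(ndvi, ndwi):
    """Toy decision-tree rule set over index values. The thresholds (0.3,
    0.4) are illustrative placeholders, not those used in the paper."""
    if ndwi > 0.3:
        return "water"
    if ndvi > 0.4:
        return "vegetation"
    return "built-up"

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows = reference class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        m[idx[t], idx[p]] += 1
    return m

# Four hand-made (ndvi, ndwi) samples with known reference labels.
reference = ["water", "vegetation", "built-up", "vegetation"]
predicted = [classify_object(0.1, 0.5), classify_object(0.6, 0.0),
             classify_object(0.2, 0.1), classify_object(0.5, 0.2)]
cm = confusion_matrix(reference, predicted, ["water", "vegetation", "built-up"])
overall_accuracy = np.trace(cm) / cm.sum()
```

Per-class (producer's/user's) accuracies follow from the row and column sums of the same matrix, which is how the thematic accuracy of each class is reported.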
A technique for correcting ERTS data for solar and atmospheric effects
NASA Technical Reports Server (NTRS)
Rogers, R. H.; Peacock, K.; Shah, N. J.
1974-01-01
A technique is described by which ERTS investigators can obtain and utilize solar and atmospheric parameters to transform spacecraft radiance measurements into absolute target reflectance signatures. A radiant power measuring instrument (RPMI) and its use in determining the atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in processing ERTS CCTs to correct imagery for atmospheric effects are reviewed. Examples are given which demonstrate the nature and magnitude of atmospheric effects on computer classification programs.
NASA Astrophysics Data System (ADS)
Manohar, A. V.
2003-02-01
These lecture notes present some of the basic ideas of heavy quark effective theory. The topics covered include the classification of states, the derivation of the HQET Lagrangian at tree level, hadron masses, meson form factors, Luke's theorem, reparameterization invariance and inclusive decays. Radiative corrections are discussed in some detail, including an explicit computation of a matching correction for HQET. Borel summability, renormalons, and their connection with the QCD perturbation series are covered, as well as the use of the upsilon expansion to improve the convergence of the perturbation series.
On the Equidecomposability of a Regular Triangle and A Square of Equal Areas.
1984-01-01
It is shown that the solution of the problem of the title, as given in the first snapshot of H. Steinhaus' "Mathematical Snapshots", is not correct. AMS (MOS) Subject Classifications: 51-01, 51N20. Key Words: Equidecomposability of plane… …490985. This accuracy should be sufficient for the realization of the ingenious linkage of four polygons (1) as described by Steinhaus (see (2, Figure 1.3…
[Classifications in forensic medicine and their logical basis].
Kovalev, A V; Shmarov, L A; Ten'kov, A A
2014-01-01
The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.
Monitoring strip mining and reclamation with LANDSAT data in Belmont County, Ohio
NASA Technical Reports Server (NTRS)
Witt, R. G.; Schaal, G. M.; Bly, B. G.
1983-01-01
The utility of LANDSAT digital data for mapping and monitoring surface mines in Belmont County, Ohio was investigated. Two data sets from 1976 and 1979 were processed to classify level 1 land covers and three strip mine categories in order to examine change over time and assess reclamation efforts. The two classifications were compared with aerial photographs. Results of the accuracy assessment show that both classifications are approximately 86 per cent correct, and that surface mine change detection (date-to-date comparison) is facilitated by the digital format of LANDSAT data.
SSVEP-BCI implementation for 37-40 Hz frequency range.
Müller, Sandra Mara Torres; Diez, Pablo F; Bastos-Filho, Teodiano Freire; Sarcinelli-Filho, Mário; Mut, Vicente; Laciar, Eric
2011-01-01
This work presents a Brain-Computer Interface (BCI) based on Steady State Visual Evoked Potentials (SSVEP), using higher stimulus frequencies (>30 Hz). Using a statistical test and a decision tree, the real-time EEG recordings of six volunteers are analyzed, with the classification result updated every second. The BCI developed does not need any kind of settings or adjustments, which makes it more general. Offline results are presented, corresponding to a correct classification rate of up to 99% and an Information Transfer Rate (ITR) of up to 114.2 bits/min.
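The core of SSVEP detection is deciding which stimulus frequency dominates the EEG spectrum in each analysis window. The sketch below is a bare-bones power comparison over a 1 s window (matching the per-second update), not the paper's statistical test plus decision tree; the sampling rate, amplitudes and noise level are assumed values on synthetic data.

```python
import numpy as np

fs = 256                       # assumed sampling rate (Hz)
t = np.arange(fs) / fs         # one second of signal, per-second updates
stimulus_freqs = [37, 38, 39, 40]

rng = np.random.default_rng(3)
# Synthetic EEG: a weak 38 Hz SSVEP response buried in broadband noise.
eeg = 0.5 * np.sin(2 * np.pi * 38 * t) + rng.normal(0.0, 1.0, size=fs)

def detect_ssvep(x, fs, freqs):
    """Return the stimulus frequency whose FFT bin has the largest power.
    A 1 s window gives 1 Hz resolution, so each stimulus frequency falls
    exactly on a bin."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    bins = np.fft.rfftfreq(len(x), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(powers))]

detected = detect_ssvep(eeg, fs, stimulus_freqs)
```

A real system would add a significance test against background power (as the paper's statistical test does) so that windows with no SSVEP response produce no command.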
Atypia and DNA methylation in nipple duct lavage in relation to predicted breast cancer risk.
Euhus, David M; Bu, Dawei; Ashfaq, Raheela; Xie, Xian-Jin; Bian, Aihua; Leitch, A Marilyn; Lewis, Cheryl M
2007-09-01
Tumor suppressor gene (TSG) methylation is identified more frequently in random periareolar fine needle aspiration samples from women at high risk for breast cancer than from women at lower risk. It is not known whether TSG methylation or atypia in nipple duct lavage (NDL) samples is related to predicted breast cancer risk. 514 NDL samples obtained from 150 women selected to represent a wide range of breast cancer risk were evaluated cytologically and by quantitative multiplex methylation-specific PCR for methylation of cyclin D2, APC, HIN1, RASSF1A, and RAR-beta2. Based on methylation patterns and cytology, NDL retrieved cancer cells from only 9% of breasts ipsilateral to a breast cancer. Methylation of ≥2 genes correlated with marked atypia by univariate analysis, but not by multivariate analysis adjusted for sample cellularity and risk group classification. Both marked atypia and TSG methylation independently predicted abundant cellularity in multivariate analyses. Discrimination between Gail lower-risk ducts and Gail high-risk ducts was similar for marked atypia [odds ratio (OR), 3.48; P = 0.06] and measures of TSG methylation (OR, 3.51; P = 0.03). However, marked atypia provided better discrimination between Gail lower-risk ducts and ducts contralateral to a breast cancer (OR, 6.91; P = 0.003, compared with methylation OR, 4.21; P = 0.02). TSG methylation in NDL samples does not predict marked atypia after correcting for sample cellularity and risk group classification. Rather, both methylation and marked atypia are independently associated with highly cellular samples, Gail model risk classifications, and a personal history of breast cancer. This suggests the existence of related, but independent, pathogenic pathways in breast epithelium.
The Neuropsychology of Male Adults With High-Functioning Autism or Asperger Syndrome†
Wilson, C Ellie; Happé, Francesca; Wheelwright, Sally J; Ecker, Christine; Lombardo, Michael V; Johnston, Patrick; Daly, Eileen; Murphy, Clodagh M; Spain, Debbie; Lai, Meng-Chuan; Chakrabarti, Bhismadev; Sauter, Disa A; Baron-Cohen, Simon; Murphy, Declan G M
2014-01-01
Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups. Second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity, and third, whether cognitive skill level was correlated with degree of comorbid psychopathology. Fourth, cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA) were compared. After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (P's < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correction for multiple comparisons, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history. Autism Res 2014, 7: 568–581. © 2014 International Society for Autism Research, Wiley Periodicals, Inc. PMID:24903974
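The SVM step above (12 cognitive variables classifying case versus control) can be sketched with a minimal linear SVM trained by batch subgradient descent on the hinge loss. The group sizes, effect size and data below are entirely synthetic stand-ins; the study's SVM, kernel and validation scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for 12 cognitive variables in two groups of 100
# "participants" whose profile means differ. Purely illustrative data.
n, d = 100, 12
group_a = rng.normal(0.0, 1.0, size=(n, d)) + 0.8   # e.g. "case" profiles
group_b = rng.normal(0.0, 1.0, size=(n, d)) - 0.8   # e.g. "control" profiles
X = np.vstack([group_a, group_b])
y = np.concatenate([np.ones(n), -np.ones(n)])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Batch subgradient descent on the L2-regularized hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    m = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                         # margin violators
        grad_w = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / m
        grad_b = -y[active].sum() / m
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear_svm(X, y)
train_acc = np.mean(np.sign(X @ w + b) == y)
```

Note this reports training accuracy only; a study-grade estimate like the 81% above requires held-out or cross-validated evaluation to avoid optimistic bias.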