MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering
Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu
2009-01-01
Background Uncovering subtypes of disease from microarray samples has important clinical implications, such as differences in survival time and in the sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of high-dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies the clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we devised a new entropy plot to control the separation of singletons or small clusters. MULTI-K, unlike simple k-means or other widely used methods, was able to accurately capture clusters with complex, high-dimensional structures. MULTI-K outperformed other methods, including a recently developed ensemble clustering algorithm, in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
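The co-membership idea behind MULTI-K can be illustrated with a short sketch: run k-means many times while varying k, count how often each pair of samples lands in the same cluster, and keep the pairings that survive most runs. This is a minimal NumPy reconstruction for toy data, not the authors' C++ implementation; the choice of k values, the number of runs, and the 0.8 robustness threshold are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    # Minimal Lloyd's algorithm: returns a cluster label for each row of X.
    rng = rng or np.random.default_rng()
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def co_membership(X, ks=(2, 3, 4, 5), runs_per_k=10, seed=0):
    # Fraction of runs in which each pair of samples falls in the same cluster.
    rng = np.random.default_rng(seed)
    n = len(X)
    M = np.zeros((n, n))
    total = 0
    for k in ks:
        for _ in range(runs_per_k):
            labels = kmeans(X, k, rng=rng)
            M += (labels[:, None] == labels[None, :])
            total += 1
    return M / total

# Two well-separated toy blobs of 20 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
M = co_membership(X)
robust = M > 0.8   # pairs that co-cluster in >80% of runs
```

On this toy data, within-blob pairs co-cluster in far more runs than cross-blob pairs, so thresholding the co-membership matrix recovers the two groups.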
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 12 2013-01-01 2013-01-01 false Classifications for Multi-Family Residential Rehabilitation Work K Exhibit K to Subpart A of Part 1924 Agriculture Regulations of the Department of... Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. K Exhibit K to Subpart...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 12 2012-01-01 2012-01-01 false Classifications for Multi-Family Residential Rehabilitation Work K Exhibit K to Subpart A of Part 1924 Agriculture Regulations of the Department of... Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. K Exhibit K to Subpart...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 12 2011-01-01 2011-01-01 false Classifications for Multi-Family Residential Rehabilitation Work K Exhibit K to Subpart A of Part 1924 Agriculture Regulations of the Department of... Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. K Exhibit K to Subpart...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 12 2010-01-01 2010-01-01 false Classifications for Multi-Family Residential Rehabilitation Work K Exhibit K to Subpart A of Part 1924 Agriculture Regulations of the Department of... Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. K Exhibit K to Subpart...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 12 2014-01-01 2013-01-01 true Classifications for Multi-Family Residential Rehabilitation Work K Exhibit K to Subpart A of Part 1924 Agriculture Regulations of the Department of... Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. K Exhibit K to Subpart...
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
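The kNN vote at the core of such a segmentation pipeline is simple to sketch. The toy example below classifies voxels by a single intensity feature; the training samples stand in for the atlas-selected voxels described above, and the feature values and class labels are invented for illustration.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query_X, k=5):
    # Plain k-nearest-neighbour majority vote in feature space
    # (e.g. MR intensities).
    preds = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        preds.append(Counter(nearest).most_common(1)[0][0])
    return np.array(preds)

# Toy 1-D intensity features standing in for CSF/GM/WM training samples
# (in the paper these would come from atlas-registered voxels, not from
# manual labelling; the numbers here are illustrative only).
train_X = np.array([[0.1], [0.15], [0.5], [0.55], [0.9], [0.95]])
train_y = np.array(['CSF', 'CSF', 'GM', 'GM', 'WM', 'WM'])
print(knn_classify(train_X, train_y, np.array([[0.12], [0.52], [0.93]]), k=3))
# → ['CSF' 'GM' 'WM']
```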
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
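The AUC metric used above has a convenient rank-based form: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch, with made-up classifier scores:

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    # the probability that a random positive outranks a random negative.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
print(auc(np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1]), labels))  # → 1.0
print(auc(np.array([0.9, 0.4, 0.7, 0.8, 0.2, 0.1]), labels))  # 7/9: one positive outranked
```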
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien
2012-12-01
With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
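Calibration of class probabilities, as emphasized above, is commonly checked with a reliability table: bin the predicted probabilities and compare each bin's mean prediction with the observed event frequency. The sketch below uses synthetic predictions; the bin count and data are illustrative only.

```python
import numpy as np

def reliability_table(probs, outcomes, n_bins=5):
    # Group predictions into probability bins and compare the mean predicted
    # probability in each bin with the observed event frequency; for a
    # well-calibrated classifier the two columns agree.
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    table = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            table.append((probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return table

# Toy predictions: the low bin fires ~10% of the time, the high bin ~90%,
# so the classifier is (by construction) roughly calibrated.
probs = np.array([0.05] * 10 + [0.95] * 10)
outcomes = np.array([0] * 9 + [1] + [1] * 9 + [0])
for mean_p, freq, n in reliability_table(probs, outcomes):
    print(f"predicted {mean_p:.2f}  observed {freq:.2f}  n={n}")
```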
Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification
NASA Astrophysics Data System (ADS)
Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.
2017-12-01
We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October 2012 and June 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short-Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining ~20k test examples. Leveraging the decisions from a group of stations that detected the same event, by taking the median of all classifications in the group, increases the model accuracy to 96%. Additional data with equivalent processing, from 500 more recently cataloged events (July 2017), achieve the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal levels of re-training through time.
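The multi-station median vote described above is a one-liner in practice. The per-station probabilities below are invented for illustration; the point is that a single mislabeling station does not flip the event-level decision.

```python
import numpy as np

# Per-station "quarry blast" probabilities for the same event from five
# stations (illustrative numbers); the median damps single-station errors.
station_probs = np.array([0.91, 0.88, 0.12, 0.95, 0.90])
event_label = 'quarry blast' if np.median(station_probs) > 0.5 else 'earthquake'
print(event_label)  # → quarry blast: the one outlier station is outvoted
```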
Singha, Mrinal; Wu, Bingfang; Zhang, Miao
2016-01-01
Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification. PMID:28025525
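The overall accuracy and kappa coefficient reported above can be reproduced from any confusion matrix with a few lines. The matrix below is a made-up two-class (paddy / non-paddy) example, not the study's data:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    # cm[i, j] = pixels of true class i assigned to class j.
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # agreement expected by chance
    return po, (po - pe) / (1 - pe)            # OA and Cohen's kappa

cm = [[80, 20], [10, 90]]  # illustrative paddy / non-paddy confusion matrix
oa, kappa = overall_accuracy_and_kappa(cm)
print(oa, kappa)  # → 0.85 and 0.7 (up to float rounding)
```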
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry
2018-01-01
The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: Random forest (RF), Support vector machine (SVM), Logistic regression (LR) and k-Nearest neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate (40%) classification model. Overall, it can be concluded that consumer-grade digital cameras can be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the diet the fish received during cultivation. These procedures can be used as non-invasive, accurate and precise approaches for monitoring fish status during cultivation by evaluating a diet's effects on fish skin. PMID:29596375
A Coupled k-Nearest Neighbor Algorithm for Multi-Label Classification
2015-05-22
classification, an image may contain several concepts simultaneously, such as beach, sunset and kangaroo. Such tasks are usually denoted as multi-label...informatics, a gene can belong to both the metabolism and transcription classes; and in music categorization, a song may be labeled as Mozart and sad. In the
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
A novel approach to internal crown characterization for coniferous tree species classification
NASA Astrophysics Data System (ADS)
Harikumar, A.; Bovolo, F.; Bruzzone, L.
2016-10-01
Knowledge about individual trees in a forest is highly beneficial for forest management. High-density, small-footprint, multi-return airborne Light Detection and Ranging (LiDAR) data can provide very accurate information about the structural properties of individual trees in forests. Every tree species has a unique set of crown structural characteristics that can be used for tree species classification. In this paper, we use both the internal and the external crown structural information of a conifer tree crown, derived from a high-density small-footprint multi-return LiDAR acquisition, for species classification. Considering that branches are the major building blocks of a conifer tree crown, we obtain the internal crown structural information using a branch-level analysis. The structure of each conifer branch is represented using clusters in the LiDAR point cloud. We propose the joint use of k-means clustering and geometric shape fitting, on the LiDAR data projected onto a novel 3-dimensional space, to identify branch clusters. After mapping the identified clusters back to the original space, six internal geometric features are estimated using a branch-level analysis. The external crown characteristics are modeled using the six least correlated features based on cone fitting and the convex hull. Species classification is performed using a sparse Support Vector Machine (sparse SVM) classifier.
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
Classification algorithm of lung lobe for lung disease cases based on multislice CT images
NASA Astrophysics Data System (ADS)
Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Mishima, M.; Ohmatsu, H.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2011-03-01
With the development of multi-slice CT technology, it has become possible to obtain an accurate 3D image of the lung field in a short time. To support this, many image processing methods need to be developed. In the clinical setting of lung cancer diagnosis, it is important to study and analyse lung structure. Classification of the lung into lobes therefore provides useful information for lung cancer analysis. In this report, we describe an algorithm that classifies the lungs into lung lobes for lung disease cases from multi-slice CT images. The classification of lung lobes is carried out efficiently using information on the lung blood vessels, bronchi, and interlobar fissures. Applying the classification algorithm to multi-slice CT images of 20 normal cases and 5 lung disease cases, we demonstrate the usefulness of the proposed algorithm.
Investigating the Complexity of NGC 2992 with HETG
NASA Astrophysics Data System (ADS)
Canizares, Claude
2009-09-01
NGC 2992 is a nearby (z = 0.00771) Seyfert galaxy with a variable 1.5-2 classification. Over the past 30 years, the 2-10 keV continuum flux has varied by a factor of ~20. This was accompanied by complex variability in the multi-component Fe K line emission, which may indicate violent flaring activity in the innermost regions of the accretion disk. By observing NGC 2992 with the HETG, we will obtain the best constraint to date on the FWHM of the narrow, distant-matter Fe K line emission, along with precision measurement of its centroid energy, thereby enabling more accurate modeling of the variable broad component. We will also test models of the soft excess through measurement of narrow absorption lines attributable to a warm absorber and narrow emission lines arising from photoexcitation.
Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle
Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karupppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila
2012-01-01
Purpose To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied on calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for an automated calculation of the SAT and IMAT areas reducing considerably the total post-processing time. PMID:23097409
Modeling occupancy distribution in large spaces with multi-feature classification algorithm
Wang, Wei; Chen, Jiayu; Hong, Tianzhen
2018-04-07
Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and the occupancy information determines the cooperation among terminals. However, a person count at room level does not adequately optimize HVAC system operation, because the movement of occupants within the room creates an uneven load distribution. Without accurate knowledge of the occupants' spatial distribution, the uneven distribution of occupants often results in under-cooling/heating or over-cooling/heating in some thermal zones. Therefore, the lack of high-resolution occupancy distribution is often perceived as a bottleneck for future improvements to HVAC operation efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the experiment outcomes of three case studies were examined to validate detection accuracy. City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and the ground truth. The results show that the accuracy is over 71.4% when CBD = 1 and reaches up to 92.9% when CBD = 2.
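The CBD-based evaluation described above can be sketched directly: the CBD between a detected occupant-count vector and the ground truth is the Manhattan distance, and accuracy at a threshold is the fraction of snapshots within that distance. The zone counts below are invented for illustration.

```python
import numpy as np

def cbd(detected, truth):
    # City Block (Manhattan) distance between two occupant-count vectors,
    # one entry per thermal zone.
    return int(np.abs(np.asarray(detected) - np.asarray(truth)).sum())

def accuracy_within(detections, truths, max_cbd):
    # Fraction of snapshots whose detected distribution is within
    # max_cbd occupant-moves of the ground truth.
    hits = sum(cbd(d, t) <= max_cbd for d, t in zip(detections, truths))
    return hits / len(truths)

# Three illustrative snapshots over three zones.
truths =     [[3, 1, 0], [2, 2, 1], [0, 4, 1]]
detections = [[3, 1, 0], [2, 1, 2], [1, 2, 1]]
print(accuracy_within(detections, truths, 1))  # 1 of 3 snapshots (CBDs: 0, 2, 3)
print(accuracy_within(detections, truths, 2))  # 2 of 3 snapshots
```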
DOE Office of Scientific and Technical Information (OSTI.GOV)
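The two ingredients of this pipeline, a multi-feature k-NN classifier and a City Block Distance check against ground truth, can be sketched as follows. The beacon count, RSSI values, and two-zone layout are hypothetical, chosen only to make the example self-contained.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify one BLE RSSI fingerprint by majority vote among its k nearest
    training fingerprints (city-block distance in feature space)."""
    d = np.abs(train_X - x).sum(axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return Counter(nearest.tolist()).most_common(1)[0][0]

def cbd(detected, truth):
    """City Block Distance between two per-zone occupancy-count vectors."""
    return int(np.abs(np.asarray(detected) - np.asarray(truth)).sum())

# hypothetical training data: RSSI from 3 beacons, labelled with the zone (0 or 1)
train_X = np.array([[-40, -70, -80], [-42, -72, -78],   # fingerprints taken in zone 0
                    [-75, -45, -60], [-78, -41, -62]])  # fingerprints taken in zone 1
train_y = np.array([0, 0, 1, 1])

# two detected occupants; classify each and count occupants per zone
preds = [knn_predict(train_X, train_y, np.array(q))
         for q in ([-41, -71, -79], [-76, -44, -61])]
counts = np.bincount(preds, minlength=2)
```

A CBD of 0 between `counts` and the ground-truth zone counts corresponds to a perfect detection; the paper's accuracy figures report how often CBD stays at or below 1 or 2.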
Generalizing DTW to the multi-dimensional case requires an adaptive approach
Hu, Bing; Jin, Hongxia; Wang, Jun; Keogh, Eamonn
2017-01-01
In recent years Dynamic Time Warping (DTW) has emerged as the distance measure of choice for virtually all time series data mining applications. For example, virtually all applications that process data from wearable devices use DTW as a core sub-routine. This is the result of significant progress in improving DTW’s efficiency, together with multiple empirical studies showing that DTW-based classifiers at least equal (and generally surpass) the accuracy of all their rivals across dozens of datasets. Thus far, most of the research has considered only the one-dimensional case, with practitioners generalizing to the multi-dimensional case in one of two ways, dependent or independent warping. In general, it appears the community believes either that the two ways are equivalent, or that the choice is irrelevant. In this work, we show that this is not the case. The two most commonly used multi-dimensional DTW methods can produce different classifications, and neither one dominates over the other. This seems to suggest that one should learn the best method for a particular application. However, we will show that this is not necessary; a simple, principled rule can be used on a case-by-case basis to predict which of the two methods we should trust at the time of classification. Our method allows us to ensure that classification results are at least as accurate as the better of the two rival methods, and, in many cases, our method is significantly more accurate. We demonstrate our ideas with the most extensive set of multi-dimensional time series classification experiments ever attempted. PMID:29104448
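The dependent/independent distinction can be made concrete with a toy implementation: DTW_I warps each dimension separately and sums the distances, while DTW_D forces all dimensions through one shared warping path. This is a didactic sketch, not the paper's optimised code; note that DTW_I can never exceed DTW_D, since independent warping has strictly more freedom.

```python
import numpy as np

def dtw(a, b, dist):
    """Classic O(nm) dynamic-programming DTW with a pluggable point distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_independent(A, B):
    """DTW_I: every dimension is warped on its own; the distances are summed."""
    return sum(dtw(A[:, d], B[:, d], lambda x, y: (x - y) ** 2) for d in range(A.shape[1]))

def dtw_dependent(A, B):
    """DTW_D: all dimensions share a single warping path."""
    return dtw(A, B, lambda x, y: float(((x - y) ** 2).sum()))

A = np.array([[0., 0.], [1., 2.], [2., 1.]])
B = np.array([[1., 2.], [2., 1.], [0., 0.]])  # A cyclically shifted by one step
d_ind, d_dep = dtw_independent(A, B), dtw_dependent(A, B)
```

The two measures generally disagree on which of two candidate neighbours is closer, which is exactly why neither dominates as a classifier.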
Canizo, Brenda V; Escudero, Leticia B; Pérez, María B; Pellerano, Roberto G; Wuilloud, Rodolfo G
2018-03-01
The feasibility of the application of chemometric techniques associated with multi-element analysis for the classification of grape seeds according to the soil of their provenance vineyard was investigated. Grape seed samples from different localities of Mendoza province (Argentina) were evaluated. Inductively coupled plasma mass spectrometry (ICP-MS) was used for the determination of twenty-nine elements (Ag, As, Ce, Co, Cs, Cu, Eu, Fe, Ga, Gd, La, Lu, Mn, Mo, Nb, Nd, Ni, Pr, Rb, Sm, Te, Ti, Tl, Tm, U, V, Y, Zn and Zr). Once the analytical data were collected, supervised pattern recognition techniques such as linear discriminant analysis (LDA), partial least square discriminant analysis (PLS-DA), k-nearest neighbors (k-NN), support vector machine (SVM) and Random Forest (RF) were applied to construct classification/discrimination rules. The results indicated that the nonlinear methods, RF and SVM, performed best, with accuracy rates of up to 98% and 93%, respectively, and therefore are excellent tools for classification of grapes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
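The sparsity mechanism can be illustrated with the standard soft-thresholding operator, which is the building block behind "soft-thresholding type penalties". This is a generic sketch of the operator itself; in the paper the penalties act inside the Crammer-Singer SVM optimisation, not as a standalone post-processing step.

```python
import numpy as np

def soft_threshold(w, lam):
    """Soft-thresholding operator: shrinks every coefficient toward zero and
    sets those with |w| <= lam exactly to zero, enforcing sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# hypothetical gene weights; after thresholding only 3 genes remain selected
w = np.array([2.5, -0.3, 0.0, 1.1, -4.0])
sparse_w = soft_threshold(w, lam=0.5)
```

Genes whose weights survive the shrinkage are the ones retained for building the classification rule.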
Remote sensing imagery classification using multi-objective gravitational search algorithm
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2016-10-01
Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI), thereby achieving high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first implemented to extract texture features of the RSI. Then, the texture features are fused with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out on the basis of the proposed method. To be specific, cluster centers are randomly generated initially. After that, the cluster centers are updated and optimized adaptively by employing the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained. Therefore, a number of classification results of the RSI are produced, and users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate the effectiveness of the proposed method, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results were compared with those produced by two single-cluster-validity-index-based methods and two state-of-the-art multi-objective-optimization-based classification methods. Comparison results show that the proposed method can achieve more accurate RSI classification.
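One of the two objectives, the Xie-Beni index, can be sketched for crisp partitions as within-cluster scatter over the minimum squared center separation (lower is better). The data points, centers, and labels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def xie_beni(X, centers, labels):
    """Crisp Xie-Beni index: total within-cluster scatter divided by n times the
    minimum squared distance between any two cluster centers (lower is better)."""
    compact = sum(((X[labels == k] - c) ** 2).sum() for k, c in enumerate(centers))
    seps = [((centers[j] - centers[k]) ** 2).sum()
            for j in range(len(centers)) for k in range(j + 1, len(centers))]
    return compact / (len(X) * min(seps))

# two tight, well-separated clusters give a very small XB value
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.05, 0.0], [5.05, 5.0]])
labels = np.array([0, 0, 1, 1])
```

In the paper this index is minimised jointly with the FCM Jm measure, so the optimiser returns a front of non-dominated center sets rather than a single clustering.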
SVM-Fold: a tool for discriminative multi-class protein fold and superfamily recognition
Melvin, Iain; Ie, Eugene; Kuang, Rui; Weston, Jason; Noble, William Stafford; Leslie, Christina
2007-01-01
Background Predicting a protein's structural class from its amino acid sequence is a fundamental problem in computational biology. Much recent work has focused on developing new representations for protein sequences, called string kernels, for use with support vector machine (SVM) classifiers. However, while some of these approaches exhibit state-of-the-art performance at the binary protein classification problem, i.e. discriminating between a particular protein class and all other classes, few of these studies have addressed the real problem of multi-class superfamily or fold recognition. Moreover, there are only limited software tools and systems for SVM-based protein classification available to the bioinformatics community. Results We present a new multi-class SVM-based protein fold and superfamily recognition system and web server called SVM-Fold, which can be found at . Our system uses an efficient implementation of a state-of-the-art string kernel for sequence profiles, called the profile kernel, where the underlying feature representation is a histogram of inexact matching k-mer frequencies. We also employ a novel machine learning approach to solve the difficult multi-class problem of classifying a sequence of amino acids into one of many known protein structural classes. Binary one-vs-the-rest SVM classifiers that are trained to recognize individual structural classes yield prediction scores that are not comparable, so that standard "one-vs-all" classification fails to perform well. Moreover, SVMs for classes at different levels of the protein structural hierarchy may make useful predictions, but one-vs-all does not try to combine these multiple predictions. To deal with these problems, our method learns relative weights between one-vs-the-rest classifiers and encodes information about the protein structural hierarchy for multi-class prediction. 
In large-scale benchmark results based on the SCOP database, our code weighting approach significantly improves on the standard one-vs-all method for both the superfamily and fold prediction in the remote homology setting and on the fold recognition problem. Moreover, our code weight learning algorithm strongly outperforms nearest-neighbor methods based on PSI-BLAST in terms of prediction accuracy on every structure classification problem we consider. Conclusion By combining state-of-the-art SVM kernel methods with a novel multi-class algorithm, the SVM-Fold system delivers efficient and accurate protein fold and superfamily recognition. PMID:17570145
Method of Grassland Information Extraction Based on Multi-Level Segmentation and CART Model
NASA Astrophysics Data System (ADS)
Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.
2018-04-01
It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in a single segmentation mode. The CART model was established based on spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as an approximate truth value, alongside a comparison with the nearest-neighbor supervised classification method. The experimental results showed that the total classification precision and the Kappa coefficient of the proposed method were 95% and 0.90, respectively, compared with 80% and 0.56 for the nearest-neighbor supervised classification method. This suggests that the classification accuracy of the proposed method is higher than that of the nearest-neighbor supervised classification method. The experiment demonstrated that the proposed method is an effective means of grassland information extraction, which can enhance the boundary of grassland classification and avoid the restriction of grassland distribution scale. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land and water bodies.
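The heart of a CART model is the impurity-minimising split. A minimal sketch of one such split on a single hypothetical feature (the NDVI values and labels below are illustrative, not from the paper):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def best_split(x, y):
    """Exhaustive CART-style search for the threshold on one feature that
    minimises the weighted Gini impurity of the two children."""
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

# hypothetical NDVI values: grassland tends to have higher NDVI than bare soil
ndvi = np.array([0.1, 0.15, 0.2, 0.6, 0.65, 0.7])
label = np.array([0, 0, 0, 1, 1, 1])   # 0 = non-grass, 1 = grass
threshold, impurity = best_split(ndvi, label)
```

A full CART tree recurses this search over all features (here, spectral and texture features of the segmentation objects) until the leaves are pure enough.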
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development on clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
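The flavour of a size constraint in the k-means objective can be sketched as a penalised assignment step, where each point's cost grows with the current size of the candidate cluster. This is an illustration of the idea only, not the authors' exact adaptive objective function.

```python
import numpy as np

def penalized_assign(X, centers, lam):
    """One greedy assignment pass: each image pays its squared distance to a
    center plus lam times that cluster's current size, which discourages very
    uneven class sizes (a sketch of the constraint idea, not the paper's method)."""
    sizes = np.zeros(len(centers))
    labels = np.empty(len(X), dtype=int)
    for i, x in enumerate(X):
        cost = ((centers - x) ** 2).sum(axis=1) + lam * sizes
        labels[i] = int(np.argmin(cost))
        sizes[labels[i]] += 1
    return labels

# degenerate demo: six identical "images" and two nearby centers
X = np.zeros((6, 2))
centers = np.array([[0.0, 0.0], [0.1, 0.0]])
free = penalized_assign(X, centers, lam=0.0)       # classic k-means assignment
balanced = penalized_assign(X, centers, lam=1.0)   # size-penalized assignment
```

With lam = 0 all points collapse into one class; with the penalty switched on, the assignment balances the two classes, which is the behaviour the adaptive constraint is designed to encourage.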
Dillon, Michael P; Major, Matthew J; Kaluf, Brian; Balasanov, Yuri; Fatone, Stefania
2018-04-01
While Amputee Mobility Predictor scores differ between Medicare Functional Classification Levels (K-level), this does not demonstrate that the Amputee Mobility Predictor can accurately predict K-level. To determine how accurately K-level could be predicted using the Amputee Mobility Predictor in combination with patient characteristics for persons with transtibial and transfemoral amputation. Prediction. A cumulative odds ordinal logistic regression was built to determine the effect that the Amputee Mobility Predictor, in combination with patient characteristics, had on the odds of being assigned to a particular K-level in 198 people with transtibial or transfemoral amputation. For people assigned to the K2 or K3 level by their clinician, the Amputee Mobility Predictor predicted the clinician-assigned K-level more than 80% of the time. For people assigned to the K1 or K4 level by their clinician, the prediction of clinician-assigned K-level was less accurate. The odds of being in a higher K-level improved with younger age and transfemoral amputation. Ordinal logistic regression can be used to predict the odds of being assigned to a particular K-level using the Amputee Mobility Predictor and patient characteristics. This pilot study highlighted critical method design issues, such as potential predictor variables and sample size requirements for future prospective research. Clinical relevance This pilot study demonstrated that the odds of being assigned a particular K-level could be predicted using the Amputee Mobility Predictor score and patient characteristics. While the model seemed sufficiently accurate to predict clinician assignment to the K2 or K3 level, further work is needed in larger and more representative samples, particularly for people with low (K1) and high (K4) levels of mobility, to be confident in the model's predictive value prior to use in clinical practice.
Bryan, Kenneth; Cunningham, Pádraig
2008-01-01
Background Microarrays have the capacity to measure the expressions of thousands of genes in parallel over many experimental samples. The unsupervised classification technique of bicluster analysis has been employed previously to uncover gene expression correlations over subsets of samples with the aim of providing a more accurate model of the natural gene functional classes. This approach also has the potential to aid functional annotation of unclassified open reading frames (ORFs). Until now this aspect of biclustering has been under-explored. In this work we illustrate how bicluster analysis may be extended into a 'semi-supervised' ORF annotation approach referred to as BALBOA. Results The efficacy of the BALBOA ORF classification technique is first assessed via cross validation and compared to a multi-class k-Nearest Neighbour (kNN) benchmark across three independent gene expression datasets. BALBOA is then used to assign putative functional annotations to unclassified yeast ORFs. These predictions are evaluated using existing experimental and protein sequence information. Lastly, we employ a related semi-supervised method to predict the presence of novel functional modules within yeast. Conclusion In this paper we demonstrate how unsupervised classification methods, such as bicluster analysis, may be extended using available annotations to form semi-supervised approaches within the gene expression analysis domain. We show that such methods have the potential to improve upon supervised approaches and shed new light on the functions of unclassified ORFs and their co-regulation. PMID:18831786
NASA Astrophysics Data System (ADS)
Iwahashi, J.; Yamazaki, D.; Matsuoka, M.; Thamarux, P.; Herrick, J.; Yong, A.; Mital, U.
2017-12-01
A seamless model of landform classifications with regional accuracy will be a powerful platform for geophysical studies that forecast geologic hazards. Spatial variability as a function of landform on a global scale was captured in the automated classifications of Iwahashi and Pike (2007) and additional developments are presented here that incorporate more accurate depictions using higher-resolution elevation data than the original 1-km scale Shuttle Radar Topography Mission digital elevation model (DEM). We create polygon-based terrain classifications globally by using the 280-m DEM interpolated from the Multi-Error-Removed Improved-Terrain DEM (MERIT; Yamazaki et al., 2017). The multi-scale pixel-image analysis method, known as Multi-resolution Segmentation (Baatz and Schäpe, 2000), is first used to classify the terrains based on geometric signatures (slope and local convexity) calculated from the 280-m DEM. Next, we apply the machine learning method of "k-means clustering" to prepare the polygon-based classification at the globe-scale using slope, local convexity and surface texture. We then group the divisions with similar properties by hierarchical clustering and other statistical analyses using geological and geomorphological data of the area where landslides and earthquakes are frequent (e.g. Japan and California). We find the 280-m DEM resolution is only partially sufficient for classifying plains. We nevertheless observe that the categories correspond to reported landslide and liquefaction features at the global scale, suggesting that our model is an appropriate platform to forecast ground failure. To predict seismic amplification, we estimate site conditions using the time-averaged shear-wave velocity in the upper 30-m (VS30) measurements compiled by Yong et al. (2016) and the terrain model developed by Yong (2016; Y16). 
We plan to test our method on finer resolution DEMs and report our findings to obtain a more globally consistent terrain model as there are known errors in DEM derivatives at higher-resolutions. We expect the improvement in DEM resolution (4 times greater detail) and the combination of regional and global coverage will yield a consistent dataset of polygons that have the potential to improve relations to the Y16 estimates significantly.
AlexNet Feature Extraction and Multi-Kernel Learning for Object-oriented Classification
NASA Astrophysics Data System (ADS)
Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.
2018-04-01
In view of the fact that deep convolutional neural networks have a strong ability for feature learning and feature expression, exploratory research was conducted on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution in the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined using multi-kernel learning and an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference value for earthquake disaster investigation and remote sensing disaster evaluation.
Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor.
Xu, Chang; Wang, Yingguan; Bao, Xinghe; Li, Fengrong
2018-05-24
This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is made through utilizing a single anisotropic magnetoresistive sensor, with vehicles classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs). Using time-domain and frequency-domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. However, the original vehicle magnetic dataset collected is imbalanced, which may lead to inaccurate classification results. With this in mind, we apply the Synthetic Minority Over-sampling Technique (SMOTE), which can further boost the performance of classifiers. Experimental results show that the k-nearest neighbor (KNN) classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.
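SMOTE's core interpolation step can be sketched in a few lines: each synthetic sample lies on the segment between a minority-class sample and one of its nearest minority neighbours. This is a didactic version with made-up feature vectors; production code would normally use the `SMOTE` class from the imbalanced-learn library.

```python
import numpy as np

def smote(X_min, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: each synthetic sample interpolates between a
    minority-class sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = int(rng.integers(len(X_min)))
        d = np.abs(X_min - X_min[i]).sum(axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # skip index 0, the sample itself
        j = int(rng.choice(nbrs))
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

# hypothetical minority-class (e.g. bus) feature vectors in the unit square
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote(X_min, n_new=8)
```

The oversampled minority class is then concatenated with the majority class before training the KNN classifier.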
Youn, Su Hyun; Sim, Taeyong; Choi, Ahnryul; Song, Jinsung; Shin, Ki Young; Lee, Il Kwon; Heo, Hyun Mu; Lee, Daeweon; Mun, Joung Hwan
2015-06-01
Ultrasonic surgical units (USUs) have the advantage of minimizing tissue damage during surgeries that require tissue dissection by reducing problems such as coagulation and unwanted carbonization, but the disadvantage of requiring manual adjustment of power output according to the target tissue. In order to overcome this limitation, it is necessary to determine the properties of in vivo tissues automatically. We propose a multi-classifier that can accurately classify tissues based on the unique impedance of each tissue. For this purpose, a multi-classifier was built based on single classifiers with high classification rates, and the classification accuracy of the proposed model was compared with that of single classifiers for various electrode types (Type-I: 6 mm invasive; Type-II: 3 mm invasive; Type-III: surface). The sensitivity and positive predictive value (PPV) of the multi-classifier were determined by cross-checks. According to the 10-fold cross validation results, the classification accuracy of the proposed model was significantly higher (p<0.05 or <0.01) than that of existing single classifiers for all electrode types. In particular, the classification accuracy of the proposed model was highest when the 3 mm invasive electrode (Type-II) was used (sensitivity=97.33-100.00%; PPV=96.71-100.00%). The results of this study are an important contribution to achieving automatic optimal output power adjustment of USUs according to the properties of individual tissues. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
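The best-fit quadratic surface step can be reproduced with ordinary least squares: fit z = a + bx + cy + dx² + exy + fy² to the slice and keep the residual as the BH-corrected image. This is a sketch of the quadratic-surface idea under the assumption of a full-frame fit; the paper's Matlab code additionally handles details such as masking of non-sample pixels.

```python
import numpy as np

def beam_hardening_correct(img):
    """Fit a quadratic surface to a reconstructed slice by linear least squares
    and subtract it; the residual is the BH-corrected image."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x, y, z = x.ravel(), y.ravel(), img.ravel()
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2]).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(img.shape)
    return img - surface

# synthetic slice: flat attenuation plus a smooth quadratic BH artefact
ny, nx = 32, 32
yy, xx = np.mgrid[0:ny, 0:nx]
slice_img = 50.0 + 0.001 * ((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2)
corrected = beam_hardening_correct(slice_img)
```

On this synthetic slice the artefact is exactly quadratic, so the residual vanishes; on real data the residual retains the phase contrast that the LS-SVM stage then segments.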
Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method taking into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons to accurately determine the tumor boundaries in surgical-time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five hyperspectral images of the surface of the brain affected by glioblastoma tumor in vivo from five different patients have been used. The final classification maps obtained have been analyzed and validated by specialists. 
These preliminary results are promising, obtaining an accurate delineation of the tumor area.
Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O’Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method taking into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons to accurately determine the tumor boundaries in surgical-time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five hyperspectral images of surface of the brain affected by glioblastoma tumor in vivo from five different patients have been used. The final classification maps obtained have been analyzed and validated by specialists. 
These preliminary results are promising, achieving an accurate delineation of the tumor area. PMID:29554126
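The supervised/unsupervised fusion step described in the abstract can be sketched as a majority vote in which each unsupervised cluster inherits the most frequent label the pixel-wise classifier assigned within it. This is an illustrative reconstruction, not the authors' code; the function name, array names and toy maps are all hypothetical.

```python
import numpy as np

def majority_vote_fusion(class_map, cluster_map):
    """Assign each unsupervised cluster the majority class among its pixels,
    then relabel the whole cluster with that class (a sketch of the
    supervised/unsupervised fusion step described above)."""
    fused = np.empty_like(class_map)
    for c in np.unique(cluster_map):
        mask = cluster_map == c
        labels, counts = np.unique(class_map[mask], return_counts=True)
        fused[mask] = labels[np.argmax(counts)]
    return fused

# Toy 4x4 example: two spatial clusters, noisy pixel-wise labels (0/1)
class_map   = np.array([[0, 0, 1, 1],
                        [0, 1, 1, 1],
                        [0, 0, 1, 0],
                        [0, 0, 1, 1]])
cluster_map = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
fused = majority_vote_fusion(class_map, cluster_map)
```

On the toy maps, the vote flips the two mislabeled pixels so that each cluster becomes uniform.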
Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I
2017-01-01
A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows a CES to balance accuracy with model complexity when evolving classifiers, enabling it to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary-class datasets. In this study, we developed a multi-class CES and compared it to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run time, classification accuracy, number of selected features, and stability of the selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, a unique advantage given that the accuracy of most classification methods suffers when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
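The Tanimoto-distance stability criterion mentioned above reduces to a Jaccard-style comparison of the feature sets selected in different runs; a minimal sketch (the gene names are invented):

```python
def tanimoto_distance(a, b):
    """Tanimoto (Jaccard) distance between two selected-feature sets:
    1 - |A ∩ B| / |A ∪ B|.  Used as a stability measure: the closer to 0,
    the more two runs of a feature selector agree."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

run1 = {"geneA", "geneB", "geneC"}
run2 = {"geneB", "geneC", "geneD"}
print(tanimoto_distance(run1, run2))  # 1 - 2/4 = 0.5
```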
NASA Astrophysics Data System (ADS)
Suiter, Ashley Elizabeth
Multi-spectral imagery provides a robust and low-cost dataset for assessing wetland extent and quality over broad regions and is frequently used for wetland inventories. However, in forested wetlands, hydrology is obscured by the tree canopy, making it difficult to detect with multi-spectral imagery alone. Because of this, classification of forested wetlands often includes greater errors than that of other wetland types. Elevation and terrain derivatives have been shown to be useful for modelling wetland hydrology, but few studies have addressed the use of LiDAR intensity data for detecting hydrology in forested wetlands. Due to the tendency of the LiDAR signal to be attenuated by water, this research proposed the fusion of LiDAR intensity data with LiDAR elevation, terrain data, and aerial imagery for the detection of forested wetland hydrology. We examined the utility of LiDAR intensity data and determined whether the fusion of LiDAR-derived data with multi-spectral imagery increased the accuracy of forested wetland classification compared with a classification performed with multi-spectral imagery alone. Four classifications were performed: Classification A -- All Imagery, Classification B -- All LiDAR, Classification C -- LiDAR without Intensity, and Classification D -- Fusion of All Data. These classifications were performed using random forest, and each resulted in a 3-foot resolution thematic raster of forested upland and forested wetland locations in Vermilion County, Illinois. The accuracies of these classifications were compared using the Kappa Coefficient of Agreement. Importance statistics produced within the random forest classifier were evaluated in order to understand the contribution of individual datasets. Classification D, which used the fusion of LiDAR and multi-spectral imagery as input variables, had moderate to strong agreement between reference data and classification results.
It was found that Classification B, performed using all the LiDAR data and its derivatives (intensity, elevation, slope, aspect, curvatures, and Topographic Wetness Index), was the most accurate classification, with a Kappa of 78.04%, indicating moderate to strong agreement. However, Classification C, performed with the LiDAR derivatives but without intensity data, had less agreement than would be expected by chance, indicating that intensity contributed significantly to the accuracy of Classification B.
Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.
Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd
2017-01-01
Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159
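A minimal sketch of the KNN stage on windowed accelerometry features. The feature values and class names below are invented; a real pipeline would first window the 140 Hz stream and extract summary features (for example, mean dynamic acceleration and dominant wingbeat frequency) per window.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify one feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = sorted((math.dist(t, x), y) for t, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy features: (mean dynamic acceleration, dominant wingbeat frequency, Hz)
train  = [(0.9, 4.0), (1.0, 4.2), (0.1, 0.0), (0.2, 0.1), (0.4, 0.3)]
labels = ["flap",     "flap",     "sit",      "sit",      "soar"]
print(knn_predict(train, labels, (0.95, 4.1)))  # -> flap
```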
NASA Astrophysics Data System (ADS)
Zhao, Lili; Yin, Jianping; Yuan, Lihuan; Liu, Qiang; Li, Kuan; Qiu, Minghui
2017-07-01
Automatic detection of abnormal cells from cervical smear images is in high demand for the annual diagnosis of women's cervical cancer. For this medical cell recognition problem, there are three different feature sections, namely cytology morphology, nuclear chromatin pathology and region intensity. The challenges of this problem lie in combining these feature sections and in classifying accurately and efficiently. Thus, we propose an efficient abnormal cervical cell detection system based on multi-instance extreme learning machine (MI-ELM) to deal with both questions in one unified framework. MI-ELM is one of the most promising supervised learning classifiers, able to deal with several feature sections and realistic classification problems analytically. Experimental results on the Herlev dataset demonstrate that the proposed method outperforms three traditional methods for two-class classification in terms of both higher accuracy and less time.
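At its core, an extreme learning machine fixes random hidden-layer weights and solves only the output weights in closed form, which is what makes it fast. The toy below is a plain single-instance ELM sketch; the paper's MI-ELM additionally aggregates instance scores per cell, which is omitted here, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=100):
    """ELM sketch: random input weights, tanh hidden layer, output weights
    solved by least squares (no iterative training)."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label is the sign of the first feature
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 0])
model = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(model, X)) == y)
```

The closed-form solve replaces backpropagation entirely, which is the design choice the MI-ELM framework inherits.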
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
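To make the relevance-versus-redundancy trade-off concrete, here is a greedy stand-in for the paper's objective. Note the assumptions: the paper solves a relaxed quadratic program with a low-rank approximation, whereas this sketch uses correlation-based scores and a greedy search.

```python
import numpy as np

def select_features(X, y, k):
    """Greedy max-relevance / min-redundancy sketch of the objective:
    pick the feature with the best relevance-minus-redundancy score at
    each step (a simplifying stand-in for the relaxed binary QP)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    n = len(y)
    r = np.abs(Xs.T @ ys) / n            # relevance of each feature to y
    Q = np.abs(Xs.T @ Xs) / n            # pairwise feature redundancy
    selected = [int(np.argmax(r))]
    while len(selected) < k:
        score = r - Q[:, selected].mean(axis=1)  # relevance minus redundancy
        score[selected] = -np.inf                # never re-pick a feature
        selected.append(int(np.argmax(score)))
    return selected

# Toy data: feature 1 duplicates feature 0; feature 2 is an independent signal
rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=(2, 300))
y = s1 + s2
X = np.column_stack([s1, s1, s2])
sel = select_features(X, y, k=2)  # redundancy keeps out one duplicate
```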
Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.
2013-01-01
Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees from a multitude of input data without running a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input datasets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of mapping wetlands compare across different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization from L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data
NASA Astrophysics Data System (ADS)
Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan
2014-10-01
The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in North-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and a Kappa of 0.93, a 6% improvement over the linear-polarization-only classification. However, the time of acquisition is crucial: the larger-biomass crops of canola and soybean were most accurately mapped, whereas the identification of oat and wheat was more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage on the BBCH scale and both the HV backscatter and entropy.
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data: in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, for example, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
PMID:24820966
Land use/cover classification in the Brazilian Amazon using satellite images.
Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira
2012-09-01
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, though they often require more time for parameter optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
PMID:24353353
The Effect of Sub-Aperture in DRIA Framework Applied on Multi-Aspect PolSAR Data
NASA Astrophysics Data System (ADS)
Xue, Feiteng; Yin, Qiang; Lin, Yun; Hong, Wen
2016-08-01
Multi-aspect SAR is a new remote sensing technology that acquires consecutive data over a large span of look angles as the platform moves. Multi-aspect observation brings higher resolution and SNR to the SAR image, and multi-aspect PolSAR data can increase the accuracy of target identification and classification because it contains the 3-D polarimetric scattering properties. DRIA (detecting-removing-incoherent-adding) is a multi-aspect PolSAR data processing framework. In this method, anisotropic and isotropic scattering are separated by a maximum-likelihood ratio test. The anisotropic scattering is removed to obtain a removal series, while the isotropic scattering is incoherently added to produce a high-resolution image. The removal series describes the anisotropic scattering properties and is used for feature extraction and classification. This article focuses on the effect of the number of sub-apertures on anisotropic scattering detection and removal. The more sub-apertures there are, the smaller the look-angle span of each. Artificial targets exhibit anisotropic scattering because of Bragg resonances. Increasing the number of sub-apertures gives a more accurate observation in azimuth, although the quality of each single image may degrade. The classification accuracy in agricultural fields is affected by the anisotropic scattering caused by Bragg resonances, and the size of the sub-aperture has a significant effect on how well these resonances are removed.
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation, while collaborative representation (CR) is an effective data coding method usually used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals. The multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; with a dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
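The CR coding step at the heart of the framework can be illustrated with a plain collaborative-representation classifier: code the sample over the whole training dictionary with l2-regularized least squares, then assign the class whose atoms reconstruct it best. The dictionary, class names and regularization weight below are invented; the paper's JCR additionally fuses codes across EEG channels.

```python
import numpy as np

def crc_classify(D, labels, x, lam=0.01):
    """Collaborative-representation classifier sketch: solve the ridge
    coding alpha = (D^T D + lam I)^-1 D^T x over all atoms jointly, then
    pick the class whose atoms give the smallest reconstruction residual."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        res = np.linalg.norm(x - D[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: two atoms (columns) per class, 4-dimensional signals
D = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
labels = ["wake", "wake", "deep", "deep"]
print(crc_classify(D, labels, np.array([1.0, 0.9, 0.1, 0.0])))  # -> wake
```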
Weighted Parzen Windows for Pattern Classification
1994-05-01
Nearest-Neighbor Rule. The k-Nearest-Neighbor (kNN) technique is nonparametric, assuming nothing about the distribution of the data. Stated succinctly... probabilities P(ωj | x) from samples." Raudys and Jain [20:255] advance this interpretation by pointing out that the kNN technique can be viewed as a "Parzen window classifier with a hyper-rectangular window function." As with the Parzen-window technique, the kNN classifier is more accurate as the
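The "hyper-rectangular window" reading of kNN can be made concrete with a Parzen-window classifier that counts, per class, the training points falling inside a hypercube of side h centered on the query. The data are toy values, and a fixed h is used rather than the kNN-style data-driven window.

```python
def parzen_classify(train, labels, x, h=1.0):
    """Parzen-window classifier with a hypercube kernel: a training point
    votes for its class iff every coordinate lies within h/2 of the query;
    the class with the most votes wins (None if the window is empty)."""
    counts = {}
    for t, y in zip(train, labels):
        if all(abs(ti - xi) <= h / 2 for ti, xi in zip(t, x)):
            counts[y] = counts.get(y, 0) + 1
    return max(counts, key=counts.get) if counts else None

train  = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)]
labels = ["a", "a", "b", "b"]
print(parzen_classify(train, labels, (0.1, 0.0)))  # -> a
```

Shrinking h toward the distance of the k-th nearest neighbor recovers the kNN behavior the report describes.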
Lin, Yu-Ching; Yu, Nan-Ying; Jiang, Ching-Fen; Chang, Shao-Hsia
2018-06-02
In this paper, we introduce a newly developed multi-scale wavelet model for the interpretation of surface electromyography (SEMG) signals and validate the model's capability to characterize changes in neuromuscular activation in cases with myofascial pain syndrome (MPS) via machine learning methods. The SEMG data collected from normal (N = 30; 27 women, 3 men) and MPS subjects (N = 26; 22 women, 4 men) were adopted for this retrospective analysis. SEMG signals were measured from the taut-band loci on both sides of the trapezius muscle on the upper back while each subject performed a cyclic bilateral backward shoulder-extension movement within 1 min. Classification accuracy of the SEMG model in differentiating MPS patients from normal subjects was 77% using template matching and 60% using K-means clustering. Classification consistency between the two machine learning methods was 87% in the normal group and 93% in the MPS group. The 2D feature graphs derived from the proposed multi-scale model revealed distinct patterns between normal subjects and MPS patients. The classification consistency of template matching and K-means clustering suggests the potential of using the proposed model to characterize interference pattern changes induced by MPS. Copyright © 2018. Published by Elsevier Ltd.
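The template-matching stage can be sketched as nearest-template assignment by Pearson correlation. This is a stand-in with invented one-dimensional templates; the paper matches 2D multi-scale wavelet feature graphs.

```python
import math

def correlate(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def template_match(templates, x):
    """Assign x to the class whose template it correlates with best."""
    return max(templates, key=lambda c: correlate(templates[c], x))

# Invented class templates (feature profiles), one per group
templates = {"normal": [1.0, 2.0, 3.0, 2.0, 1.0],
             "MPS":    [3.0, 1.0, 0.5, 1.0, 3.0]}
print(template_match(templates, [1.1, 2.2, 2.9, 1.8, 1.0]))  # -> normal
```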
Identification of Terrestrial Reflectance From Remote Sensing
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)
2000-01-01
Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.
Fan, Ming; Zheng, Bin; Li, Lihua
2015-10-01
Knowledge of the structural class of a given protein is important for understanding its folding patterns. Despite much effort, predicting a protein's structural class solely from its sequence remains a challenging problem, with feature extraction and classification as its two main components. In this research, we extended our earlier work regarding both aspects. For protein feature extraction, we proposed a scheme that calculates word frequency and word position from sequences of amino acids, reduced amino acids, and secondary structure. For accurate classification of protein structural class, we developed a novel Multi-Agent Ada-Boost (MA-Ada) method by integrating features of a Multi-Agent system into the Ada-Boost algorithm. Extensive experiments were conducted to test and compare the proposed method using four benchmark datasets of low homology. The results showed classification accuracies of 88.5%, 96.0%, 88.4%, and 85.5%, respectively, which are much better than those of existing methods. The source code and dataset are available on request.
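The word-frequency component of the feature scheme can be sketched as normalized k-mer counts over the amino-acid alphabet. This is an illustrative sketch, not the authors' implementation; word-position features and reduced alphabets are omitted.

```python
from collections import Counter
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def word_freq_features(seq, k=2, alphabet=AA):
    """Normalized counts of every length-k word the alphabet can form,
    in the lexicographic order produced by itertools.product."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts["".join(w)] / total for w in product(alphabet, repeat=k)]

feats = word_freq_features("ACACA")  # windows: AC, CA, AC, CA
```

For k = 2 this yields a 400-dimensional frequency vector per sequence, which a classifier such as MA-Ada can then consume.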
Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan
2011-01-01
Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image segmentation of multi-cellular regions in bright field images, demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm that separates multi-cellular regions from background in bright field images, based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post-processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as the wound healing and scatter assays. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time-lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameter method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images at the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays.
The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
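The cascade idea, where cheap early stages reject background patches before later stages refine the decision, can be sketched generically. The threshold rules below are illustrative stand-ins for the paper's SVM stages and basic image features:

```python
def classify_patch_cascade(patch, stages):
    # Run a patch through a cascade of classifiers: any stage that rejects
    # the patch as background short-circuits; otherwise the last verdict wins.
    verdict = "background"
    for stage in stages:
        verdict = stage(patch)
        if verdict == "background":
            return "background"
    return verdict

# Toy stages over a flat list of pixel intensities (thresholds are made up).
mean = lambda p: sum(p) / len(p)
stage1 = lambda p: "cellular" if mean(p) > 0.2 else "background"       # cheap filter
stage2 = lambda p: "cellular" if max(p) - min(p) > 0.1 else "background"  # texture check

label = classify_patch_cascade([0.5, 0.6, 0.8], [stage1, stage2])
```

Because most patches in a bright field image are background, the early rejection keeps the average per-patch cost low even when later stages are expensive.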
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Particular attention is paid to the multi-stage artificial neural network, which yields an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces under different weather conditions, with an average classification accuracy of 95%. These results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.
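The feature-extraction-then-classification pipeline can be illustrated with a small sketch. The statistical features and the nearest-centroid rule below are illustrative stand-ins; the paper itself uses PCA and a multi-stage neural network:

```python
import math

def signal_features(signal):
    # Simple statistical features of a backscattered signal
    # (illustrative substitutes for the paper's feature set).
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return [mean, math.sqrt(var), max(signal) - min(signal)]

def nearest_centroid(features, centroids):
    # Assign the class whose centroid is closest in feature space.
    return min(centroids, key=lambda c: math.dist(features, centroids[c]))

# Hypothetical class centroids for two surface types.
centroids = {"asphalt": [0.2, 0.05, 0.1], "gravel": [0.6, 0.3, 0.8]}
label = nearest_centroid(signal_features([0.15, 0.2, 0.25]), centroids)
```

In the real system, each swathe of illuminated surface would contribute one such feature vector, and the classifier stages would be trained on labelled recordings.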
Confirmation of 5 SN in the Kepler/K2 C16 Field with Gemini
NASA Astrophysics Data System (ADS)
Margheim, S.; Tucker, B. E.; Garnavich, P. M.; Rest, A.; Narayan, G.; Smith, K. W.; Smartt, S.; Kasen, D.; Shaya, E.; Mushotzky, R.; Olling, R.; Villar, A.; Forster, F.; Zenteno, A.; James, D.; Smith, R. Chris
2018-01-01
We report new spectroscopic classifications by KEGS of supernovae discovered by Pan-STARRS1 during a targeted search of the Kepler/K2 Campaign 16 field, using the Gemini Multi-Object Spectrograph (GMOS) at both the Gemini North Observatory on Mauna Kea and the Gemini South Observatory on Cerro Pachon.
Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef
2010-01-01
In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to find the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques, we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register were manually labeled by cardiologists. The selection of the right master key-beats is the key factor in achieving a highly accurate classification, and the proposed systematic approach produced results that were consistent with the manual labels with 99.5% average accuracy, demonstrating the efficiency of the system.
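Once beats are clustered, picking a representative per cluster can be done with a medoid rule: the beat with the smallest total distance to all others in its cluster. This is a simplified take on the master key-beat idea (the paper's actual selection within MD PSO clustering may differ):

```python
import math

def master_key_beats(clusters):
    # One representative beat per cluster: the beat minimizing the total
    # distance to the other beats in its cluster (a medoid).
    reps = []
    for beats in clusters:
        reps.append(min(beats, key=lambda b: sum(math.dist(b, o) for o in beats)))
    return reps

# Toy 2-D "beats" in two clusters.
clusters = [[(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)],
            [(5.0, 5.0), (5.0, 5.5)]]
keys = master_key_beats(clusters)
```

A cardiologist then inspects only one key-beat per cluster instead of every beat in the register.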
NASA Astrophysics Data System (ADS)
Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya
2018-04-01
In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve the spatial details in the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, considering their complementary characteristics. To better characterize complex scenes in remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantizes both the landscape composition and spatial configuration using the initial classification results. In addition, a novel tri-training strategy is proposed to solve the over-smoothing effect of relearning by means of automatic selection of training samples with low classification certainties, which tend to be distributed in or near the edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic output of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework, in terms of both edge and non-edge accuracy.
Razek, Ahmed Abdel Khalek Abdel; Shamaa, Sameh; Lattif, Mahmoud Abdel; Yousef, Hanan Hamid
2017-01-01
To assess inter-observer agreement of whole-body computed tomography (WBCT) in staging and response assessment in lymphoma according to the Lugano classification. A retrospective analysis was conducted of 115 consecutive patients with lymphoma (45 females, 70 males; mean age 46 years). Patients underwent WBCT with a 64 multi-detector CT device for staging and for response assessment after a complete course of chemotherapy. Image analysis was performed by 2 reviewers according to the Lugano classification for staging and response assessment. The overall inter-observer agreement of WBCT in staging of lymphoma was excellent (k = 0.90, percent agreement = 94.9%). There was excellent inter-observer agreement for stage I (k = 0.93, percent agreement = 96.4%), stage II (k = 0.90, percent agreement = 94.8%), stage III (k = 0.89, percent agreement = 94.6%) and stage IV (k = 0.88, percent agreement = 94%). The overall inter-observer agreement in response assessment after a complete course of treatment was excellent (k = 0.91, percent agreement = 95.8%). There was excellent inter-observer agreement in progressive disease (k = 0.94, percent agreement = 97.1%), stable disease (k = 0.90, percent agreement = 95%), partial response (k = 0.96, percent agreement = 98.1%) and complete response (k = 0.87, percent agreement = 93.3%). We concluded that WBCT is a reliable and reproducible imaging modality for staging and treatment assessment in lymphoma according to the Lugano classification.
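The k values reported per stage are Cohen's kappa statistics, which correct raw percent agreement for chance. A minimal sketch of the computation for two raters (with hypothetical stage labels, not the study's data):

```python
def cohens_kappa(rater1, rater2):
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    p_exp = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: two reviewers stage four patients, disagreeing on one.
k = cohens_kappa(["I", "II", "II", "IV"], ["I", "II", "III", "IV"])
```

Here the observed agreement is 75% but kappa is only about 0.67, illustrating why both k and percent agreement are worth reporting.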
Material identification based on electrostatic sensing technology
NASA Astrophysics Data System (ADS)
Liu, Kai; Chen, Xi; Li, Jingnan
2018-04-01
When a robot travels on the surface of different media, uncertainty about the medium can seriously affect its autonomous action. In this paper, the distribution characteristics of multiple electrostatic charges on material surfaces are detected in order to improve the accuracy of existing electrostatic-signal material identification methods, which helps the robot optimize its control algorithm. Building on a previously proposed electrostatic-signal material identification method, a multi-channel detection circuit is used to obtain the electrostatic charge distribution at different positions on the material surface, weights are introduced into the eigenvalue matrix, and the weight distribution is optimized by an evolutionary algorithm, so that the eigenvalue matrix more accurately reflects the surface charge distribution characteristics of the material. The matrix is used as the input of a k-Nearest Neighbor (kNN) classification algorithm to classify the dielectric materials. The experimental results show that the proposed method significantly improves the recognition rate over existing electrostatic-signal material recognition methods.
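The weighted-feature kNN idea can be sketched as follows. The per-feature weights here are fixed by hand for illustration; in the paper they would come from the evolutionary optimization, and the feature vectors from the multi-channel charge measurements:

```python
import math
from collections import Counter

def weighted_knn(query, train, weights, k=3):
    # kNN where each feature contributes to the distance in proportion
    # to its weight (a toy version of the weighted eigenvalue matrix).
    def wdist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    nearest = sorted(train, key=lambda item: wdist(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 2-feature training set for two dielectric materials.
train = [([0.1, 0.9], "wood"), ([0.2, 0.8], "wood"), ([0.9, 0.1], "plastic")]
label = weighted_knn([0.15, 0.85], train, weights=[1.0, 1.0], k=3)
```

Raising a feature's weight makes disagreements on that feature count more toward the distance, which is how the optimized weights sharpen the classification.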
Goshvarpour, Ateke; Goshvarpour, Atefeh
2018-04-30
Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of data yields a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted: correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbor (kNN), the performance of each index in classifying meditators' and non-meditators' HRV signals was appraised. Then, three fusion rules, including division, product, and weighted-sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm that defines the weight of each feature based on statistical p-values. The performance of HRV classification using combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability of the division and weighted-sum rules to improve classifier accuracies.
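One plausible reading of "weights defined from statistical p-values" is to weight each feature inversely to its p-value, so more significant features dominate the weighted sum. The exact rule is not given in the abstract, so this sketch is an assumption:

```python
def pvalue_weights(pvalues):
    # Weight each feature inversely to its p-value, normalized to sum to 1
    # (an assumed rule; the paper's exact formula is not stated here).
    inv = [1.0 / p for p in pvalues]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_sum_fusion(features, weights):
    # Weighted-sum fusion of per-feature similarity scores.
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical p-values for correntropy and Cauchy-Schwarz divergence.
w = pvalue_weights([0.01, 0.04])
score = weighted_sum_fusion([0.8, 0.3], w)
```

The fused score can then be thresholded or fed to the PNN/kNN classifier in place of the individual similarity measures.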
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test set of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection of cardiovascular disease and risk analysis.
Multi-class SVM model for fMRI-based classification and grading of liver fibrosis
NASA Astrophysics Data System (ADS)
Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.
2010-03-01
We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a binary-based one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded a 98.2% separation accuracy between healthy and low-grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than those of existing image-based methods, which can only discriminate between healthy and high-grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
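The one-against-all decision rule can be sketched generically: each binary classifier scores "its class vs the rest," and the class with the highest score wins. The toy linear scorers below stand in for the paper's trained SVMs:

```python
def one_vs_all_predict(x, binary_scorers):
    # One-against-all multi-class decision: the class whose binary
    # "this class vs rest" scorer is most confident wins.
    return max(binary_scorers, key=lambda cls: binary_scorers[cls](x))

# Illustrative linear scorers for three hypothetical fibrosis grades,
# operating on a 1-D feature x[0] (not the paper's fMRI features).
scorers = {
    "healthy": lambda x: 1.0 - x[0],
    "low":     lambda x: 1.0 - abs(x[0] - 0.5),
    "high":    lambda x: x[0],
}
grade = one_vs_all_predict([0.9], scorers)
```

With SVMs, the score would typically be the signed distance to each binary decision hyperplane rather than these hand-written functions.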
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent
2018-01-01
Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim
2017-02-01
In home-based context-aware monitoring, real-time data on a patient's multiple vital signs (e.g. heart rate, blood pressure) are continuously generated from wearable sensors. The changes in such vital parameters are highly correlated; they are also patient-centric and can be either recurrent or fluctuating. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital-sign values and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a new patient-specific vital-sign prediction system that exploits these correlations. The developed technique can guide healthcare professionals in making accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms, along with three fundamental machine learning algorithms, are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures, such as Hamming score, F1-micro average, and accuracy, are used to interpret the prediction performance of patient-specific situation classification. We achieved Hamming score values of 90-95% across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with single-label classifiers and with classifiers that do not consider the correlations among the vitals.
The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are an effective way to estimate future values of those vitals early. In context-aware remote monitoring, this process can greatly help doctors make quick diagnostic decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
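The simplest multi-label scheme, binary relevance, trains one independent binary classifier per abnormality label; it is a common baseline among multi-label methods, though the abstract does not say which of the eight algorithms the study found best. The threshold rules and labels below are hypothetical:

```python
def binary_relevance_predict(x, label_classifiers):
    # Binary-relevance multi-label prediction: each label gets its own
    # independent binary classifier; all firing labels are returned.
    return {label for label, clf in label_classifiers.items() if clf(x)}

# Toy rules over a (heart_rate, systolic_bp) tuple; thresholds are
# illustrative only, not clinical guidance.
classifiers = {
    "tachycardia":  lambda v: v[0] > 100,
    "hypertension": lambda v: v[1] > 140,
}
labels = binary_relevance_predict((120, 150), classifiers)
```

Binary relevance ignores correlations between labels, which is exactly the limitation that correlation-aware multi-label methods, like those the study evaluates, aim to overcome.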
Zeng, Ling-Li; Wang, Huaning; Hu, Panpan; Yang, Bo; Pu, Weidan; Shen, Hui; Chen, Xingui; Liu, Zhening; Yin, Hong; Tan, Qingrong; Wang, Kai; Hu, Dewen
2018-04-01
A lack of a sufficiently large sample at single sites causes poor generalizability in automatic diagnosis classification of heterogeneous psychiatric disorders such as schizophrenia based on brain imaging scans. Advanced deep learning methods may be capable of learning subtle hidden patterns from high dimensional imaging data, overcome potential site-related variation, and achieve reproducible cross-site classification. However, deep learning-based cross-site transfer classification, despite less imaging site-specificity and more generalizability of diagnostic models, has not been investigated in schizophrenia. A large multi-site functional MRI sample (n = 734, including 357 schizophrenic patients from seven imaging resources) was collected, and a deep discriminant autoencoder network, aimed at learning imaging site-shared functional connectivity features, was developed to discriminate schizophrenic individuals from healthy controls. Accuracies of approximately 85.0% and 81.0% were obtained in multi-site pooling classification and leave-site-out transfer classification, respectively. The learned functional connectivity features revealed dysregulation of the cortical-striatal-cerebellar circuit in schizophrenia, and the most discriminating functional connections were primarily located within and across the default, salience, and control networks. The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the "disconnectivity" model underlying the pathophysiology of schizophrenia. The proposed discriminant deep learning method may be capable of learning reliable connectome patterns and help in understanding the pathophysiology and achieving accurate prediction of schizophrenia across multiple independent imaging sites. Copyright © 2018 German Center for Neurodegenerative Diseases (DZNE). Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamiran, N.; Sarker, M. L. R.
2014-02-01
Land use/land cover transformation in Malaysia is enormous due to oil palm plantations, which have provided huge economic benefits but have also created serious concerns about carbon emissions and biodiversity. Accurate information on oil palm plantations and plantation age is important for sustainable production, estimation of carbon storage capacity, biodiversity, and climate modelling. However, this information cannot be extracted easily because the spectral signatures of forest and of different age groups of oil palm plantations are similar. Therefore, a novel "multi-scale and multi-texture" approach was used for mapping vegetation and different age groups of oil palm plantation using a high-resolution panchromatic image (WorldView-1), given that panchromatic imagery has the potential for more detailed and accurate mapping with an effective image-processing technique. Seven texture algorithms of the second-order Grey Level Co-occurrence Matrix (GLCM) with different scales (from 3×3 to 39×39) were used for texture generation. All texture parameters were classified step by step using a robust classifier, the Artificial Neural Network (ANN). Results indicate that a single spectral band was unable to provide good results (overall accuracy = 34.92%), while higher overall classification accuracies (73.48%, 84.76% and 93.18%) were obtained when textural information from the multi-scale and multi-texture approach was used in the classification algorithm.
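A GLCM counts how often pairs of grey levels co-occur at a fixed pixel offset; second-order texture measures such as contrast are then derived from it. A minimal sketch for one offset (the study computes such matrices over many window scales and seven texture measures):

```python
def glcm(image, dx=1, dy=0, levels=4):
    # Grey Level Co-occurrence Matrix: m[i][j] counts pixel pairs where a
    # pixel of level i has a neighbor of level j at offset (dx, dy).
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    # One classic GLCM texture measure: weighted sum of squared level gaps.
    return sum((i - j) ** 2 * m[i][j] for i in range(len(m)) for j in range(len(m)))

# Toy 3x3 image quantized to 4 grey levels.
img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 3]]
c = contrast(glcm(img))
```

Smooth regions (e.g. young, uniform plantations) give low contrast, while heterogeneous forest canopy gives high contrast, which is why multi-scale GLCM texture separates classes a single panchromatic band cannot.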
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method automatically learns generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images.
Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI) and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables
NASA Astrophysics Data System (ADS)
Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan
2007-06-01
Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing techniques have proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the mapping accuracy of grass communities in wetlands is still unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong relationship with the distribution of grass communities; NDVI stacked from images of different months was used to generate the Carex community map; the image from October was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree, and an artificial neural network separately. The results show that environmental variables can improve mapping accuracy, and that classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p = 0.001). In addition, maximum likelihood (a = 92.71%, k = 0.90) and the artificial neural network (a = 94.79%, k = 0.93) perform significantly better than the decision tree (a = 86.46%, k = 0.83).
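NDVI is the standard vegetation index computed per pixel from the near-infrared and red bands, NDVI = (NIR − Red) / (NIR + Red); stacking it across months captures the phenology that separates the Carex community. A minimal per-image sketch with made-up band values:

```python
def ndvi(nir, red):
    # Per-pixel NDVI = (NIR - Red) / (NIR + Red); guard against zero sums.
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

# Toy 1x2 reflectance "images": a vegetated pixel and a bare one.
v = ndvi([[0.6, 0.3]], [[0.2, 0.3]])
```

In the layered classification described above, one such NDVI layer per month would be stacked into a multi-temporal feature cube before classification.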
Analysis of framelets for breast cancer diagnosis.
Thivya, K S; Sakthivel, P; Venkata Sai, P M
2016-01-01
Breast cancer is the second most threatening tumor among women. An effective way of reducing breast cancer mortality is early detection, which improves the diagnostic process. Digital mammography plays a significant role in screening for breast carcinoma at an early stage. Even so, it is very difficult for radiologists to identify abnormalities accurately in routine screening; Computer Aided Diagnosis (CAD) systems can improve the precision of breast cancer screening by predicting the type of abnormality. The two most important indicators of breast malignancy are microcalcifications and masses. In this study, the framelet transform, a multiresolution analysis, is investigated for the classification of these two indicators. Statistical and co-occurrence features are extracted from framelet-decomposed mammograms at different resolution levels, and a support vector machine is employed for classification with k-fold cross-validation. The system achieves 94.82% and 100% accuracy in normal/abnormal classification (stage I) and benign/malignant classification (stage II) of the mass classification system, and 98.57% and 100% for the microcalcification system, when using the MIAS database.
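k-fold cross-validation, used here to evaluate the SVM, partitions the samples into k folds and trains k times, each time holding one fold out for testing. A minimal index-level sketch:

```python
def k_fold_indices(n, k):
    # Split n sample indices into k contiguous folds of near-equal size.
    folds, start = [], 0
    base, extra = divmod(n, k)
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def train_test_splits(n, k):
    # Yield (train, test) index lists: each fold is the test set once.
    folds = k_fold_indices(n, k)
    for i, test in enumerate(folds):
        train = [j for f_i, f in enumerate(folds) if f_i != i for j in f]
        yield train, test

splits = list(train_test_splits(10, 5))
```

In practice one would shuffle (or stratify by class) before splitting so that abnormal cases are spread across folds; that step is omitted here for brevity.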
exprso: an R-package for the rapid implementation of machine learning algorithms.
Quinn, Thomas; Tylee, Daniel; Glatt, Stephen
2016-01-01
Machine learning plays a major role in many scientific investigations. However, non-expert programmers may struggle to implement the elaborate pipelines necessary to build highly accurate and generalizable models. We introduce exprso, a new R package that is an intuitive machine learning suite designed specifically for non-expert programmers. Built initially for the classification of high-dimensional data, exprso uses an object-oriented framework to encapsulate a number of common analytical methods into a series of interchangeable modules. These include modules for feature selection, classification, high-throughput parameter grid-searching, elaborate cross-validation schemes (e.g., Monte Carlo and nested cross-validation), ensemble classification, and prediction. In addition, exprso supports multi-class classification (through the 1-vs-all generalization of binary classifiers) and the prediction of continuous outcomes.
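The 1-vs-all generalization mentioned above is language-agnostic; exprso itself is an R package, so the following Python sketch only illustrates the scheme, with an invented centroid-based binary learner rather than exprso's API:

```python
def train_one_vs_all(X, y, train_binary):
    """Train one binary scorer per class; predict the class with the highest score."""
    classes = sorted(set(y))
    scorers = {c: train_binary(X, [yi == c for yi in y]) for c in classes}
    return lambda x: max(classes, key=lambda c: scorers[c](x))

def train_centroid_scorer(X, is_pos):
    """Illustrative binary learner: score = negative squared distance
    to the centroid of the positive class."""
    pos = [x for x, p in zip(X, is_pos) if p]
    centroid = tuple(sum(col) / len(col) for col in zip(*pos))
    return lambda x: -sum((a - b) ** 2 for a, b in zip(x, centroid))
```

Any binary classifier that emits a confidence score can be dropped into `train_binary`, which is why the scheme generalizes so cleanly to multi-class problems.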
Computational intelligence techniques for biological data mining: An overview
NASA Astrophysics Data System (ADS)
Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari
2014-10-01
Computational techniques have been successfully utilized for highly accurate analysis and modeling of the multifaceted, raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective in overcoming the limitations of traditional in-vitro experiments on constantly increasing sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, and analysis of microarray gene expression data. To solve these problems, various classification and clustering techniques using machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-means, K-NN, SVM, rough set classifiers, decision trees, and HMM-based algorithms. Major difficulties in applying these algorithms include the limitations of existing feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running-time overheads of the learning algorithms. This research is potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.
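Of the classifiers listed in the overview, K-NN is the simplest to illustrate. A minimal sketch (generic, not tied to any surveyed method): each query is assigned the majority label among its k nearest training samples.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

The quality of the feature encoding dominates the result, which is exactly the difficulty the overview highlights: with poor features, even a well-tuned k is of little help.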
NASA Astrophysics Data System (ADS)
Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng
2018-02-01
Because islanding detection is easily confounded by grid disturbances, an island detection device may misjudge, causing a photovoltaic system to be taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal-processing step after wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsically different features of islanding and grid disturbance can be extracted. With these features, deep learning is used to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that photovoltaic systems mistakenly withdrawing from power grids can be avoided.
Kalegowda, Yogesh; Harmer, Sarah L
2012-03-20
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra of mineral samples are complex, comprising large mass ranges and many peaks. Consequently, characterization and classification analysis of these systems is challenging. In this study, different chemometric and statistical data evaluation methods, based on monolayer-sensitive TOF-SIMS data, were tested for the characterization and classification of copper-iron sulfide minerals (chalcopyrite, chalcocite, bornite, and pyrite) at different flotation pulp conditions (feed, conditioned feed, and Eh modified). The complex mass spectral data sets were analyzed using the following chemometric and statistical techniques: principal component analysis (PCA); principal component-discriminant functional analysis (PC-DFA); soft independent modeling of class analogy (SIMCA); and k-Nearest Neighbor (k-NN) classification. PCA was found to be an important first step in multivariate analysis, providing insight into both the relative grouping of samples and the elemental/molecular basis for those groupings. For samples exposed to oxidative conditions (Eh ~430 mV), each technique (PCA, PC-DFA, SIMCA, and k-NN) produced excellent classification. For samples at reductive conditions (Eh ~ -200 mV SHE), k-NN and SIMCA produced the most accurate classification. Phase identification of particles that contain the same elements but a different crystal structure in a mixed multi-metal mineral system has been achieved.
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We propose a novel approach to the automatic classification of Diabetic Retinopathy (DR) images and the retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV), and normal; it then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, mapping every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J
2017-06-01
For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes that are at least 15 decibels above background levels in the horizontal-looking sonar data are classified spectrally, by their resonance features, as due to swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.
Morabito, Rosa; Colonna, Michele R; Mormina, Enricomaria; Stagno d'Alcontres, Ferdinando; Salpietro, Vincenzo; Blandino, Alfredo; Longo, Marcello; Granata, Francesca
2014-12-01
Craniofacial duplication is a very rare malformation. The phenotype comprises a wide spectrum, ranging from partial duplication of a few facial structures to complete dicephalus. We report the case of a newborn with an accessory oral cavity associated with duplication of the tongue and the mandible, diagnosed by multi-row detector computed tomography a few days after birth. Our case of partial craniofacial duplication can be considered Type II of the Gorlin classification, or an intermediate form between Type I and Type II of the Sun classification. Our experience demonstrates that a CT scan, using appropriate reconstruction algorithms, permits a detailed evaluation of the different structures in an anatomical region. Multi-row CT is also the most accurate diagnostic procedure for the pre-surgical evaluation of craniofacial malformations. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli
2015-10-01
Changes in vegetation cover, building construction, road networks, and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest, and by utilizing multi-sensor data the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results, making predictions in a way that addresses the uncertainty of the real world. We attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water, and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from the RGB images. For classification, we use the Laplace approximation for GP binary classification on the new combined feature space. Multiclass classification is implemented with a one-vs-all binary classification strategy. Results for support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined rather than used separately. We also found that the GP approach handles the uncertainty in the classification result without compromising accuracy compared to SVM, which is considered the state-of-the-art classification method.
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
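Centrifuge's FM index is built on the Burrows-Wheeler transform (BWT). A naive sketch of the transform itself (real tools compute it via suffix arrays at genome scale; this quadratic toy version is for illustration only):

```python
def bwt(text, sentinel="$"):
    """Naive Burrows-Wheeler transform: last column of the sorted rotations.
    The sentinel must sort before every character of the text."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)
```

Identical characters tend to cluster in the output (e.g., `bwt("banana")` groups the a's and n's), which is what lets the FM index store genomes compactly while still supporting fast substring search.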
Deep learning architectures for multi-label classification of intelligent health risk prediction.
Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang
2017-12-28
Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients can have symptoms of multiple diseases at the same time, and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, combinations of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem-transformation and algorithm-adaptation type multi-label methods and compare the two to see which is preferable. Deep Learning architectures have the potential to infer more information about the patterns in physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features in physical examination data, as well as to learn the contribution of each feature to a patient's risk for chronic diseases.
However, accurate prediction of chronic disease risks remains a challenging problem that warrants further studies.
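The problem-transformation route mentioned above can be illustrated with binary relevance, which trains one independent binary model per label. This is a generic sketch, not the paper's DNN: the two-feature data and the centroid-based binary learner below are invented for illustration.

```python
def train_binary_relevance(X, Y, train_binary):
    """Problem transformation: one independent binary model per label.
    Y is a list of label sets; the prediction is the set of labels that fire."""
    labels = sorted(set().union(*Y))
    models = {l: train_binary(X, [l in y for y in Y]) for l in labels}
    return lambda x: {l for l, m in models.items() if m(x)}

def train_centroid_binary(X, is_pos):
    """Illustrative binary learner: fire if x is nearer to the positive
    centroid than to the negative one."""
    def centroid(rows):
        return tuple(sum(c) / len(c) for c in zip(*rows))
    pos = centroid([x for x, p in zip(X, is_pos) if p])
    neg = centroid([x for x, p in zip(X, is_pos) if not p])
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return lambda x: dist2(x, pos) < dist2(x, neg)
```

Because each label gets its own model, a sample can legitimately receive two diagnoses at once, or none, which binary relevance handles naturally and single-label classifiers cannot.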
Diaz, Naryttza N; Krause, Lutz; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W
2009-01-01
Background Metagenomics, or the sequencing and analysis of collective genomes (metagenomes) of microorganisms isolated from an environment, promises direct access to the "unculturable majority". This emerging field offers the potential to lay a solid basis for our understanding of the entire living world. However, taxonomic classification, an essential task in the analysis of metagenomic data sets, is still far from solved. We present a novel strategy to predict the taxonomic origin of environmental genomic fragments. The proposed classifier combines the idea of the k-nearest neighbor with strategies from kernel-based learning. Results Our novel strategy was extensively evaluated using leave-one-out cross validation on fragments of variable length (800 bp - 50 Kbp) from 373 completely sequenced genomes. TACOA is able to classify genomic fragments of length 800 bp and 1 Kbp with high accuracy down to the rank of class. For longer fragments (≥3 Kbp), accurate predictions are made at even deeper taxonomic ranks (order and genus). Remarkably, TACOA also produces reliable results when the taxonomic origin of a fragment is not represented in the reference set, classifying such fragments into their known broader taxonomic class or simply as "unknown". We compared the classification accuracy of TACOA with that of the latest intrinsic classifier, PhyloPythia, using 63 recently published complete genomes. For fragments of length 800 bp and 1 Kbp, the overall accuracy of TACOA is higher than that obtained by PhyloPythia at all taxonomic ranks. For all fragment lengths, both methods achieved comparably high specificity up to the rank of class, and low false negative rates were also obtained. Conclusion An accurate multi-class taxonomic classifier was developed for environmental genomic fragments. TACOA can predict with high reliability the taxonomic origin of genomic fragments as short as 800 bp.
The proposed method is transparent, fast, accurate and the reference set can be easily updated as newly sequenced genomes become available. Moreover, the method demonstrated to be competitive when compared to the most current classifier PhyloPythia and has the advantage that it can be locally installed and the reference set can be kept up-to-date. PMID:19210774
Rapid Multi-Tracer PET Tumor Imaging With F-FDG and Secondary Shorter-Lived Tracers.
Black, Noel F; McJames, Scott; Kadrmas, Dan J
2009-10-01
Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer (62)Cu-PTSM (blood flow) + (62)Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging (18)F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K(1), K(net)) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K(1) and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques.
VizieR Online Data Catalog: LAMOST-Kepler MKCLASS spectral classification (Gray+, 2016)
NASA Astrophysics Data System (ADS)
Gray, R. O.; Corbally, C. J.; De Cat, P.; Fu, J. N.; Ren, A. B.; Shi, J. R.; Luo, A. L.; Zhang, H. T.; Wu, Y.; Cao, Z.; Li, G.; Zhang, Y.; Hou, Y.; Wang, Y.
2016-07-01
The data for the LAMOST-Kepler project are supplied by the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST, also known as the Guo Shou Jing Telescope). This unique astronomical instrument is located at the Xinglong observatory in China, and combines a large aperture (4 m) telescope with a 5° circular field of view (Wang et al. 1996ApOpt..35.5155W). Our role in this project is to supply accurate two-dimensional spectral types for the observed targets. The large number of spectra obtained for this project (101086) makes traditional visual classification techniques impractical, so we have utilized the MKCLASS code to perform these classifications. The MKCLASS code (Gray & Corbally 2014AJ....147...80G, v1.07 http://www.appstate.edu/~grayro/mkclass/), an expert system designed to classify blue-violet spectra on the MK Classification system, was employed to produce the spectral classifications reported in this paper. MKCLASS was designed to reproduce the steps skilled human classifiers employ in the classification process. (2 data files).
Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.
Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang
2015-01-01
Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers (K-means, MSK-means, and support vector machine (SVM)) are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means and 0.7% higher accuracy than the SVM.
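Delay permutation entropy, the feature used above, measures how evenly the ordinal patterns of samples spaced tau apart are distributed: a regular signal concentrates on few patterns (low entropy), an irregular one spreads over many. A minimal sketch of a common formulation (the chapter's exact definition may differ in normalization):

```python
import math
from collections import Counter

def delay_permutation_entropy(x, m=3, tau=1):
    """Shannon entropy (bits) of the ordinal patterns formed by m samples
    spaced tau apart along the signal x."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda j: x[i + j * tau]))  # rank order of the window
        for i in range(len(x) - (m - 1) * tau)
    )
    total = sum(patterns.values())
    return -sum(c / total * math.log2(c / total) for c in patterns.values())
```

A monotone ramp yields a single pattern and entropy 0, while noisier EEG segments approach the maximum of log2(m!) bits, which is what makes this feature useful for separating seizure and non-seizure epochs.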
NASA Astrophysics Data System (ADS)
Farda, N. M.
2017-12-01
Coastal wetlands provide ecosystem services essential to people and the environment. Changes in coastal wetlands, especially in land use, are important to monitor by utilizing multi-temporal imagery. Google Earth Engine (GEE) provides many machine learning algorithms (10 algorithms) that are very useful for extracting land use from imagery. The research objective is to explore machine learning in Google Earth Engine and its accuracy for multi-temporal land use mapping of a coastal wetland area. Landsat 3 MSS (1978), Landsat 5 TM (1991), Landsat 7 ETM+ (2001), and Landsat 8 OLI (2014) images located in the Segara Anakan lagoon were selected to represent multi-temporal images. The inputs for machine learning are the visible and near-infrared bands, PCA bands, inverse PCA bands, bare soil index, vegetation index, wetness index, elevation from ASTER GDEM, and GLCM (Haralick) texture, together with polygon samples at 140 locations. Ten machine learning algorithms were applied to extract coastal wetland land use from the Landsat imagery: Fast Naive Bayes, CART (Classification and Regression Tree), Random Forests, GMO Max Entropy, Perceptron (Multi Class Perceptron), Winnow, Voting SVM, Margin SVM, Pegasos (Primal Estimated sub-GrAdient SOlver for SVM), and IKPamir (Intersection Kernel Passive Aggressive Method for Information Retrieval, SVM). Machine learning in Google Earth Engine is very helpful for multi-temporal land use mapping; the highest accuracy for land use mapping of the coastal wetland was achieved by CART, with 96.98% overall accuracy using K-fold cross validation (K = 10). GEE is particularly useful for multi-temporal land use mapping with its ready-to-use imagery and classification algorithms, and is also very challenging for other applications.
Cirrhosis Diagnosis and Liver Fibrosis Staging: Transient Elastometry Versus Cirrhosis Blood Test.
Calès, Paul; Boursier, Jérôme; Oberti, Frédéric; Bardou, Derek; Zarski, Jean-Pierre; de Lédinghen, Victor
2015-07-01
Elastometry is more accurate than blood tests for cirrhosis diagnosis. However, blood tests were developed for significant fibrosis, with the exception of CirrhoMeter developed for cirrhosis. We compared the performance of Fibroscan and CirrhoMeter, and classic binary cirrhosis diagnosis versus new fibrosis staging for cirrhosis diagnosis. The diagnostic population included 679 patients with hepatitis C and liver biopsy (Metavir staging and morphometry), Fibroscan, and CirrhoMeter. The prognostic population included 1110 patients with chronic liver disease and both tests. Binary diagnosis: AUROCs for cirrhosis were: Fibroscan: 0.905; CirrhoMeter: 0.857; and P=0.041. Accuracy (Youden cutoff) was: Fibroscan: 85.4%; CirrhoMeter: 79.2%; and P<0.001. Fibrosis classification provided 6 classes (F0/1, F1/2, F2±1, F3±1, F3/4, and F4). Accuracy was: Fibroscan: 88.2%; CirrhoMeter: 88.8%; and P=0.77. A simplified fibrosis classification comprised 3 categories: discrete (F1±1), moderate (F2±1), and severe (F3/4) fibrosis. Using this simplified classification, CirrhoMeter predicted survival better than Fibroscan (respectively, χ=37.9 and 19.7 by log-rank test), but both predicted it well (P<0.001 by log-rank test). Comparison: binary diagnosis versus fibrosis classification, respectively, overall accuracy: CirrhoMeter: 79.2% versus 88.8% (P<0.001); Fibroscan: 85.4% versus 88.2% (P=0.127); positive predictive value for cirrhosis by Fibroscan: Youden cutoff (11.1 kPa): 49.1% versus cutoffs of F3/4 (17.6 kPa): 67.6% and F4 classes (25.7 kPa): 82.4%. Fibroscan's usual binary cutoffs for cirrhosis diagnosis are not sufficiently accurate. Fibrosis classification should be preferred over binary diagnosis. A cirrhosis-specific blood test markedly attenuates the accuracy deficit for cirrhosis diagnosis of usual blood tests versus transient elastometry, and may offer better prognostication.
Aided diagnosis methods of breast cancer based on machine learning
NASA Astrophysics Data System (ADS)
Zhao, Yue; Wang, Nian; Cui, Xiaoyu
2017-08-01
In the field of medicine, quickly and accurately determining whether a tumor is malignant or benign is key to treatment. In this paper, K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), and Logistic Regression were applied to predict the classification of thyroid, Her-2, PR, ER, Ki67, metastasis, and lymph-node status in breast cancer, in order to recognize benign and malignant breast tumors and thereby aid the diagnosis of breast cancer. The results showed that the highest classification accuracy of LDA was 88.56%, while KNN and Logistic Regression performed better than LDA, with the best accuracy reaching 96.30%.
Age and gender estimation using Region-SIFT and multi-layered SVM
NASA Astrophysics Data System (ADS)
Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun
2018-04-01
In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The proposed framework entails three steps. The first step is landmark-based face alignment. The second step is feature extraction, in which we introduce a region-SIFT feature extraction method based on facial landmarks: we first define sub-regions of the face and then extract SIFT features from each sub-region. To reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using a multi-layered Support Vector Machine (SVM) for efficient classification. Rather than estimating gender and age independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collected a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out on the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
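The multi-layered scheme described above routes each sample through a gender classifier first and then through a gender-specific age classifier. The routing logic is simple to sketch; the stand-in models and feature names below are invented for illustration (the real layers would be trained SVMs over region-SIFT features):

```python
def make_multilayer_classifier(gender_model, age_models):
    """First layer predicts gender; second layer applies that gender's age model."""
    def classify(features):
        gender = gender_model(features)
        return gender, age_models[gender](features)
    return classify

# Illustrative stand-in models with hypothetical feature names and thresholds.
gender_model = lambda f: "female" if f["jaw_width"] < 0.5 else "male"
age_models = {
    "female": lambda f: "young" if f["wrinkle_score"] < 0.3 else "old",
    "male":   lambda f: "young" if f["wrinkle_score"] < 0.4 else "old",
}
classify = make_multilayer_classifier(gender_model, age_models)
```

The design choice is that each age model only ever sees samples of one gender, so its decision boundary can specialize, which is the source of the reported accuracy gain over independent estimation.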
Yücel, Yasin; Sultanoğlu, Pınar
2013-09-01
Chemical characterisation was carried out on 45 honey samples collected from the Hatay region of Turkey. The concentrations of 17 elements were determined by inductively coupled plasma optical emission spectrometry (ICP-OES). Ca, K, Mg and Na were the most abundant elements, with mean contents of 219.38, 446.93, 49.06 and 95.91 mg kg(-1), respectively. The trace element mean contents ranged between 0.03 and 15.07 mg kg(-1). Chemometric methods such as principal component analysis (PCA) and cluster analysis (CA) were applied to classify the honey according to mineral content. The most important principal component (PC) was strongly associated with the values of Al, B, Cd and Co. CA showed eight clusters corresponding to the eight botanical origins of the honey. PCA explained 75.69% of the variance with the first six PC variables. Chemometric analysis of the analytical data allowed accurate classification of the honey samples according to origin. Copyright © 2013 Elsevier Ltd. All rights reserved.
Seismic Data Analysis through Multi-Class Classification.
NASA Astrophysics Data System (ADS)
Anderson, P.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted twenty experiments over varying time and frequency bands on 5000 seismic signals, with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We used a multi-class approach by clustering the data through various techniques. Dimensionality reduction was examined through the use of wavelet transforms with the coiflet mother wavelet and various numbers of coefficients, to explore possible computational-time versus accuracy trade-offs. Three and four classes were generated from the clustering techniques and examined, with the three-class approach producing the most accurate and realistic results.
Evolving land cover classification algorithms for multispectral and multitemporal imagery
NASA Astrophysics Data System (ADS)
Brumby, Steven P.; Theiler, James P.; Bloch, Jeffrey J.; Harvey, Neal R.; Perkins, Simon J.; Szymanski, John J.; Young, Aaron C.
2002-01-01
The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.
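The K-means step used above to combine individual classification results follows Lloyd's algorithm: alternate assigning each point to its nearest center and recomputing the centers. A minimal sketch (not the GENIE implementation):

```python
import math
import random

def kmeans(points, k, iters=25, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from k distinct data points
    assign = []
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = tuple(sum(col) / len(col) for col in zip(*members))
    return centers, assign
```

When the "points" are per-pixel vectors of individual classifier outputs, the resulting clusters group pixels on which the classifiers agree, which is one simple way to fuse an ensemble of land-cover maps.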
Ihmaid, Saleh K; Ahmed, Hany E A; Zayed, Mohamed F; Abadleh, Mohammed M
2016-01-30
The main step in a successful drug discovery pipeline is the identification of small potent compounds that selectively bind to the target of interest with high affinity. However, there is still a shortage of efficient and accurate computational methods with the capability to study, and hence predict, compound selectivity properties. In this work, we propose an affordable machine learning method to perform compound selectivity classification and prediction. For this purpose, we collected compounds with reported activity and built a selectivity database formed of 153 cathepsin K and S inhibitors that are considered of medicinal interest. This database comprises three compound sets: two selective ones (K/S and S/K) and one non-selective one (KS). We subjected this database to the selectivity classification tool 'Emergent Self-Organizing Maps' to explore its capability to differentiate cathepsin inhibitors selective for one target over the other. The method exhibited good clustering performance for selective ligands, with high accuracy (up to 100%). Among the possibilities, BAPs and MACCS molecular structural fingerprints were used for this classification. The results demonstrated the ability of the method to support structure-selectivity relationship interpretation, and selectivity markers were identified for the design of further novel inhibitors with high activity and target selectivity.
Transperineal ultrasonography: First level exam in IBD patients with perianal disease.
Terracciano, Fulvia; Scalisi, Giuseppe; Bossa, Fabrizio; Scimeca, Daniela; Biscaglia, Giuseppe; Mangiacotti, Michele; Valvano, Maria Rosa; Perri, Francesco; Simeone, Anna; Andriulli, Angelo
2016-08-01
Pelvic magnetic resonance imaging (MRI) represents the front-line method for evaluating perianal disease in patients with inflammatory bowel disease (IBD). Recently, transperineal ultrasonography (TPUS) has been proposed as a simple, safe, time-sparing and useful diagnostic technique to assess different pathological conditions of the pelvic floor. The aim of this prospective single-centre study was to evaluate the accuracy of TPUS versus MRI for the detection and classification of perianal disease in IBD patients. From November 2013 to November 2014, 28 IBD patients underwent both TPUS and MRI. Fistulae and abscesses were classified according to Parks' and the AGA's classification methods. Concordance was assessed by the k statistic. Overall, 33 fistulae and 8 abscesses were recognized by TPUS (30 and 7 by MRI, respectively). The agreement between TPUS and MRI was 75% according to Parks' classification (k=0.67) and 86% according to the AGA classification (k=0.83), while it was 36% (k=0.34) for classifying abscesses. TPUS proved to be as accurate as MRI for detecting superficial and small abscesses and for classifying perianal disease. Both examinations may be performed at the initial presentation of the patient, but TPUS is a cheaper, time-sparing procedure. The optimal use of TPUS might be in the follow-up of patients. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
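The k (kappa) statistic used above corrects raw percent agreement between two raters for the agreement expected by chance, k = (p_o - p_e) / (1 - p_e). A minimal sketch with hypothetical Parks' grades (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Parks' grades assigned to the same 12 fistulae by TPUS and MRI.
tpus = ["intersphincteric"] * 5 + ["transsphincteric"] * 4 + ["suprasphincteric"] * 3
mri = ["intersphincteric"] * 4 + ["transsphincteric"] * 5 + ["suprasphincteric"] * 3

# 11/12 fistulae agree, but kappa discounts the agreement expected by chance.
kappa = cohen_kappa_score(tpus, mri)
```

Here the raw agreement is 91.7%, while kappa is lower (about 0.87), which is why the paper reports both the percent agreement and the k value.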
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. 
The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
A deep learning-based multi-model ensemble method for cancer prediction.
Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong
2018-01-01
Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
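A greatly simplified stacking sketch of the idea, with scikit-learn stand-ins for the paper's five classifiers, a small neural network as the ensembling stage, and synthetic data in place of RNA-seq expression profiles:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a (samples x selected genes) expression matrix.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Five heterogeneous base classifiers (hypothetical choices, not the paper's).
base = [
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True, random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("nb", GaussianNB()),
]
# A small neural network learns to combine the base classifiers' outputs.
ens = StackingClassifier(estimators=base,
                         final_estimator=MLPClassifier(max_iter=2000,
                                                       random_state=0))
acc = ens.fit(Xtr, ytr).score(Xte, yte)
```

The key design point is that the meta-learner sees the base classifiers' cross-validated predictions rather than the raw features, so it can weight each model where it is reliable instead of taking a simple majority vote.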
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed for both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with embedding rates between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
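Expectation Maximization with Gaussian mixture models, the technique investigated here, can be sketched as follows; the two-dimensional features and the component means are hypothetical stand-ins for real steganalysis features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical steganalysis features: clean images and two embedding rates
# produce feature distributions with shifted means.
clean = rng.normal(0.0, 0.3, size=(200, 2))
stego_1pct = rng.normal(1.5, 0.3, size=(200, 2))
stego_10pct = rng.normal(3.0, 0.3, size=(200, 2))
X = np.vstack([clean, stego_1pct, stego_10pct])

# EM fits one Gaussian component per image class; each fitted cluster can
# then be associated with "clean" or a particular embedding rate.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
```

For the blind (anomaly-detection) variant, one would fit the mixture to clean images only and flag samples with low likelihood under the fitted model.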
Kwon, Min-Seok; Nam, Seungyoon; Lee, Sungyoung; Ahn, Young Zoo; Chang, Hae Ryung; Kim, Yon Hui; Park, Taesung
2017-01-01
The recent creation of enormous, cancer-related “Big Data” public depositories represents a powerful means for understanding tumorigenesis. However, a consistently accurate system for clinically evaluating single/multi-biomarkers remains lacking, and it has been asserted that oft-failed clinical advancement of biomarkers occurs within the very early stages of biomarker assessment. To address these challenges, we developed a clinically testable, web-based tool, CANcer-specific single/multi-biomarker Evaluation System (CANES), to evaluate biomarker effectiveness, across 2,134 whole transcriptome datasets, from 94,147 biological samples (from 18 tumor types). For user-provided single/multi-biomarkers, CANES evaluates the performance of single/multi-biomarker candidates, based on four classification methods, support vector machine, random forest, neural networks, and classification and regression trees. In addition, CANES offers several advantages over earlier analysis tools, including: 1) survival analysis; 2) evaluation of mature miRNAs as markers for user-defined diagnostic or prognostic purposes; and 3) provision of a “pan-cancer” summary view, based on each single marker. We believe that such “landscape” evaluation of single/multi-biomarkers, for diagnostic therapeutic/prognostic decision-making, will be highly valuable for the discovery and “repurposing” of existing biomarkers (and their specific targeted therapies), leading to improved patient therapeutic stratification, a key component of targeted therapy success for the avoidance of therapy resistance. PMID:29050243
a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. Physics-based radiative transfer models can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be derived directly from DN data; we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that, for HJ, the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction. The reflectance difference between slopes facing the sun and slopes facing away from the sun was also reduced after correction.
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
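The BWT/FM-index machinery at the heart of Centrifuge can be illustrated with a toy backward-search counter. This naive sketch recomputes rank counts on the fly, whereas a real FM-index stores compressed rank structures; it is an illustration of the indexing idea, not Centrifuge's implementation:

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' terminates)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_count(bwt_str, pattern):
    """Count occurrences of pattern using FM-index backward search."""
    first_col = sorted(bwt_str)
    # C[c]: number of characters in the text lexicographically smaller than c.
    C = {c: first_col.index(c) for c in set(bwt_str)}
    lo, hi = 0, len(bwt_str)          # current suffix-array interval
    for c in reversed(pattern):
        if c not in C:
            return 0
        # LF-mapping: Occ(c, i) computed naively as bwt_str[:i].count(c).
        lo = C[c] + bwt_str[:lo].count(c)
        hi = C[c] + bwt_str[:hi].count(c)
        if lo >= hi:
            return 0
    return hi - lo

genome_bwt = bwt("ACGTACGTACG")   # toy "genome"
```

Because each query character only narrows an interval, the search cost depends on the pattern length, not the reference size, which is what lets Centrifuge classify millions of reads against a multi-gigabase index quickly.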
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Uddin, M B; Chow, C M; Su, S W
2018-03-26
Sleep apnea (SA), a common sleep disorder, can significantly decrease the quality of life, and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The normal diagnostic process of SA using polysomnography is costly and time consuming. In addition, the accuracy of different classification methods to detect SA varies with the use of different physiological signals. If an effective, reliable, and accurate classification method is developed, then the diagnosis of SA and its associated treatment will be time-efficient and economical. This study aims to systematically review the literature and present an overview of classification methods to detect SA using respiratory and oximetry signals and address the automated detection approach. Sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Both airflow and oxygen saturation signals alone were effective in detecting SA in the case of binary decision-making, whereas multiple signals were good for multi-class detection. In addition, some machine learning methods were superior to the other classification methods for SA detection using respiratory and oximetry signals. To deal with the respiratory and oximetry signals, a good choice of classification method as well as the consideration of associated factors would result in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.
Automatic grade classification of Barrett's Esophagus through feature enhancement
NASA Astrophysics Data System (ADS)
Ghatwary, Noha; Ahmed, Amr; Ye, Xujiong; Jalab, Hamid
2017-03-01
Barrett's Esophagus (BE) is a precancerous condition that affects the esophagus and carries the risk of developing into esophageal adenocarcinoma. BE is the process of metaplastic intestinal epithelium developing and replacing the normal cells in the esophageal area. The detection of BE is considered difficult due to its appearance and properties. Diagnosis is usually done through both endoscopy and biopsy. Recently, Computer Aided Diagnosis systems have been developed to support physicians' opinions when facing difficulty in the detection/classification of different types of diseases. In this paper, an automatic classification of the Barrett's Esophagus condition is introduced. The presented method enhances the internal features of a Confocal Laser Endomicroscopy (CLE) image by utilizing a proposed enhancement filter. This filter depends on fractional differentiation and integration, which improve the features in the discrete wavelet transform of an image. Then, various features are extracted from each enhanced image at different levels for the multi-classification process. Our approach is validated on a dataset consisting of 32 patients with 262 images of different histology grades. The experimental results demonstrated the efficiency of the proposed technique. Our method helps clinicians towards more accurate classification. This potentially helps to reduce the number of biopsies needed for diagnosis, facilitates the regular monitoring of treatment and of the development of the patient's case, and can help train doctors with the new endoscopy technology. Accurate automatic classification is particularly important for the Intestinal Metaplasia (IM) type, which can progress to deadly cancer. Hence, this work contributes automatic classification that facilitates early intervention/treatment while decreasing the number of biopsy samples needed.
Emotion recognition from multichannel EEG signals using K-nearest neighbor classification.
Li, Mi; Xu, Hongpei; Liu, Xingwang; Lu, Shengfu
2018-04-27
Many studies have been done on emotion recognition based on multi-channel electroencephalogram (EEG) signals. This paper explores the influence of different frequency bands and different numbers of channels on the emotion recognition accuracy of EEG signals. We classified the emotional states in the valence and arousal dimensions using different combinations of EEG channels. Firstly, the DEAP default preprocessed data were normalized. Next, EEG signals were divided into four frequency bands using the discrete wavelet transform, and entropy and energy were calculated as features for a K-nearest neighbor classifier. The classification accuracies of the 10, 14, 18 and 32 EEG channels based on the gamma frequency band were 89.54%, 92.28%, 93.72% and 95.70% in the valence dimension and 89.81%, 92.24%, 93.69% and 95.69% in the arousal dimension. As the number of channels increases, the classification accuracy of emotional states also increases; the classification accuracy of the gamma frequency band is greater than that of the beta frequency band, followed by the alpha and theta frequency bands. This paper provides a reference for choosing frequency bands and channels for EEG-based emotion recognition.
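The feature-extraction-plus-KNN stage described above can be sketched as follows. For brevity, this illustration derives per-band energy and spectral entropy from an FFT rather than the paper's discrete wavelet transform, and the two "emotional states" are synthetic signals, not DEAP recordings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

FS = 128  # sampling rate (Hz), as in DEAP's preprocessed data
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(signal):
    """Per-band energy and spectral entropy (FFT stand-in for the DWT)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        band = power[(freqs >= lo) & (freqs < hi)]
        p = band / band.sum()
        feats += [band.sum(), -(p * np.log2(p + 1e-12)).sum()]  # energy, entropy
    return feats

rng = np.random.default_rng(2)
t = np.arange(FS * 4) / FS   # 4-second epochs
# Two hypothetical states differing mainly in alpha vs. gamma content.
calm = [np.sin(2 * np.pi * 10 * t + rng.uniform(0, 6))
        + 0.5 * rng.standard_normal(t.size) for _ in range(60)]
excited = [np.sin(2 * np.pi * 35 * t + rng.uniform(0, 6))
           + 0.5 * rng.standard_normal(t.size) for _ in range(60)]

X = np.array([band_features(s) for s in calm + excited])
y = np.array([0] * 60 + [1] * 60)
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
```

In a multi-channel setting, the feature vectors of the selected channels would simply be concatenated before classification, which is how the channel count enters the accuracy comparison.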
Bush encroachment monitoring using multi-temporal Landsat data and random forests
NASA Astrophysics Data System (ADS)
Symeonakis, E.; Higginbottom, T.
2014-11-01
It is widely accepted that land degradation and desertification (LDD) are serious global threats to humans and the environment. Around a third of savannahs in Africa are affected by LDD processes that may lead to substantial declines in ecosystem functioning and services. Indirectly, LDD can be monitored using relevant indicators. The encroachment of woody plants into grasslands, and the subsequent conversion of savannahs and open woodlands into shrublands, has attracted a lot of attention over the last decades and has been identified as a potential indicator of LDD. Mapping bush encroachment over large areas can only effectively be done using Earth Observation (EO) data and techniques. However, the accurate assessment of large-scale savannah degradation through bush encroachment with satellite imagery remains a formidable task due to the fact that on the satellite data vegetation variability in response to highly variable rainfall patterns might obscure the underlying degradation processes. Here, we present a methodological framework for the monitoring of bush encroachment-related land degradation in a savannah environment in the Northwest Province of South Africa. We utilise multi-temporal Landsat TM and ETM+ (SLC-on) data from 1989 until 2009, mostly from the dry-season, and ancillary data in a GIS environment. We then use the machine learning classification approach of random forests to identify the extent of encroachment over the 20-year period. The results show that in the area of study, bush encroachment is as alarming as permanent vegetation loss. The classification of the year 2009 is validated yielding low commission and omission errors and high k-statistic values for the grasses and woody vegetation classes. Our approach is a step towards a rigorous and effective savannah degradation assessment.
NASA Astrophysics Data System (ADS)
Knoefel, Patrick; Loew, Fabian; Conrad, Christopher
2015-04-01
Crop maps based on classification of remotely sensed data are of increasing importance in agricultural management. This calls for more detailed knowledge about the reliability of such spatial information. However, classification of agricultural land use is often limited by the high spectral similarity of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps may in turn influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, there are phenological phases in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics with phenological events is promising. The focus of this study is the separation of spectrally similar cereal crops such as winter wheat, barley, and rye at two test sites in Germany, "Harz/Central German Lowland" and "Demmin". This study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of the available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed based on the probabilistic soft output of the RF algorithm on a per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty.
The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and helps the user to allocate error in crop maps.
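The per-field uncertainty measure described above, the Shannon entropy of the classifier's probabilistic soft output, can be sketched as follows (synthetic data, not the study's RapidEye features):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-field spectral features with three crop classes.
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
proba = rf.predict_proba(Xte)          # per-field soft output

# Shannon entropy of the class-membership probabilities:
# 0 = the forest is certain, log2(n_classes) = maximally uncertain.
entropy = -(proba * np.log2(proba + 1e-12)).sum(axis=1)
```

Mapping this entropy per field highlights exactly where the crop map is trustworthy, which is the "valuable addition to traditional accuracy assessments" the study argues for.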
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard machine learning data set from a public machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique.
Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial, being based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
Hybrid analysis for indicating patients with breast cancer using temperature time series.
Silva, Lincoln F; Santos, Alair Augusto S M D; Bravo, Renato S; Silva, Aristófanes C; Muchaluat-Saade, Débora C; Conci, Aura
2016-07-01
Breast cancer is the most common cancer among women worldwide. Diagnosis and treatment in early stages increase cure chances. The temperature of cancerous tissue is generally higher than that of healthy surrounding tissues, making thermography an option to be considered in screening strategies for this cancer type. This paper proposes a hybrid methodology for analyzing dynamic infrared thermography in order to indicate patients at risk of breast cancer, using unsupervised and supervised machine learning techniques, which characterizes the methodology as hybrid. Dynamic infrared thermography monitors, or quantitatively measures, temperature changes on the examined surface after a thermal stress. During the dynamic infrared thermography examination, a sequence of breast thermograms is generated. In the proposed methodology, this sequence is processed and analyzed by several techniques. First, the region of the breasts is segmented and the thermograms of the sequence are registered. Then, temperature time series are built and the k-means algorithm is applied to these series using various values of k. The clustering formed by the k-means algorithm for each k value is evaluated using clustering validation indices, generating values treated as features in the classification model construction step. A data mining tool was used to solve the combined algorithm selection and hyperparameter optimization (CASH) problem in classification tasks. Besides the classification algorithm recommended by the data mining tool, classifiers based on Bayesian networks, neural networks, decision rules and decision trees were executed on the data set used for evaluation. Test results support that the proposed analysis methodology is able to indicate patients with breast cancer. Among 39 tested classification algorithms, K-Star and Bayes Net presented 100% classification accuracy.
Furthermore, among the Bayes Net, multi-layer perceptron, decision table and random forest classification algorithms, an average accuracy of 95.38% was obtained. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
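The unsupervised stage described above, running k-means for several values of k and turning clustering validation indices into features, can be sketched as follows, with synthetic temperature series standing in for the registered thermogram data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

rng = np.random.default_rng(3)
# Hypothetical temperature time series for one patient: one row per breast
# region, one column per thermogram in the acquisition sequence.
series = np.vstack([rng.normal(32.0, 0.2, (50, 20)),
                    rng.normal(34.5, 0.2, (50, 20))])

features = []
for k in range(2, 6):                       # vary k as in the paper
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(series)
    # Validation indices of each clustering become one patient's features.
    features += [silhouette_score(series, labels),
                 calinski_harabasz_score(series, labels),
                 davies_bouldin_score(series, labels)]
```

The resulting per-patient feature vector is what the supervised classifiers (K-Star, Bayes Net, etc.) would then be trained on.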
Chen, Zhaoxue; Yu, Haizhong; Chen, Hao
2013-12-01
To solve the problem of traditional K-means clustering, in which the initial cluster centers are selected randomly, we propose a new K-means segmentation algorithm based on robustly selecting the 'peaks' corresponding to white matter, gray matter and cerebrospinal fluid in the multi-peak gray-level histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means cluster centers and can segment an MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm overcomes many shortcomings of the traditional K-means clustering method, such as low efficiency, poor accuracy and robustness, and long running time. The histogram 'peak' selection idea of the proposed segmentation method is of broader applicability.
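The histogram-peak initialization idea can be sketched as follows; the gray-level modes, smoothing window, and peak-detection parameters are illustrative choices, not the authors':

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Hypothetical brain-MRI gray levels: three tissue modes (CSF, GM, WM).
pixels = np.concatenate([rng.normal(60, 8, 4000),
                         rng.normal(120, 8, 6000),
                         rng.normal(180, 8, 5000)]).clip(0, 255)

hist, edges = np.histogram(pixels, bins=256, range=(0, 255))
smooth = np.convolve(hist, np.ones(9) / 9, mode="same")   # suppress noise peaks
peaks, _ = find_peaks(smooth, distance=20)
centers = edges[peaks][np.argsort(smooth[peaks])[::-1][:3]]  # 3 strongest peaks

# Use the histogram peaks as deterministic initial centers for K-means,
# replacing the random initialization of the traditional algorithm.
km = KMeans(n_clusters=3, init=np.sort(centers).reshape(-1, 1), n_init=1)
labels = km.fit_predict(pixels.reshape(-1, 1))
```

Because the initialization is deterministic and already near the tissue modes, repeated runs give the same segmentation, which is the stability gain the abstract claims over random initialization.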
Optical thermometry using fluorescence intensities multi-ratios in NaGdTiO4:Yb3+/Tm3+ phosphors
NASA Astrophysics Data System (ADS)
Zhou, Aihua; Song, Feng; Song, Feifei; Feng, Ming; Adnan, Khan; Ju, Dandan; Wang, Xueqing
2018-04-01
The NaGdTiO4:Yb3+/Tm3+ phosphor was synthesized by the traditional solid-state reaction method, and its down-conversion and up-conversion luminescence properties were systematically studied. The results indicate that the electric dipole-dipole interaction is the main mechanism for the luminescence quenching. The fact that the ratios of the up-conversion intensities, i.e., I795nm/I798nm, I807nm/I798nm, and I812nm/I798nm, increase linearly with temperature (100 K-300 K) provides a simple and accurate temperature measurement method. Using multiple ratios can be more accurate than using only one ratio, allowing for self-referenced temperature determination. NaGdTiO4:Yb3+/Tm3+ is therefore promising for optical temperature sensors.
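The multi-ratio readout can be sketched as a set of linear calibrations, each inverted to a temperature estimate and then averaged; all slopes and noise levels below are hypothetical, not fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical calibration: each intensity ratio grows linearly with T
# over the reported 100-300 K range.
T = np.linspace(100, 300, 21)
true_slopes = [2.1e-3, 3.4e-3, 4.0e-3]          # one slope per ratio pair
ratios = [1.0 + s * T + rng.normal(0, 0.01, T.size) for s in true_slopes]

# Fit each ratio-vs-T line once during calibration.
fits = [np.polyfit(T, r, 1) for r in ratios]

def read_temperature(measured):
    """Invert every fitted line and average: the multi-ratio reading."""
    return np.mean([(m - b) / a for m, (a, b) in zip(measured, fits)])

measured = [1.0 + s * 200.0 for s in true_slopes]   # ratios observed at 200 K
```

Averaging several independent ratio inversions reduces the readout noise relative to any single ratio, which is the accuracy advantage the abstract attributes to the multi-ratio scheme.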
Shao, Wei; Liu, Mingxia; Zhang, Daoqiang
2016-01-01
The systematic study of subcellular location pattern is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine. The important structural information of cellular compartments is missed. To deal with this problem for developing more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) by employing the prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. dqzhang@nuaa.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
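The error-correcting-output-coding construction, a codeword matrix whose columns each define a binary task and whose rows are decoded by nearest codeword, can be sketched as follows; the codeword matrix and the Iris stand-in data are illustrative, not the SC-PSorter design:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Hypothetical codeword matrix: one row per class, one column per binary task.
# In SC-PSorter the columns would encode cell-structure relationships.
codewords = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])

# Train one binary classifier per codeword column.
columns = [SVC().fit(Xtr, codewords[ytr, j]) for j in range(codewords.shape[1])]
bits = np.column_stack([clf.predict(Xte) for clf in columns])

# Decode: assign each sample to the class with the nearest codeword (Hamming).
pred = np.argmin([(bits != cw).sum(axis=1) for cw in codewords], axis=0)
acc = (pred == yte).mean()
```

Because the codewords differ in more than one bit, a single binary classifier's mistake can still be outvoted during decoding, which is the error-correcting property the framework relies on.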
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps that assist emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps of the area impacted by a nuclear material release. The proposed methodology integrates results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over the Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time-series features. Overall accuracies of the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The fused approach also offered supplementary advantages, such as correction for cloud cover and independence from time of year. We conclude that this method can generate highly accurate land cover maps using coarse spatial resolution time-series satellite imagery and a single-date, high spatial resolution, multi-spectral image.
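The distance-based pixel classification described above can be sketched as follows; the band values, class means, and noise level are synthetic stand-ins, not the RapidEye/MODIS data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 5-band pixel spectra for four land classes
# (stand-ins for forest, urban, water, vegetation)
class_means = np.array([[0.1, 0.2, 0.3, 0.4, 0.5],
                        [0.6, 0.2, 0.1, 0.8, 0.3],
                        [0.3, 0.7, 0.6, 0.2, 0.9],
                        [0.9, 0.5, 0.8, 0.6, 0.1]])
X_train = np.vstack([m + rng.normal(scale=0.05, size=(100, 5)) for m in class_means])
y_train = np.repeat(np.arange(4), 100)

# Per-class mean and a shared covariance estimated from training pixels
means = np.array([X_train[y_train == c].mean(axis=0) for c in range(4)])
cov = np.cov(X_train - means[y_train], rowvar=False)
cov_inv = np.linalg.inv(cov)

def classify(pixels, metric="mahalanobis"):
    """Assign each pixel to the class with the smallest distance."""
    d = pixels[:, None, :] - means[None, :, :]          # (n, classes, bands)
    if metric == "euclidean":
        dist = np.einsum("ncb,ncb->nc", d, d)
    else:  # Mahalanobis: accounts for band variances and correlations
        dist = np.einsum("ncb,bk,nck->nc", d, cov_inv, d)
    return np.argmin(dist, axis=1)

test_pixels = np.vstack([m + rng.normal(scale=0.05, size=(50, 5)) for m in class_means])
y_test = np.repeat(np.arange(4), 50)
acc_mahal = (classify(test_pixels) == y_test).mean()
acc_eucl = (classify(test_pixels, "euclidean") == y_test).mean()
```

The Mahalanobis metric matters when bands have unequal variances or are correlated; with the isotropic noise used here the two metrics behave similarly.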
NASA Astrophysics Data System (ADS)
Moscati, R. J.; Marshall, B. D.
2005-12-01
X-ray microfluorescence (XRMF) spectrometry is a rapid, accurate technique to map element abundances of rock surfaces (such as thin-section billets, the block remaining when a thin section is prepared). Scanning a specimen with a collimated primary X-ray beam (100 μm diameter) generates characteristic secondary X-rays that yield the relative chemical abundances for the major rock-/mineral-forming analytes (such as Si, Al, K, Ca, and Fe). When Cu-rich epoxy is used to impregnate billets, XRMF also can determine porosity from the Cu abundance. Common billet scan size is 30 x 15 mm and the typical mapping time rarely exceeds 2.5 hrs (much faster than traditional point-counting). No polishing or coating is required for the billets, although removing coarse striations or gross irregularities on billet surfaces should improve the spatial accuracy of the maps. Background counts, spectral artifacts, and diffraction peaks typically are inconsequential for maps of major elements. An operational check is performed after every 10 analyses on a standard that contains precisely measured areas of Mn and Mo. Reproducibility of the calculated area ratio of Mn:Mo is consistently within 5% of the known value. For each billet, the single element maps (TIFF files) generated by XRMF are imported into MultiSpec© (a program developed at Purdue University for analysis of multispectral image data, available from http://dynamo.ecn.purdue.edu/~biehl/MultiSpec/) where mineral phases can be spectrally identified and their relative abundances quantified. The element maps for each billet are layered to produce a multi-element file for mineral classification and statistical processing, including modal estimates of mineral abundance. Although mineral identification is possible even if the mineralogy is unknown, prior petrographic examination of the corresponding thin section yields more accurate maps because the software can be set to identify all similar pixels. 
Caution is needed when using MultiSpec© to distinguish mineral phases with similar chemistry (for example, opal and quartz) and minerals that occupy very small surface areas (<10 pixels). In either case, careful petrography and informed use of the software allow MultiSpec© to be used rapidly to create accurate mineral maps of rock and thin-section billet surfaces. This technique, for example, has allowed quantitative estimates of calcite and silica abundances to be determined on about 200 samples of secondary mineral coatings from the unsaturated zone at Yucca Mountain, Nevada.
Reading Profiles in Multi-Site Data With Missingness.
Eckert, Mark A; Vaden, Kenneth I; Gebregziabher, Mulugeta
2018-01-01
Children with reading disability exhibit varied deficits in reading and cognitive abilities that contribute to their reading comprehension problems. Some children exhibit primary deficits in phonological processing, while others can exhibit deficits in oral language and executive functions that affect comprehension. This behavioral heterogeneity is problematic when missing data prevent the characterization of different reading profiles, which often occurs in retrospective data sharing initiatives without coordinated data collection. Here we show that reading profiles can be reliably identified based on Random Forest classification of incomplete behavioral datasets, after the missForest method is used to multiply impute missing values. Results from simulation analyses showed that reading profiles could be accurately classified across degrees of missingness (e.g., ∼5% classification error for 30% missingness across the sample). The application of missForest to a real multi-site dataset with missingness (n = 924) showed that reading disability profiles significantly and consistently differed in reading and cognitive abilities for cases with and without missing data. The results of validation analyses indicated that the reading profiles (cases with and without missing data) exhibited significant differences for an independent set of behavioral variables that were not used to classify reading profiles. Together, the results show how multiple imputation can be applied to the classification of cases with missing data and can increase the integrity of results from multi-site open access datasets.
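The iterative-imputation idea can be sketched with a simplified stand-in: initialize missing entries with column means, then repeatedly re-estimate each missing value from the most similar rows. This is not the missForest algorithm itself (which fits a random forest per variable); the data and the k-nearest-rows regressor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic behavioral scores: four correlated reading measures
n = 200
latent = rng.normal(size=n)
X_full = np.column_stack([latent + rng.normal(scale=0.2, size=n) for _ in range(4)])

# Knock out ~20% of entries at random
mask = rng.random(X_full.shape) < 0.2
X = np.where(mask, np.nan, X_full)

def impute_iterative_knn(X, k=10, n_iter=5):
    """Mean-initialize, then refine each missing cell from its k most
    similar rows (a simplified analogue of missForest's iterative loop)."""
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    for _ in range(n_iter):
        for i, j in zip(*np.nonzero(missing)):
            # distance over the other columns, using current imputations
            d = np.linalg.norm(np.delete(X, j, axis=1) - np.delete(X[i], j), axis=1)
            d[i] = np.inf                     # exclude the row itself
            neighbors = np.argsort(d)[:k]
            X[i, j] = X[neighbors, j].mean()
    return X

X_imp = impute_iterative_knn(X)
rmse_knn = np.sqrt(np.mean((X_imp[mask] - X_full[mask]) ** 2))
rmse_mean = np.sqrt(np.mean(
    (np.take(np.nanmean(X, axis=0), np.nonzero(mask)[1]) - X_full[mask]) ** 2))
```

Because the four measures share a latent factor, similarity-based imputation recovers missing scores far better than plain column means, which is the property the profile classification relies on.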
NASA Technical Reports Server (NTRS)
Kocurek, Michael J.
2005-01-01
The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.
Load Weight Classification of The Quayside Container Crane Based On K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Bingqian; Hu, Xiong; Tang, Gang; Wang, Yide
2017-07-01
Precise knowledge of the load weight in each operation of a quayside container crane is important for accurately assessing the crane's service life. The load weight is directly related to the vibration intensity. By studying the vibration of the crane's hoist motor in the radial and axial directions, we classify the load using the K-means clustering algorithm and quantitative statistical analysis. Correlation analysis shows that vibration in the radial direction is significantly and positively correlated with that in the axial direction, so data from only one direction suffice, improving efficiency without degrading the accuracy of load classification. The proposed method can well represent the real-time working condition of the crane.
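The two steps described, checking the radial/axial correlation and then clustering one direction's intensities with K-means, can be sketched as follows. The vibration intensities, the three load groups, and the quantile-based initialization are illustrative assumptions, not the crane data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-operation vibration intensities of the hoist motor:
# three load-weight groups (light / medium / heavy); axial intensity is
# assumed strongly correlated with radial intensity, as reported.
radial = np.concatenate([rng.normal(loc, 0.1, 80) for loc in (1.0, 2.0, 3.0)])
axial = 0.8 * radial + rng.normal(0.0, 0.05, radial.size)

# Correlation check: if |r| is high, one direction suffices
r = np.corrcoef(radial, axial)[0, 1]

def kmeans_1d(x, k, n_iter=50):
    """Plain 1-D k-means with quantile-based initialization."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, np.sort(centers)

labels, centers = kmeans_1d(radial, k=3)
```

The recovered cluster centers correspond to the typical intensity of each load-weight class, so new operations can be assigned to a class by nearest center.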
Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng
2017-04-01
Accurate classification of the different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. In this paper, we therefore propose an effective framework that segments the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method, improved by fully utilizing 3-dimensional (3D) information, and classifies the tooth by unsupervised learning, i.e., the k-means++ method. To evaluate the proposed method, experiments were conducted on extensive datasets of mandibular molars. The experimental results show that our method achieves higher accuracy and robustness than three other clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study
NASA Astrophysics Data System (ADS)
Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash
2018-02-01
Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, such as bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis and confirmed by vibration monitoring. In later months, in a similar motor, vibration monitoring predicted a bearing failure that current signature analysis confirmed. In both cases, the predictions were found to be accurate after the motors were dismantled. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) imposes not only a substantial financial burden on the health care system but also an emotional burden on patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI) at the earliest stages is increasingly critical. However, high dimensionality and imbalanced data are two major challenges in computer-aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictors) and the disease status (response). To capture the more complicated, flexible relationship, we propose multi-kernel-based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm-based multi-kernel learning (MKMFA) to achieve sparsity at the region-of-interest (ROI) level, which simultaneously selects a subset of relevant brain regions and learns a dimensionality-reducing transformation. Meanwhile, multi-kernel over-sampling (MKOS) generates synthetic instances in the optimal kernel space induced by MKMFA to compensate for the imbalanced class distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary and multi-class) on all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method outperforms multiple comparable methods, but also identify relevant imaging biomarkers consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
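The multi-kernel ingredient can be illustrated in isolation: several base kernels are combined with fixed weights into one Gram matrix, which any kernel method can then consume. In MKMFA the combination weights are learned under an ℓ2,1-norm penalty; here they are fixed by hand, and the data and the "kernel nearest class mean" rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for ROI features from two classes (e.g., patients vs controls)
X = np.vstack([rng.normal(-1.0, 0.7, size=(60, 10)),
               rng.normal(+1.0, 0.7, size=(60, 10))])
y = np.repeat([0, 1], 60)

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multi_kernel(A, B, betas=(0.5, 0.5), gammas=(0.01, 0.1)):
    """Convex combination of base RBF kernels. In MKMFA the weights are
    learned; here they are fixed illustrative values."""
    return sum(b * rbf_kernel(A, B, g) for b, g in zip(betas, gammas))

# "Kernel nearest class mean": score a sample by its average combined-kernel
# similarity to each class, then pick the higher-scoring class.
K = multi_kernel(X, X)
scores = np.column_stack([K[:, y == c].mean(axis=1) for c in (0, 1)])
preds = scores.argmax(axis=1)
accuracy = (preds == y).mean()
```

Combining kernels at different bandwidths lets the model capture structure at several scales without committing to a single nonlinearity, which is the motivation for learning the weights in the paper's approach.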
NASA Astrophysics Data System (ADS)
Schudlo, Larissa C.; Chau, Tom
2015-12-01
Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracy are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10-dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance communication rates offered by NIRS-BCIs, improving the practicality of this technology.
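The information-transfer-rate comparison can be made concrete with the standard Wolpaw ITR formula commonly used in BCI studies; note this is an assumption, since the paper's exact ITR computation (selection timing in particular) is not given here.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), times selection rate."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative comparison at one selection per minute: a ternary system at
# the reported mean accuracy versus a binary system at the same accuracy.
itr_ternary = wolpaw_itr(3, 0.717, 1.0)
itr_binary = wolpaw_itr(2, 0.717, 1.0)
```

At equal accuracy and selection rate, the extra class raises the bits conveyed per selection, which is the mechanism behind the ternary system's higher reported bits/min.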
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
Rajasekaran, Shanmuganathan; Vaccaro, Alexander R; Kanna, Rishi Mugesh; Schroeder, Gregory D; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Maheshwaran, Anupama; Kandziora, Frank
2017-05-01
Although imaging has a major role in the evaluation and management of thoracolumbar spinal trauma, the exact role of computed tomography (CT) and magnetic resonance imaging (MRI), in addition to radiographs, in fracture classification and surgical decision-making is unclear. Spine surgeons (n = 41) from around the world classified 30 thoracolumbar fractures. The cases were presented in a three-step approach: plain radiographs first, followed by CT and then MRI images. Surgeons were asked to classify each fracture according to the AOSpine classification system and choose management at each of the three steps. Surgeons correctly classified 43.4% of fractures with plain radiographs alone; after additionally evaluating CT and MRI images, this percentage increased by a further 18.2% and 2.2%, respectively. AO type A fractures were identified in 51.7% of fractures with radiographs, while the number of type B fractures increased after CT and MRI. The number of type C fractures diagnosed was constant across the three steps. Agreement between radiographs and CT was fair for A-type (k = 0.31) and poor for B-type (k = 0.19) fractures, but it was excellent between CT and MRI (k > 0.87). CT and MRI had similar sensitivity in identifying fracture subtypes, except that MRI had a higher sensitivity (56.5%) for B2 fractures (p < 0.001). The need for surgical fixation was deemed present in 72% of cases based on radiographs alone and increased to 81.7% with CT images (p < 0.0001); the assessment of the need for surgery did not change after MRI (p = 0.77). For accurate classification, radiographs alone were insufficient except for C-type injuries, and CT is mandatory for accurately classifying thoracolumbar fractures. Although MRI did confer a modest gain in sensitivity for B2 injuries, the study does not support the need for routine MRI for classification, assessing instability, or determining the need for surgery.
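The agreement statistic quoted above (k) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal implementation, with a hypothetical toy example (the labels below are invented, not the study's ratings):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels, e.g. fracture types assigned
    from radiographs versus CT: kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical example: AO types assigned from radiographs vs CT
radiograph = ["A", "A", "B", "A", "C", "B", "A", "B", "C", "A"]
ct         = ["A", "B", "B", "A", "C", "B", "A", "A", "C", "A"]
kappa = cohens_kappa(radiograph, ct)
```

Values near 0 indicate chance-level agreement and values near 1 near-perfect agreement, which is how the fair (0.31), poor (0.19), and excellent (>0.87) labels in the abstract are conventionally read.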
Multi-label literature classification based on the Gene Ontology graph.
Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua
2008-12-08
The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.
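The top-down hierarchical strategy studied above can be sketched on a toy graph: a node's children are considered only if the node itself is predicted positive, so predictions automatically respect the ontology structure. The ontology, the keyword-trigger "classifiers", and the example text are all illustrative stand-ins for real GO terms and trained per-node classifiers.

```python
# Minimal top-down hierarchical multi-label classification over a toy
# ontology graph (illustrative node names, not real GO terms).
ontology = {                      # node -> children
    "biological_process": ["metabolism", "signaling"],
    "metabolism": ["glycolysis"],
    "signaling": [],
    "glycolysis": [],
}
keywords = {                      # stand-in for a trained per-node classifier
    "biological_process": {"cell", "protein"},
    "metabolism": {"glucose", "enzyme"},
    "signaling": {"receptor", "kinase"},
    "glycolysis": {"glucose"},
}

def classify_top_down(text, root="biological_process"):
    """Descend from the root; expand a node's children only if the node
    itself fires, enforcing the hierarchy constraint on the label set."""
    tokens = set(text.lower().split())
    predicted, frontier = set(), [root]
    while frontier:
        node = frontier.pop()
        if keywords[node] & tokens:
            predicted.add(node)
            frontier.extend(ontology[node])
    return predicted

labels = classify_top_down("The protein converts glucose during cell growth")
```

Because descent stops at negative nodes, a false positive deep in the graph requires every ancestor to fire too, which is one way graph structure regularizes multi-label predictions.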
Protein classification based on text document classification techniques.
Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith
2005-03-01
The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to the extreme diversity among their members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) classifiers using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e., short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
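The document-classification analogy can be made concrete: treat each peptide sequence as a "document" of overlapping n-grams and apply multinomial Naive Bayes with Laplace smoothing. The tiny synthetic "families" below are invented motifs, and the chi-square feature-selection step from the paper is omitted for brevity.

```python
import math
from collections import Counter

def ngrams(seq, n=3):
    """Overlapping n-grams of a sequence, e.g. 'MKTA' -> ['MKT', 'KTA']."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

class NgramNaiveBayes:
    """Multinomial Naive Bayes over peptide n-gram counts with Laplace
    smoothing (chi-square feature selection is omitted in this sketch)."""
    def fit(self, seqs, labels, n=3):
        self.n = n
        self.classes = sorted(set(labels))
        self.counts = {c: Counter() for c in self.classes}
        self.priors = Counter(labels)
        for s, y in zip(seqs, labels):
            self.counts[y].update(ngrams(s, n))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, seq):
        grams = ngrams(seq, self.n)
        V = len(self.vocab)
        def log_post(c):
            total = sum(self.counts[c].values())
            lp = math.log(self.priors[c])
            for g in grams:
                lp += math.log((self.counts[c][g] + 1) / (total + V))
            return lp
        return max(self.classes, key=log_post)

# Tiny synthetic "families" with characteristic motifs (illustrative only)
train = [("MKTAYIAKQR", "A"), ("MKTAYLAKQR", "A"),
         ("GGSWWPLVRE", "B"), ("GGSWWPIVRE", "B")]
model = NgramNaiveBayes().fit([s for s, _ in train], [y for _, y in train])
pred = model.predict("MKTAYIAKQQ")
```

Shared n-grams with a family dominate the smoothed log-likelihood, so a query with one mutated residue still matches its family, mirroring how n-gram counts tolerate sequence variation without alignment.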
Enhancing image classification models with multi-modal biomarkers
NASA Astrophysics Data System (ADS)
Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry
2011-03-01
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image
NASA Astrophysics Data System (ADS)
Pirotti, F.; Sunar, F.; Piragnolo, M.
2016-06-01
Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. 
Results from validation of predictions on the whole dataset (full) show the random forests method with the highest values: kappa index ranging from 0.55 to 0.42, respectively, with the most and the fewest pixels used for training. The two neural networks (multi layered perceptron and its ensemble) and the support vector machine (with default radial basis function kernel) methods follow closely with comparable performance.
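The k-fold validation scheme used above can be sketched generically: shuffle the pixels, split into k folds, train on k-1 folds, test on the held-out one, and average the accuracies. The synthetic 4-band pixels and the nearest-centroid classifier below are stand-ins for the Sentinel-2 data and the nine benchmarked methods.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "pixels": 4 spectral bands, 3 land-cover classes
X = np.vstack([rng.normal(m, 0.3, size=(90, 4))
               for m in ([0, 0, 1, 1], [1, 1, 0, 0], [1, 0, 1, 0])])
y = np.repeat(np.arange(3), 90)

def kfold_accuracy(X, y, k=10, seed=0):
    """k-fold cross-validation of a nearest-centroid classifier
    (a stand-in for any of the benchmarked machine learning methods)."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        centroids = np.array([X[train][y[train] == c].mean(axis=0)
                              for c in np.unique(y)])
        d = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        accs.append((d.argmin(axis=1) == y[test]).mean())
    return float(np.mean(accs))

acc = kfold_accuracy(X, y)
```

Averaging over folds gives an accuracy estimate that, unlike validation on the training pixels themselves, is not inflated by memorization, which is why the study reports the train, kfold, and full figures separately.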
Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.
2017-12-01
The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
Accurate Arabic Script Language/Dialect Classification
2014-01-01
Army Research Laboratory technical report ARL-TR-6761 by Stephen C. Tratz, Computational and Information Sciences Directorate, January 2014. Approved for public release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BLs occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-mental state examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment.
Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Mücher, C. A.; Roupioz, L.; Kramer, H.; Bogers, M. M. B.; Jongman, R. H. G.; Lucas, R. M.; Kosmidou, V. E.; Petrou, Z.; Manakos, I.; Padoa-Schioppa, E.; Adamo, M.; Blonda, P.
2015-05-01
A major challenge is to develop a biodiversity observation system that is cost effective and applicable in any geographic region. Measuring and reliably reporting trends and changes in biodiversity requires, among other things, detailed and accurate land cover and habitat maps produced in a standard and comparable way. The objective of this paper is to assess the EODHaM (EO Data for Habitat Mapping) classification results for a Dutch case study. The EODHaM system was developed within the BIO_SOS (The BIOdiversity multi-SOurce monitoring System: from Space TO Species) project and contains the decision rules for each land cover and habitat class based on spectral and height information. One of the main findings is that canopy height models, as derived from LiDAR, in combination with very high resolution satellite imagery provide a powerful input to the EODHaM system for generic land cover and habitat mapping at any location across the globe. The assessment of the EODHaM classification results based on field data showed an overall accuracy of 74% for the land cover classes as described according to the Food and Agricultural Organization (FAO) Land Cover Classification System (LCCS) taxonomy at level 3, while the overall accuracy was lower (69.0%) for the habitat map based on the General Habitat Category (GHC) system for habitat surveillance and monitoring. A GHC habitat class is determined for each mapping unit on the basis of the composition of the individual life forms and height measurements. The classification showed very good results for forest phanerophytes (FPH) when individual life forms were analyzed in terms of their percentage coverage estimates per mapping unit from the LCCS classification and validated with field surveys. Analysis for shrubby chamaephytes (SCH) showed less accurate results, but this might also be due to less accurate field estimates of percentage coverage.
Overall, the EODHaM classification results encouraged us to derive the heights of all vegetated objects in the Netherlands from LiDAR data, in preparation for new habitat classifications.
Chen, Xue; Li, Xiaohui; Yang, Sibo; Yu, Xin; Liu, Aichun
2018-01-01
Lymphoma is a significant cancer that affects the human lymphatic and hematopoietic systems. In this work, discrimination of lymphoma using laser-induced breakdown spectroscopy (LIBS) conducted on whole blood samples is presented. The whole blood samples collected from lymphoma patients and healthy controls are deposited onto standard quantitative filter papers and ablated with a 1064 nm Q-switched Nd:YAG laser. 16 atomic and ionic emission lines of calcium (Ca), iron (Fe), magnesium (Mg), potassium (K) and sodium (Na) are selected to discriminate the disease. Chemometric methods, including principal component analysis (PCA), linear discriminant analysis (LDA) classification, and k nearest neighbor (kNN) classification, are used to build the discrimination models. Both the LDA and kNN models achieved very good discrimination performance for lymphoma, with an accuracy of over 99.7%, a sensitivity of over 0.996, and a specificity of over 0.997. These results demonstrate that the whole-blood-based LIBS technique in combination with chemometric methods can serve as a fast, less invasive, and accurate method for detection and discrimination of human malignancies. PMID:29541503
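A dependency-free sketch of the chemometric pipeline described above, PCA followed by nearest-neighbor classification; the "emission line" intensities, class means, spreads, and number of retained components are assumptions for illustration, not LIBS data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for 16 emission-line intensities: two classes, shifted means.
healthy = rng.normal(1.0, 0.1, (30, 16))
patient = rng.normal(1.4, 0.1, (30, 16))
X = np.vstack([healthy, patient])
y = np.array([0] * 30 + [1] * 30)

# PCA by SVD on mean-centered data; keep the first 3 principal components.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Leave-one-out 1-NN classification in the PCA score space.
correct = 0
for i in range(len(X)):
    d = ((scores - scores[i]) ** 2).sum(1)
    d[i] = np.inf          # exclude the held-out sample itself
    correct += y[d.argmin()] == y[i]
accuracy = correct / len(X)
```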
NASA Technical Reports Server (NTRS)
Spruce, J. P.; Smoot, James; Ellis, Jean; Hilbert, Kent; Swann, Roberta
2012-01-01
This paper discusses the development and implementation of a geospatial data processing method and multi-decadal Landsat time series for computing general coastal U.S. land-use and land-cover (LULC) classifications and change products consisting of seven classes (water, barren, upland herbaceous, non-woody wetland, woody upland, woody wetland, and urban). Use of this approach extends the observational period of the NOAA-generated Coastal Change and Analysis Program (C-CAP) products by almost two decades, assuming the availability of one cloud free Landsat scene from any season for each targeted year. The Mobile Bay region in Alabama was used as a study area to develop, demonstrate, and validate the method, which was applied to derive LULC products for nine dates at approximately five year intervals across a 34-year time span, using a single date of data for each classification, with forests in either leaf-on, leaf-off, or mixed senescent condition. Classifications were computed and refined using decision rules in conjunction with unsupervised classification of Landsat data and C-CAP value-added products. Each classification's overall accuracy was assessed by comparing stratified random locations to available reference data, including higher spatial resolution satellite and aerial imagery, field survey data, and raw Landsat RGBs. Overall classification accuracies ranged from 83 to 91% with overall Kappa statistics ranging from 0.78 to 0.89. The accuracies are comparable to those from similar, generalized LULC products derived from C-CAP data. The Landsat MSS-based LULC product accuracies are similar to those from Landsat TM or ETM+ data. Accurate classifications were computed for all nine dates, yielding effective results regardless of season. This classification method yielded products that were used to compute LULC change products via additive GIS overlay techniques.
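The overall accuracy and Kappa statistics reported above can be computed from a classification error matrix as follows; the 3-class confusion matrix here is invented for illustration, not the Mobile Bay validation data.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference class, columns = mapped class)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                      # observed agreement
    pe = (c.sum(0) * c.sum(1)).sum() / n**2   # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Toy 3-class error matrix for a LULC product.
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 2, 41]]
acc, kappa = overall_accuracy_and_kappa(cm)
```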
Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers
Black, Noel F.; McJames, Scott; Kadrmas, Dan J.
2009-01-01
Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer 62Cu-PTSM (blood flow) + 62Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging 18F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K1, Knet) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K1 and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques. PMID:20046800
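The core idea of separating staggered tracers by their decay differences can be sketched as a linear least-squares fit on decay bases. This toy model uses only physical decay (the standard half-lives of 18F, about 109.8 min, and 62Cu, about 9.7 min) and ignores tracer kinetics, so it is far simpler than the kinetic analysis in the paper; amplitudes, frame times, and noise level are invented.

```python
import numpy as np

# Decay constants (per minute) from the physical half-lives.
lam_f18 = np.log(2) / 109.77    # 18F
lam_cu62 = np.log(2) / 9.67     # 62Cu

t = np.linspace(0, 60, 121)     # 60-min scan sampled every 0.5 min
t0_cu = 30.0                    # 62Cu tracer injected at t = 30 min
basis_f18 = np.exp(-lam_f18 * t)
basis_cu = np.where(t >= t0_cu, np.exp(-lam_cu62 * (t - t0_cu)), 0.0)

# Simulated mixed signal with known amplitudes plus measurement noise.
rng = np.random.default_rng(0)
true_a = np.array([5.0, 3.0])
signal = true_a[0] * basis_f18 + true_a[1] * basis_cu \
         + rng.normal(0, 0.02, t.size)

# Recover the per-tracer amplitudes by least squares on the decay basis.
A = np.column_stack([basis_f18, basis_cu])
est_a, *_ = np.linalg.lstsq(A, signal, rcond=None)
```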
Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim
2015-07-30
Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique and a wide range of time and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88 respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92 respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
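The k-fold cross-validation protocol used to evaluate the classifier can be sketched as below; a nearest-centroid classifier stands in for the SVM so the example stays dependency-free, and the feature data are synthetic.

```python
import numpy as np

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validation with a nearest-centroid classifier
    (a stand-in for the paper's SVM, to keep the sketch dependency-free)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        cents = {c: X[train][y[train] == c].mean(0) for c in np.unique(y)}
        pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
                for x in X[f]]
        accs.append(np.mean(pred == y[f]))
    return float(np.mean(accs))

# Toy two-class "feature" data standing in for EEG/EOG/EMG features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (40, 6)), rng.normal(2, 0.5, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
acc = kfold_accuracy(X, y)
```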
Retinex Preprocessing for Improved Multi-Spectral Image Classification
NASA Technical Reports Server (NTRS)
Thompson, B.; Rahman, Z.; Park, S.
2000-01-01
The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. 
The principal advantage the retinex offers is that, for different lighting conditions, classifications derived from retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original, unpreprocessed images are much less similar.
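A single-scale retinex, the building block of the multi-scale algorithm discussed above, can be sketched as log(image) minus log(Gaussian-blurred surround); the surround scale and the synthetic test scene are illustrative choices.

```python
import numpy as np

def single_scale_retinex(img, sigma=5.0):
    """log of the image minus log of its Gaussian-blurred surround:
    compresses dynamic range and discounts smooth illumination."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    # Separable Gaussian blur (zero-padded at the borders).
    blur = np.apply_along_axis(lambda r: np.convolve(r, g, "same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, g, "same"), 0, blur)
    return np.log1p(img) - np.log1p(blur)

# A flat surface under a linear lighting gradient: away from the borders,
# the retinex output is nearly uniform, i.e. the gradient is discounted.
ramp = np.linspace(20.0, 100.0, 64)
img = np.tile(ramp[:, None], (1, 64))
out = single_scale_retinex(img)
```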
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
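The energy-based steepest-descent localization can be sketched with an inverse-square acoustic propagation model; the sensor layout, source power, fixed step size, and iteration count are assumptions, and the paper's adaptive termination condition is omitted.

```python
import numpy as np

def energy_residual(r, sensors, readings, power):
    """Sum of squared errors for the inverse-square energy model."""
    d2 = ((sensors - r) ** 2).sum(1)
    return ((readings - power / d2) ** 2).sum()

def steepest_descent(sensors, readings, power, r0, step=0.01, iters=2000):
    """Fixed-step steepest descent with a central-difference gradient;
    a minimal stand-in for the paper's adaptive-termination search."""
    r = np.array(r0, dtype=float)
    eps = 1e-5
    for _ in range(iters):
        g = np.zeros(2)
        for k in range(2):
            dr = np.zeros(2)
            dr[k] = eps
            g[k] = (energy_residual(r + dr, sensors, readings, power)
                    - energy_residual(r - dr, sensors, readings, power)) / (2 * eps)
        r -= step * g
    return r

# Four sensors at the corners of a 10 x 10 field; noise-free readings.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([6.0, 4.0])
power = 100.0
readings = power / ((sensors - target) ** 2).sum(1)
est = steepest_descent(sensors, readings, power, r0=[5.0, 5.0])
```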
Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang
2009-01-01
The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China’s first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely-used Artificial Neural Network (ANN) methods are analyzed, together with the comparison of two other methods: Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August in 2007 from three FY-2C channel (IR1, 10.3–11.3 μm; IR2, 11.5–12.5 μm and WV 6.3–7.6 μm) imagery. The result shows that: (1) ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented SOM, one of the best ANN network identified from this study, as an automated cloud classification system for the FY-2C multi-channel data. It shows that SOM method has improved the results greatly not only in pixel-level accuracy but also in cloud patch-level classification by more accurately identifying cloud types such as cumulonimbus, cirrus and clouds in high latitude. Findings of this study suggest that the ANN-based classifiers, in particular the SOM, can be potentially used as an improved Automated Cloud Classification Algorithm to upgrade the current window-based clustering method for the FY-2C operational products. PMID:22346714
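A minimal Self-Organizing Map illustrating the clustering idea above: units self-organize over toy 3-channel "brightness" vectors standing in for the FY-2C channels. The grid size and the learning-rate and neighborhood schedules are common textbook choices, not the operational settings.

```python
import numpy as np

def train_som(X, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: online updates with a Gaussian neighborhood around the
    best-matching unit; learning rate and neighborhood shrink over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(0, 0.1, (h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = ((W - x) ** 2).sum(1).argmin()
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        nb = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma**2))
        W += lr * nb[:, None] * (x - W)
    return W

# Two toy "cloud regimes" in 3-channel feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (100, 3)),
               rng.normal(1.0, 0.1, (100, 3))])
W = train_som(X)
labels = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1).argmin(1)
```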
Exploring cosmic origins with CORE: The instrument
NASA Astrophysics Data System (ADS)
de Bernardis, P.; Ade, P. A. R.; Baselmans, J. J. A.; Battistelli, E. S.; Benoit, A.; Bersanelli, M.; Bideaud, A.; Calvo, M.; Casas, F. J.; Castellano, M. G.; Catalano, A.; Charles, I.; Colantoni, I.; Columbro, F.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; Delabrouille, J.; Doyle, S.; Franceschet, C.; Gomez, A.; Goupy, J.; Hanany, S.; Hills, M.; Lamagna, L.; Macias-Perez, J.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; McCarthy, D.; Mennella, A.; Monfardini, A.; Noviello, F.; Paiella, A.; Piacentini, F.; Piat, M.; Pisano, G.; Signorelli, G.; Tan, C. Y.; Tartari, A.; Trappe, N.; Triqueneaux, S.; Tucker, C.; Vermeulen, G.; Young, K.; Zannoni, M.; Achúcarro, A.; Allison, R.; Artall, E.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartlett, J.; Bartolo, N.; Basak, S.; Bonaldi, A.; Bonato, M.; Borrill, J.; Bouchet, F.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z. Y.; Carvalho, C. S.; Challinor, A.; Chluba, J.; Clesse, S.; De Gasperis, G.; De Zotti, G.; Di Valentino, E.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Génova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Hagstotz, S.; Greenslade, J.; Handley, W.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hivon, E.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lindholm, V.; Luzzi, G.; Martins, C. J. A. P.; Matarrese, S.; Melchiorri, A.; Melin, J. B.; Molinari, D.; Natoli, P.; Negrello, M.; Notari, A.; Paoletti, D.; Patanchon, G.; Polastri, L.; Polenta, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rubiño-Martín, J. A.; Salvati, L.; Tomasi, M.; Tramonte, D.; Trombetti, T.; Väliviita, J.; Van de Weyjgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.
2018-04-01
We describe a space-borne, multi-band, multi-beam polarimeter aiming at a precise and accurate measurement of the polarization of the Cosmic Microwave Background. The instrument is optimized to be compatible with the strict budget requirements of a medium-size space mission within the Cosmic Vision Programme of the European Space Agency. The instrument has no moving parts, and uses arrays of diffraction-limited Kinetic Inductance Detectors to cover the frequency range from 60 GHz to 600 GHz in 19 wide bands, in the focal plane of a 1.2 m aperture telescope cooled at 40 K, allowing for an accurate extraction of the CMB signal from polarized foreground emission. The projected CMB polarization survey sensitivity of this instrument, after foreground removal, is 1.7 μK·arcmin. The design is robust enough to allow, if needed, a downscoped version of the instrument covering the 100 GHz to 600 GHz range with a 0.8 m aperture telescope cooled at 85 K, with a projected CMB polarization survey sensitivity of 3.2 μK·arcmin.
NASA Astrophysics Data System (ADS)
Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.
2015-12-01
Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches for use of multi-scale and multi-source remote sensing data as well as ground based measurements for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in non-homogeneous savannah regions of Tanzania, this is an important concern and the traditional pixel based spectral signature approach often fails. Ortho-rectified, calibrated, pan sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor was used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super and sub objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches for classification were tested. The kappa index of agreement for both algorithms exceeded 85%. The results demonstrate that using hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS level-3 legend.
Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T.
Citak-Er, Fusun; Firat, Zeynep; Kovanlikaya, Ilhami; Ture, Ugur; Ozturk-Isik, Esin
2018-06-15
The objective of this study was to assess the contribution of multi-parametric (mp) magnetic resonance imaging (MRI) quantitative features in the machine learning-based grading of gliomas with a multi-region-of-interest approach. Forty-three patients who were newly diagnosed as having a glioma were included in this study. The patients were scanned prior to any therapy using a standard brain tumor magnetic resonance (MR) imaging protocol that included T1 and T2-weighted, diffusion-weighted, diffusion tensor, MR perfusion and MR spectroscopic imaging. Three different regions-of-interest were drawn for each subject to encompass tumor, immediate tumor periphery, and distant peritumoral edema/normal. The normalized mp-MRI features were used to build machine-learning models for differentiating low-grade gliomas (WHO grades I and II) from high-grade gliomas (WHO grades III and IV). In order to assess the contribution of regional mp-MRI quantitative features to the classification models, a support vector machine-based recursive feature elimination method was applied prior to classification. A machine-learning model based on a support vector machine algorithm with linear kernel achieved an accuracy of 93.0%, a specificity of 86.7%, and a sensitivity of 96.4% for the grading of gliomas using ten-fold cross validation based on the proposed subset of the mp-MRI features. In this study, machine-learning based on multiregional and multi-parametric MRI data has proven to be an important tool in grading glial tumors accurately even in this limited patient population. Future studies are needed to investigate the use of machine learning algorithms for brain tumor classification in a larger patient cohort. Copyright © 2018. Published by Elsevier Ltd.
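The recursive feature elimination step can be sketched as follows; a ridge linear model stands in for the linear SVM to keep the example dependency-free, and the synthetic data are built so that only the first three features are informative.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly refit a ridge linear model
    (a stand-in for the paper's linear SVM) and drop the feature whose
    weight has the smallest magnitude, until n_keep features remain."""
    active = list(range(X.shape[1]))
    yy = np.where(y == 1, 1.0, -1.0)
    while len(active) > n_keep:
        A = X[:, active]
        w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(len(active)), A.T @ yy)
        active.pop(int(np.abs(w).argmin()))
    return active

# Synthetic data: 3 informative features (they define the label), 7 noise.
rng = np.random.default_rng(0)
n = 80
informative = rng.normal(0, 1, (n, 3))
noise = rng.normal(0, 1, (n, 7))
y = (informative.sum(1) > 0).astype(int)
X = np.hstack([informative, noise])
kept = rfe(X, y, n_keep=3)
```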
New decision support tool for acute lymphoblastic leukemia classification
NASA Astrophysics Data System (ADS)
Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.
2012-03-01
In this paper, we build a new decision support tool to improve treatment intensity choice in childhood ALL. The developed system includes different methods to accurately measure cell properties in microscopic blood film images. The blood images are subjected to a series of pre-processing steps, which include color correlation and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
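The K-means nucleus-extraction step can be sketched on a synthetic gray-level patch (the paper clusters color images); the intensity levels chosen for nucleus, cytoplasm, and background are invented for illustration.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """1-D k-means on pixel intensities, initialized on an even grid;
    a minimal sketch of the nucleus segmentation step."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(1)
        centers = np.array([values[labels == j].mean()
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Synthetic blood-smear patch: background (~220), a cytoplasm square
# (~120), and a 10 x 10 dark nucleus (~30), all with mild noise.
rng = np.random.default_rng(0)
img = np.full((64, 64), 220.0) + rng.normal(0, 5, (64, 64))
img[20:40, 20:40] = 120 + rng.normal(0, 5, (20, 20))
img[25:35, 25:35] = 30 + rng.normal(0, 5, (10, 10))

labels, centers = kmeans_1d(img.ravel(), k=3)
nucleus_mask = (labels == centers.argmin()).reshape(64, 64)
```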
Three-Way Analysis of Spectrospatial Electromyography Data: Classification and Interpretation
Kauppi, Jukka-Pekka; Hahne, Janne; Müller, Klaus-Robert; Hyvärinen, Aapo
2015-01-01
Classifying multivariate electromyography (EMG) data is an important problem in prosthesis control as well as in neurophysiological studies and diagnosis. With modern high-density EMG sensor technology, it is possible to capture the rich spectrospatial structure of the myoelectric activity. We hypothesize that multi-way machine learning methods can efficiently utilize this structure in classification as well as reveal interesting patterns in it. To this end, we investigate the suitability of existing three-way classification methods to EMG-based hand movement classification in the spectrospatial domain, as well as extend these methods by sparsification and regularization. We propose to use Fourier-domain independent component analysis as preprocessing to improve classification and interpretability of the results. In high-density EMG experiments on hand movements across 10 subjects, three-way classification yielded higher average performance compared with state-of-the-art classification based on temporal features, suggesting that the three-way analysis approach can efficiently utilize detailed spectrospatial information of high-density EMG. Phase and amplitude patterns of features selected by the classifier in finger-movement data were found to be consistent with known physiology. Thus, our approach can accurately resolve hand and finger movements on the basis of detailed spectrospatial information, and at the same time allows for physiological interpretation of the results. PMID:26039100
Kraken: ultrafast metagenomic sequence classification using exact alignments
2014-01-01
Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/. PMID:24580807
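Kraken's exact k-mer lookup can be sketched as below; the lowest-common-ancestor taxonomy walk is replaced by a simple majority vote over k-mer hits, and the "genomes" and k are toy values.

```python
from collections import Counter

def build_db(genomes, k=8):
    """Index every k-mer of each reference sequence under its taxon label
    (real Kraken stores the LCA of all genomes containing the k-mer)."""
    db = {}
    for taxon, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            db[seq[i:i + k]] = taxon
    return db

def classify_read(read, db, k=8):
    """Look up each k-mer of the read and report the taxon with the most
    hits; a majority vote standing in for Kraken's taxonomy-tree walk."""
    hits = Counter()
    for i in range(len(read) - k + 1):
        taxon = db.get(read[i:i + k])
        if taxon is not None:
            hits[taxon] += 1
    return hits.most_common(1)[0][0] if hits else "unclassified"

# Toy references and a 20-bp read drawn from taxonA.
genomes = {"taxonA": "ACGTACGGTTCAGCCTTAGACGTTACGGA",
           "taxonB": "TTGGCCAATTGGCGCGATATCCGGAATTC"}
db = build_db(genomes)
read = genomes["taxonA"][5:25]
```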
Descriptive Statistics of the Genome: Phylogenetic Classification of Viruses.
Hernandez, Troy; Yang, Jie
2016-10-01
The typical process for classifying and submitting a newly sequenced virus to the NCBI database involves two steps. First, a BLAST search is performed to determine likely family candidates. That is followed by checking the candidate families with the pairwise sequence alignment tool for similar species. The submitter's judgment is then used to determine the most likely species classification. The aim of this article is to show that this process can be automated into a fast, accurate, one-step process using the proposed alignment-free method and properly implemented machine learning techniques. We present a new family of alignment-free vectorizations of the genome, the generalized vector, that maintains the speed of existing alignment-free methods while outperforming all available methods. This new alignment-free vectorization uses the frequency of genomic words (k-mers), as is done in the composition vector, and incorporates descriptive statistics of those k-mers' positional information, as inspired by the natural vector. We analyze five different characterizations of genome similarity using k-nearest neighbor classification and evaluate these on two collections of viruses totaling over 10,000 viruses. We show that our proposed method performs better than, or as well as, other methods at every level of the phylogenetic hierarchy. The data and R code are available upon request.
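The generalized-vector idea, k-mer frequencies plus descriptive statistics of k-mer positions, can be sketched with a 1-nearest-neighbor classifier; the sequence lengths, mutation counts, and k below are illustrative, and the two "families" are generated by mutating two random base sequences.

```python
import itertools
import numpy as np

def generalized_vector(seq, k=2):
    """Per k-mer: frequency, plus mean and std of its normalized positions,
    i.e. the descriptive statistics added on top of the composition vector."""
    feats = []
    for p in itertools.product("ACGT", repeat=k):
        km = "".join(p)
        pos = [i for i in range(len(seq) - k + 1) if seq[i:i + k] == km]
        feats.append(len(pos) / (len(seq) - k + 1))
        feats.append(np.mean(pos) / len(seq) if pos else 0.0)
        feats.append(np.std(pos) / len(seq) if pos else 0.0)
    return np.array(feats)

def mutate(seq, n_mut, rng):
    """Point-mutate n_mut random positions (a toy model of divergence)."""
    s = list(seq)
    for i in rng.choice(len(s), n_mut, replace=False):
        s[i] = rng.choice(list("ACGT"))
    return "".join(s)

rng = np.random.default_rng(0)
baseA = "".join(rng.choice(list("ACGT"), 300))
baseB = "".join(rng.choice(list("ACGT"), 300))
train = [(mutate(baseA, 8, rng), "A") for _ in range(5)] + \
        [(mutate(baseB, 8, rng), "B") for _ in range(5)]

# 1-nearest-neighbor classification of a new "A-family" sequence.
query = mutate(baseA, 8, rng)
vecs = np.array([generalized_vector(s) for s, _ in train])
qv = generalized_vector(query)
label = train[int(((vecs - qv) ** 2).sum(1).argmin())][1]
```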
Holi, Matti Mikael; Pelkonen, Mirjami; Karlsson, Linnea; Tuisku, Virpi; Kiviruusu, Olli; Ruuttu, Titta; Marttunen, Mauri
2008-12-31
Accurate assessment of suicidality is of major importance. We aimed to evaluate trained clinicians' ability to assess suicidality against a structured assessment made by trained raters. Treating clinicians classified 218 adolescent psychiatric outpatients suffering from a depressive mood disorder into three classes: (1) no suicidal ideation; (2) suicidal ideation without suicidal acts; (3) suicidal or self-harming acts. This classification was compared with a classification of identical content derived from the Kiddie Schedule for Affective Disorders and Schizophrenia (K-SADS-PL) made by trained raters. Convergence was assessed with kappa and weighted kappa statistics. The clinicians' classification agreed with the K-SADS evaluation in 85% of cases for class 1 (no suicidal ideation), 50% for class 2 (suicidal ideation), and 10% for class 3 (suicidal acts) (χ2 = 37.1, df = 4, p < 0.001). The weighted kappa for the agreement of the measures was 0.335 (CI = 0.198-0.471, p < 0.0001). The clinicians under-detected suicidal and self-harm acts but over-detected suicidal ideation. There was only modest agreement between the trained clinicians' suicidality evaluation and the K-SADS evaluation, especially concerning suicidal or self-harming acts. We suggest wider use of structured scales in clinical and research settings to improve the reliable detection of adolescents with suicidality.
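Weighted kappa, which penalizes disagreements by how far apart the ordinal classes are, can be computed as follows; the 3x3 clinician-vs-K-SADS agreement table below is invented for illustration, not the study's data.

```python
import numpy as np

def weighted_kappa(conf, weights="linear"):
    """Weighted kappa for ordinal categories: disagreements are weighted
    by the (linear or quadratic) distance between the two class labels."""
    c = np.asarray(conf, float)
    n = c.sum()
    k = c.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    expected = np.outer(c.sum(1), c.sum(0)) / n   # chance agreement table
    return 1 - (w * c).sum() / (w * expected).sum()

# Toy 3-class rating table (rows: clinician, cols: structured interview).
conf = [[60, 20, 5],
        [25, 40, 15],
        [5, 20, 28]]
kappa_w = weighted_kappa(conf)
```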
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei
2018-04-01
Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting these objects is a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks that combine deep-learning-based region proposal generation and object detection have greatly improved multi-class geospatial object detection for HSR remote sensing imagery. However, because of the translation invariance introduced by the convolution operations in a convolutional neural network (CNN), the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. This dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and it causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the integrated region proposal generation and object detection framework for HSR remote sensing imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, to resolve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage.
In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.
FEELnc: a tool for long non-coding RNA annotation and its application to the dog transcriptome.
Wucher, Valentin; Legeai, Fabrice; Hédan, Benoît; Rizk, Guillaume; Lagoutte, Lætitia; Leeb, Tosso; Jagannathan, Vidhya; Cadieu, Edouard; David, Audrey; Lohi, Hannes; Cirera, Susanna; Fredholm, Merete; Botherel, Nadine; Leegwater, Peter A J; Le Béguec, Céline; Fieten, Hille; Johnson, Jeremy; Alföldi, Jessica; André, Catherine; Lindblad-Toh, Kerstin; Hitte, Christophe; Derrien, Thomas
2017-05-05
Whole transcriptome sequencing (RNA-seq) has become a standard for cataloguing and monitoring RNA populations. One of the main bottlenecks, however, is to correctly identify the different classes of RNAs among the plethora of reconstructed transcripts, particularly those that will be translated (mRNAs) from the class of long non-coding RNAs (lncRNAs). Here, we present FEELnc (FlExible Extraction of LncRNAs), an alignment-free program that accurately annotates lncRNAs based on a Random Forest model trained with general features such as multi k-mer frequencies and relaxed open reading frames. Benchmarking versus five state-of-the-art tools shows that FEELnc achieves similar or better classification performance on GENCODE and NONCODE data sets. The program also provides specific modules that enable the user to fine-tune classification accuracy, to formalize the annotation of lncRNA classes and to identify lncRNAs even in the absence of a training set of non-coding RNAs. We used FEELnc on a real data set comprising 20 canine RNA-seq samples produced by the European LUPA consortium to substantially expand the canine genome annotation to include 10 374 novel lncRNAs and 58 640 mRNA transcripts. FEELnc moves beyond conventional coding potential classifiers by providing a standardized and complete solution for annotating lncRNAs and is freely available at https://github.com/tderrien/FEELnc. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
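One of the features named above, the relaxed open reading frame, can be sketched as below. "Relaxed" here is taken to mean an ORF may run to the sequence end without a stop codon; FEELnc itself defines several ORF categories, so treat this simplification as an assumption rather than the tool's exact logic.

```python
def longest_orf_len(seq, relaxed=True):
    """Longest open reading frame (in nt) across the three forward frames."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        start = None
        i = frame
        while i + 3 <= len(seq):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                          # first start opens an ORF
            elif codon in stops and start is not None:
                best = max(best, i + 3 - start)    # ORF closed by a stop
                start = None
            i += 3
        if relaxed and start is not None:
            best = max(best, i - start)            # ORF without a stop codon
    return best
```

A long ORF relative to transcript length is evidence of coding potential; lncRNAs tend to have short or absent ORFs.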
A Novel Multi-Class Ensemble Model for Classifying Imbalanced Biomedical Datasets
NASA Astrophysics Data System (ADS)
Bikku, Thulasi; Sambasiva Rao, N., Dr; Rao, Akepogu Ananda, Dr
2017-08-01
This paper mainly focuses on developing a Hadoop-based framework with feature selection and classification models to classify high-dimensional data in heterogeneous biomedical databases. Extensive research has been performed in the fields of machine learning, big data and data mining for identifying patterns. The main challenge is extracting useful features generated from diverse biological systems. The proposed model can be used for predicting diseases in various applications and identifying the features relevant to particular diseases. Given the exponential growth of biomedical repositories such as PubMed and Medline, an accurate predictive model is essential for knowledge discovery in a Hadoop environment. Extracting key features from unstructured documents often leads to uncertain results due to outliers and missing values. In this paper, we propose a two-phase map-reduce framework with a text preprocessor and a classification model. In the first phase, a mapper-based preprocessing method was designed to eliminate irrelevant features, missing values and outliers from the biomedical data. In the second phase, a Map-Reduce based multi-class ensemble decision tree model was designed and applied to the preprocessed mapper data to improve the true positive rate and computational time. The experimental results on complex biomedical datasets show that the performance of our proposed Hadoop-based multi-class ensemble model significantly outperforms state-of-the-art baselines.
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
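The classifier-chain decomposition that the mixture components above are built on, P(Y1, ..., Yd | X) = P(Y1 | X) · P(Y2 | X, Y1) · ... · P(Yd | X, Y1, ..., Yd-1), can be verified numerically on a toy joint distribution. The probabilities below are hypothetical numbers chosen for illustration, not data from the paper.

```python
from itertools import product

# Toy joint over two binary labels given a fixed input x (hypothetical values)
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def p_y1(y1):
    """Marginal P(y1 | x)."""
    return sum(p for (a, b), p in joint.items() if a == y1)

def p_y2_given_y1(y2, y1):
    """Conditional P(y2 | x, y1)."""
    return joint[(y1, y2)] / p_y1(y1)

# Chain rule used by classifier chains: P(y1, y2 | x) = P(y1 | x) P(y2 | x, y1)
for y1, y2 in product((0, 1), repeat=2):
    assert abs(joint[(y1, y2)] - p_y1(y1) * p_y2_given_y1(y2, y1)) < 1e-12
```

A classifier chain learns one model per factor; the mixtures-of-experts framework then combines several such chains with different label orders.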
Data Mining Algorithms for Classification of Complex Biomedical Data
ERIC Educational Resources Information Center
Lan, Liang
2012-01-01
In my dissertation, I will present my research, which contributes to solving the following three open problems in biomedical informatics: (1) Multi-task approaches for microarray classification; (2) Multi-label classification of gene and protein prediction from multi-source biological data; (3) Spatial scan for movement data. In microarray…
A comparative study of machine learning models for ethnicity classification
NASA Astrophysics Data System (ADS)
Trivedi, Advait; Bessie Amali, D. Geraldine
2017-11-01
This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic classifier provides a more accurate classification than the support vector machine.
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new horizontal crown-cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic feature-scale selection strategy on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to obtain morphological multi-scale features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with the MRCD treetops. This method solves the false-detection problems of CHM side-surfaces extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. The MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
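The "treetop as a local altitudinal maximum" hypothesis above can be illustrated with a toy detector on a small CHM grid. This is a far simpler stand-in for the paper's multi-scale MRCD pipeline; the grid values, the minimum-height threshold, and the 8-neighbour rule are all assumptions for illustration.

```python
def detect_treetops(chm, min_height=2.0):
    """Return (row, col) cells of a CHM grid that are strict local maxima."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue  # too low to be a tree
            neighbors = [chm[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(h > nb for nb in neighbors):
                tops.append((r, c))
    return tops

chm = [[0, 0, 0, 0],
       [0, 5, 0, 0],
       [0, 0, 0, 7],
       [0, 0, 0, 0]]
tops = detect_treetops(chm)
```

Detected tops would then seed a marker-controlled watershed segmentation of the crowns, as the abstract describes.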
Shadan, Aidil Fahmi; Mahat, Naji A; Wan Ibrahim, Wan Aini; Ariffin, Zaiton; Ismail, Dzulkiflee
2018-01-01
As consumption of stingless bee honey has been gaining popularity in many countries including Malaysia, the ability to accurately identify its geographical origin is pertinent to investigating fraudulent activities and protecting consumers. Because a chemical signature can be location-specific, multi-element distribution patterns may prove useful for provenancing such a product. Using inductively coupled plasma-optical emission spectrometry as well as principal component analysis (PCA) and linear discriminant analysis (LDA), the distributions of multiple elements in stingless bee honey collected at four different geographical locations (North, West, East, and South) in Johor, Malaysia, were investigated. While cross-validation using PCA demonstrated an 87.0% correct classification rate, this improved to 96.2% with the use of LDA, indicating that discrimination among the different geographical regions was possible. Therefore, the use of multi-element analysis coupled with chemometric techniques for assigning the provenance of stingless bee honeys in forensic applications is supported. © 2017 American Academy of Forensic Sciences.
Wilkerson, Richard C; Linton, Yvonne-Marie; Fonseca, Dina M; Schultz, Ted R; Price, Dana C; Strickman, Daniel A
2015-01-01
The tribe Aedini (Family Culicidae) contains approximately one-quarter of the known species of mosquitoes, including vectors of deadly or debilitating disease agents. This tribe contains the genus Aedes, which is one of the three most familiar genera of mosquitoes. During the past decade, Aedini has been the focus of a series of extensive morphology-based phylogenetic studies published by Reinert, Harbach, and Kitching (RH&K). Those authors created 74 new, elevated or resurrected genera from what had been the single genus Aedes, almost tripling the number of genera in the entire family Culicidae. The proposed classification is based on subjective assessments of the "number and nature of the characters that support the branches" subtending particular monophyletic groups in the results of cladistic analyses of a large set of morphological characters of representative species. To gauge the stability of RH&K's generic groupings we reanalyzed their data with unweighted parsimony jackknife and maximum-parsimony analyses, with and without ordering 14 of the characters as in RH&K. We found that their phylogeny was largely weakly supported and their taxonomic rankings failed priority and other useful taxon-naming criteria. Consequently, we propose simplified aedine generic designations that 1) restore a classification system that is useful for the operational community; 2) enhance the ability of taxonomists to accurately place new species into genera; 3) maintain the progress toward a natural classification based on monophyletic groups of species; and 4) correct the current classification system that is subject to instability as new species are described and existing species more thoroughly defined. We do not challenge the phylogenetic hypotheses generated by the above-mentioned series of morphological studies. 
However, we reduce the ranks of the genera and subgenera of RH&K to subgenera or informal species groups, respectively, to preserve stability as new data become available.
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
Zhao, Zhizhen; Singer, Amit
2014-01-01
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
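The key property exploited above, that the bispectrum is invariant to the phase shifts induced by rotation, can be demonstrated in the 1D analogue: circularly shifting a signal phase-shifts its Fourier coefficients, yet the bispectrum B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)) is unchanged. The signal values below are arbitrary; this is a numerical sanity check, not the paper's steerable-PCA pipeline.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def bispectrum(x, k1, k2):
    """B(k1, k2) = F(k1) F(k2) conj(F((k1 + k2) mod N)).

    A circular shift multiplies F(k) by exp(2*pi*i*k*s/N); the three
    phase factors cancel, so B is shift-invariant.
    """
    f = dft(x)
    n = len(x)
    return f[k1] * f[k2] * f[(k1 + k2) % n].conjugate()

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0]
shifted = x[2:] + x[:2]          # circular shift by 2 samples
b1 = bispectrum(x, 1, 2)
b2 = bispectrum(shifted, 1, 2)
```

For 2D cryo-EM images, rotation plays the role of the shift once the image is expanded in a steerable basis.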
Automatic Classification of Time-variable X-Ray Sources
NASA Astrophysics Data System (ADS)
Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.
2014-05-01
To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ~97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7-500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
Multi-wavelength emissivity measurement of stainless steel substrate
NASA Astrophysics Data System (ADS)
Zhang, Y. F. F.; Dai, J. M. M.; Zhang, L.; Pan, W. D. D.
2013-01-01
Emissivity is a key parameter for measuring the surface temperature of materials in radiation thermometry. In this paper, the surface emissivity of metallic substrates is measured by the multi-wavelength emissivity measurement apparatus developed by the Harbin Institute of Technology (HIT). The measuring principle of this apparatus is based on energy comparison. Several radiation thermometers, whose emissivity coefficients are corrected by the emissivity measured with this apparatus, are used to measure the surface temperature of stainless steel substrates. The temperature values measured by radiation thermometry are compared to those measured by contact thermometry. The relative error between the two methods is less than 2% at temperatures from 700 K to 1300 K, which suggests that the emissivity of the stainless steel substrate measured by the multi-wavelength emissivity measurement apparatus is accurate and reliable. Emissivity measurements performed with this apparatus have an uncertainty of 5.9% (coverage factor = 2).
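The emissivity correction behind a radiation-thermometry measurement like the one above can be sketched with the standard Wien-approximation formula, 1/T_true = 1/T_rad + (λ/c2) ln ε. This is a textbook relation, not the HIT apparatus's actual algorithm, and the example values are hypothetical.

```python
import math

C2 = 0.014388  # second radiation constant, m*K

def true_temperature(t_rad, emissivity, wavelength):
    """Correct a radiance temperature for surface emissivity (Wien approx.).

    t_rad: radiance temperature in K read by a thermometer assuming eps = 1;
    wavelength in metres. Since ln(eps) < 0 for eps < 1, the true surface
    temperature is higher than the raw reading.
    """
    inv_t = 1.0 / t_rad + (wavelength / C2) * math.log(emissivity)
    return 1.0 / inv_t

# A grey surface (eps = 0.7) read as 1000 K at 1.0 um is really ~1025 K
t = true_temperature(1000.0, 0.7, 1.0e-6)
```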
NASA Astrophysics Data System (ADS)
Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell
2017-03-01
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
Quantitative falls risk estimation through multi-sensor assessment of standing balance.
Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A
2012-12-01
Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities--a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back--from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96-61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
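Interval estimates like "71.52% (95% CI: 68.82-74.28)" above can be sketched with a normal-approximation confidence interval on a proportion. The study likely computed its CIs across cross-validation repetitions rather than from a single binomial count, so the counts below are hypothetical and the method is a generic stand-in.

```python
import math

def accuracy_ci(correct, n, z=1.96):
    """Normal-approximation 95% CI for a classification accuracy.

    correct: number of correctly classified cases out of n.
    Returns (lower, upper) bounds on the accuracy proportion.
    """
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# e.g. 86 of 120 participants correctly classified (hypothetical counts)
lo, hi = accuracy_ci(86, 120)
```

For proportions near 0 or 1, or small n, a Wilson or bootstrap interval is preferable to this simple approximation.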
NASA Astrophysics Data System (ADS)
Ito, Reika; Yoshidome, Takashi
2018-01-01
Markov state models (MSMs) are a powerful approach for analyzing the long-time behavior of protein motion using molecular dynamics simulation data. However, their quantitative performance with respect to physical quantities is poor. We believe that this poor performance is caused by the failure to appropriately classify protein conformations into states when constructing MSMs. Herein, we show that the quantitative performance on an order parameter is improved when a manifold-learning technique is employed for the classification in the MSM, whereas MSM construction using the K-center method, which has previously been used for this classification, shows poor quantitative performance.
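Once conformations are grouped into discrete states (the step the abstract is concerned with), the core MSM estimation is a row-normalized transition count matrix, which can be sketched as below. The state sequence and lag time are illustrative.

```python
def transition_matrix(states, n_states, lag=1):
    """Estimate a row-stochastic transition matrix from a state sequence.

    states: sequence of integer state labels from a discretized trajectory;
    lag: lag time in frames. Rows with no observed transitions are left
    as all-zero.
    """
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[lag:]):
        counts[a][b] += 1
    T = []
    for row in counts:
        s = sum(row)
        T.append([c / s if s else 0.0 for c in row])
    return T

T = transition_matrix([0, 0, 1, 1, 0, 1], 2)
```

Long-time behavior is then read off from powers or eigenvalues of T, which is why the quality of the state classification feeds directly into quantitative accuracy.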
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
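The "color moments" feature named above commonly means the first three statistical moments per channel, as sketched below. The authors' exact normalization is not stated, so this definition is an assumption.

```python
import math

def color_moments(pixels):
    """First three color moments (mean, std dev, skewness) per channel.

    pixels: list of per-pixel channel tuples, e.g. (R, G, B).
    Skewness is taken as the signed cube root of the third central moment.
    """
    n_ch = len(pixels[0])
    n = len(pixels)
    feats = []
    for c in range(n_ch):
        vals = [p[c] for p in pixels]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        m3 = sum((v - mean) ** 3 for v in vals) / n
        skew = math.copysign(abs(m3) ** (1.0 / 3.0), m3)  # signed cube root
        feats.extend([mean, math.sqrt(var), skew])
    return feats

feats = color_moments([(255, 0, 0), (250, 5, 5), (245, 10, 0)])
```

Stacked with Gabor texture responses, such vectors form the dictionary columns for the sparse-representation classifier the abstract describes.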
NASA Astrophysics Data System (ADS)
Shyu, Mei-Ling; Sainani, Varsha
The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data while not placing a significant added load on the monitoring systems and networks. This requires good data mining strategies that take less time and give accurate results. In this study, a novel data mining assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.
Classification of forest land attributes using multi-source remotely sensed data
NASA Astrophysics Data System (ADS)
Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri
2016-02-01
The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and (2) identify the best-performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agriculture Organization of the United Nations) classes, main type, site type, peat land type and drainage status. Of special interest was testing different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of LU/LC classes was highly accurate (kappa values 0.90 and 0.91), but the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. The best classification models for forest site types used both spectral metrics from satellite data and point cloud metrics from ALS. In turn, in the classification of peat land types, ALS point cloud metrics played the most important role. Results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.
Immune Centroids Over-Sampling Method for Multi-Class Classification
2015-05-22
recognize specific antigens. The response of a receptor to an antigen can activate its hosting B-cell. The activated B-cell then proliferates and... modifying N. K. Jerne's theory. The theory states that in a pre-existing group of lymphocytes (specifically B cells), a specific antigen only... the clusters of each small class, which have high data density, called global immune centroids over-sampling (denoted as Global-IC). Specifically
Sequential Adaptive Multi-Modality Target Detection and Classification Using Physics Based Models
2006-09-01
estimation," R. Raghuram, R. Raich and A.O. Hero, IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Toulouse, France, June 2006, <http...can then be solved using off-the-shelf classifiers such as radial basis functions, SVM, or kNN classifier structures. When applied to mine detection we...stage waveform selection for adaptive resource constrained state estimation," 2006 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing
A proposal for the annotation of recurrent colorectal cancer: the 'Sheffield classification'.
Majeed, A W; Shorthouse, A J; Blakeborough, A; Bird, N C
2011-11-01
Current classification systems of large bowel cancer only refer to metastatic disease as M0, M1 or Mx. Recurrent colorectal cancer primarily occurs in the liver, lungs, nodes or peritoneum. The management of each of these sites of recurrence has made significant advances and each is a subspecialty in its own right. The aim of this paper was to devise a classification system which accurately describes the site and extent of metastatic spread. An amendment of the current system is proposed in which liver, lung and peritoneal metastases are annotated by 'Liv 0,1', 'Pul 0,1' and 'Per 0,1' in describing the primary presentation. These are then subclassified, taking into account the chronology, size, number and geographical distribution of metastatic disease or locoregional recurrence and its K-Ras status. This discussion document proposes a classification system which is logical and simple to use. We plan to validate it prospectively. © 2011 The Authors. Colorectal Disease © 2011 The Association of Coloproctology of Great Britain and Ireland.
Pathological brain detection based on wavelet entropy and Hu moment invariants.
Zhang, Yudong; Wang, Shuihua; Sun, Ping; Phillips, Preetha
2015-01-01
With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) method to detect pathological brains from normal brains obtained by magnetic resonance imaging (MRI) scanning. The problem remains a challenge for technicians and clinicians, since MR imaging generates an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The results of 10 runs of k-fold stratified cross-validation showed that the proposed "WE + HMI + GEPSVM + RBF" method was superior to existing methods with respect to classification accuracy. It obtained average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
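The wavelet entropy feature used above can be sketched as the Shannon entropy of the relative wavelet energy across sub-bands. The Haar wavelet is used here only because it is the simplest; the abstract does not state which wavelet family the authors chose.

```python
import math

def haar_step(x):
    """One level of the Haar transform: approximation and detail halves."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wavelet_entropy(x, levels=3):
    """Shannon entropy of the relative wavelet energy distribution.

    len(x) must be divisible by 2**levels. Energies are collected from
    the detail band at each level plus the final approximation band.
    """
    energies = []
    a = x
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(sum(v * v for v in d))
    energies.append(sum(v * v for v in a))
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

we = wavelet_entropy([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```

A constant signal concentrates all energy in one band and gives zero entropy; signals with energy spread across scales give larger values.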
Remote sensing of Earth terrain
NASA Technical Reports Server (NTRS)
Kong, Jin AU; Shin, Robert T.; Nghiem, Son V.; Yueh, Herng-Aung; Han, Hsiu C.; Lim, Harold H.; Arnold, David V.
1990-01-01
Remote sensing of earth terrain is examined. The layered random medium model is used to investigate the fully polarimetric scattering of electromagnetic waves from vegetation. The model is used to interpret the measured data for vegetation fields such as rice, wheat, or soybean over water or soil. Accurate calibration of polarimetric radar systems is essential for the polarimetric remote sensing of earth terrain. A polarimetric calibration algorithm using three arbitrary in-scene reflectors is developed. In the interpretation of active and passive microwave remote sensing data from the earth terrain, the random medium model was shown to be quite successful. A multivariate K-distribution is proposed to model the statistics of fully polarimetric radar returns from earth terrain. In the terrain cover classification using the synthetic aperture radar (SAR) images, the applications of the K-distribution model will provide better performance than the conventional Gaussian classifiers. The layered random medium model is used to study the polarimetric response of sea ice. Supervised and unsupervised classification procedures are also developed and applied to synthetic aperture radar polarimetric images in order to identify their various earth terrain components for more than two classes. These classification procedures were applied to San Francisco Bay and Traverse City SAR images.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients and relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset was used to validate the proposed method on two classes of signals: (1) EEG recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), and naïve Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for the detail coefficients (D5), derived from the 3.90–7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11–89.63% and 91.60–81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy. PMID:29209190
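The relative wavelet energy feature described in the abstract above can be sketched in a few lines. This is an illustrative toy, assuming a Haar wavelet and a short 1-D signal (the study used Daubechies wavelets on 128-channel EEG); it is not the authors' code:

```python
import math

def haar_dwt(signal):
    """One level of the Haar wavelet transform: returns (approx, detail)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return approx, detail

def relative_wavelet_energy(signal, levels=3):
    """Decompose `signal` and return each sub-band's share of the total energy
    (detail bands D1..Dlevels, then the final approximation band)."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in approx))  # final approximation band
    total = sum(energies)
    return [e / total for e in energies]
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, so the relative energies sum to one.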
Rajagopal, Rekha; Ranganathan, Vidhyapriya
2018-06-05
Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor classifier (KNN) and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57: 1998 classification standard. Level 1 of the proposed model involves classification using the KNN and the classifier is trained with examples from all classes. Level 2 involves classification using an SVM and is trained specifically to classify overlapped classes. The final classification of a test heartbeat pertaining to a particular class is done using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model is able to achieve a high classification accuracy.
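The two-level decision scheme described above can be sketched generically. The tie rule and the `second_stage` callable standing in for the paper's overlap-trained SVM are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def knn_vote(train, query, k=3):
    """Vote counts among the k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(2)

def hybrid_predict(train, query, second_stage, k=3):
    """Level 1: kNN vote; on an ambiguous vote (a tie between two classes,
    i.e. potentially overlapped classes), defer to `second_stage`."""
    top = knn_vote(train, query, k)
    if len(top) == 2 and top[0][1] == top[1][1]:
        return second_stage(query)
    return top[0][0]
```

In the paper's setting `second_stage` would be an SVM trained specifically on the overlapped arrhythmia classes; here any callable will do.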
Li, Juntao; Wang, Yanyan; Jiang, Tao; Xiao, Huimin; Song, Xuekun
2018-05-09
Diagnosing acute leukemia is a necessary prerequisite to treating it. Multi-class classification of acute leukemia gene expression data, which comprises B-cell acute lymphoblastic leukemia (BALL), T-cell acute lymphoblastic leukemia (TALL) and acute myeloid leukemia (AML), helps with this diagnosis. However, selecting cancer-causing genes is a challenging problem in performing multi-classification. In this paper, weighted gene co-expression networks are employed to divide the genes into groups. Based on these groups, a new regularized multinomial regression with an overlapping group lasso penalty (MROGL) is presented that simultaneously performs multi-classification and selects gene groups. By applying this method to three-class acute leukemia data, the grouped genes that work synergistically are identified, and the overlapped genes shared by different groups are also highlighted. Moreover, MROGL outperforms five other methods in multi-classification accuracy. Copyright © 2017. Published by Elsevier B.V.
Multiple Sparse Representations Classification
Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik
2015-01-01
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. Thus, instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic that is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods.
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
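The minimum-residual decision rule at the heart of SRC can be illustrated with a toy sketch. For brevity this assumes a sparsity level of one (best single atom per class dictionary) instead of a full sparse solver, and the dictionaries and patches below are hypothetical:

```python
import math

def residual(patch, atom):
    """Least-squares residual of approximating `patch` by a scalar multiple of `atom`."""
    aa = sum(a * a for a in atom)
    coeff = sum(p * a for p, a in zip(patch, atom)) / aa if aa else 0.0
    return math.sqrt(sum((p - coeff * a) ** 2 for p, a in zip(patch, atom)))

def src_classify(patch, dictionaries):
    """Assign `patch` to the class whose dictionary yields the smallest residual.
    `dictionaries` maps class label -> list of atoms (feature vectors)."""
    best = None
    for label, atoms in dictionaries.items():
        r = min(residual(patch, atom) for atom in atoms)
        if best is None or r < best[0]:
            best = (r, label)
    return best[1]
```

mSRC generalizes this by collecting several residuals per dictionary rather than a single one.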
Effective Feature Selection for Classification of Promoter Sequences.
K, Kouser; P G, Lavanya; Rangarajan, Lalitha; K, Acharya Kshitish
2016-01-01
Exploring novel computational methods for making sense of biological data has been not only necessary but also productive. Part of this trend is the search for more efficient in silico methods/tools for the analysis of promoters, which are the parts of DNA sequences involved in regulating the expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short regions called motifs. In fact, the regulatory nature of promoters seems to be largely driven by the selective presence and/or the arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way for functional analysis of promoters and for discovering the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features to explore the possibility of accurately classifying promoter sequences using some of the popular classification techniques. The classification results on the complete feature set are low, perhaps due to the huge number of features. We propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and the ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some popular feature transformation methods such as PCA and SVD. Also, the proposed methods are as accurate as MRMR (a feature selection method) but much faster than MRMR. Such methods could be useful for categorizing new promoters and exploring the regulatory mechanisms of gene expression in complex eukaryotic species.
E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers
NASA Technical Reports Server (NTRS)
Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.
2005-01-01
Electronic nose (e-nose) vapor identification is an efficient approach to monitoring air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k-nearest neighbor (KNN), and Parzen classifiers. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers into the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and that the classification rate can be improved by using classifier fusion.
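Dempster's rule of combination, which the abstract above uses to fuse classifier outputs, can be written compactly. This is a generic textbook implementation, not the paper's code; hypotheses are represented as frozensets of class labels:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to mass)
    with Dempster's rule; mass assigned to conflicting (disjoint) hypothesis
    pairs is discarded and the rest is renormalized."""
    combined = {}
    conflict = 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}
```

Each classifier's (softened) output becomes one mass function; folding `dempster_combine` over all of them yields the fused belief from which the final class is read off.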
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one-error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
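The bag-of-words quantization step described above can be sketched for a univariate series. The fixed non-overlapping window stride, the Euclidean codeword assignment, and the toy codebook are illustrative assumptions, not the paper's exact pipeline:

```python
def bag_of_words(series, codebook, window=4):
    """Quantize a time series into a normalized histogram over codebook entries:
    slide a non-overlapping window, assign each window to its nearest codeword
    (squared Euclidean distance), and count occurrences."""
    hist = [0] * len(codebook)
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        dists = [sum((a - b) ** 2 for a, b in zip(w, code)) for code in codebook]
        hist[dists.index(min(dists))] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

The resulting fixed-length histogram is what the dimensionality reduction and multi-label classifiers then consume, regardless of the original series length.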
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Objectives Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports cover nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures, approaching 85% to 90% for most metrics with a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. Conclusion The proposed system is feasible and practical for automatic classification of ICD-10-coded causes of death from autopsy reports. The proposed system assists pathologists to accurately and rapidly determine the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports. PMID:28166263
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to conduct the classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Jian; Liu, Gui-xiong
2016-09-01
The identification of targets varies in different surge tests. A multi-color-space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for identifying the status of equipment under test was proposed, because the previous feature-matching approach to status identification had to train new patterns every time before testing. First, the color space to segment (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the image's ratios of high-luminance and white-luminance points. Second, an unknown-class sample Sr was classified by the k-NN algorithm with training set Tz according to its feature vector, formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equaled k, Sr was added as a sample of the pre-training set Tz'. The training set Tz grew to Tz+1 by merging Tz' whenever Tz' became saturated. On nine series of illuminant, indicator-light, screen, and disturbance samples (21600 frames in total), the algorithm achieved 98.65% identification accuracy and enlarged the training set from T0 to T5 by itself.
Multi-Agent Information Classification Using Dynamic Acquaintance Lists.
ERIC Educational Resources Information Center
Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed
2003-01-01
Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…
NASA Astrophysics Data System (ADS)
Liu, Tao; Abd-Elrahman, Amr
2018-05-01
Deep convolutional neural networks (DCNN) require massive training datasets to trigger their image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-view Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially, from 65.32% when DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of DCNN over traditional classifiers in terms of accuracy is more obvious under the proposed multi-view OBIA framework than under the traditional OBIA framework.
An ℓ2,1-norm regularized multi-kernel learning for false positive reduction in lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer-Aided Detection (CAD). In this paper, we describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1-norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel-weight parameters in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1-norm formulation of multiple kernel learning, with a theoretical guarantee of the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with fusion strategies at the feature level and decision level, the proposed ℓ2,1-norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
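The key ingredient of proximal methods for the ℓ2,1 norm is group soft-thresholding. A minimal, generic sketch of that proximal operator (not the paper's optimizer) is:

```python
import math

def prox_l21(groups, step):
    """Proximal operator of the l2,1 norm: for each group (list of weights),
    shrink its l2 norm by `step` (group soft-thresholding); any group whose
    norm falls at or below `step` is zeroed out entirely, which is what
    prunes whole feature subsets / kernels at once."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(x * x for x in g))
        if norm <= step:
            out.append([0.0] * len(g))
        else:
            scale = (norm - step) / norm
            out.append([scale * x for x in g])
    return out
```

Inside a FISTA-style loop, this operator is applied to the kernel weights after each gradient step on the smooth part of the objective.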
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
Various kinds of data mining algorithms are continuously being proposed with the development of related disciplines. The applicable scopes and performances of these algorithms differ. Hence, finding a suitable algorithm for a dataset is becoming an important emphasis for biomedical researchers seeking to solve practical problems promptly. In this paper, seven well-established algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 top-click UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced multi-class datasets. Simple algorithms, such as the naïve Bayes and logistic regression models, are suitable for small datasets with high correlation between the task and other non-task attribute variables. K-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept on small, balanced binary-class datasets. No algorithm can maintain the best performance on all datasets. The applicability of the seven data mining algorithms on datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
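Two of the dataset descriptors mentioned above, the class entropy of the task variable and the largest-to-smallest class size ratio, are straightforward to compute:

```python
import math
from collections import Counter

def dataset_character(labels):
    """Return (class entropy in bits, imbalance ratio) for a label column:
    entropy = -sum p_i * log2(p_i); ratio = largest class size / smallest."""
    counts = Counter(labels)
    n = len(labels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    ratio = max(counts.values()) / min(counts.values())
    return entropy, ratio
```

A perfectly balanced binary task gives entropy 1 bit and ratio 1; skewed classes lower the entropy and raise the ratio.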
Abnormal fronto-striatal activation as a marker of threshold and subthreshold Bulimia Nervosa.
Cyr, Marilyn; Yang, Xiao; Horga, Guillermo; Marsh, Rachel
2018-04-01
This study aimed to determine whether functional disturbances in fronto-striatal control circuits characterize adolescents with Bulimia Nervosa (BN) spectrum eating disorders regardless of clinical severity. Functional MRI (fMRI) was used to assess conflict-related brain activation during performance of a Simon task in two samples of adolescents with BN symptoms compared with healthy adolescents. The BN samples differed in the severity of their clinical presentation, illness duration and age. Multi-voxel pattern analyses (MVPAs) based on machine learning were used to determine whether patterns of fronto-striatal activation characterized adolescents with BN spectrum disorders regardless of clinical severity, and whether accurate classification of less symptomatic adolescents (subthreshold BN; SBN) could be achieved based on patterns of activation in adolescents who met DSM-5 criteria for BN. MVPA classification analyses revealed that both BN and SBN adolescents could be accurately discriminated from healthy adolescents based on fronto-striatal activation. Notably, the patterns detected in more severely ill BN adolescents compared with healthy adolescents accurately discriminated less symptomatic SBN adolescents from healthy adolescents. Deficient activation of fronto-striatal circuits can characterize BN early in its course, when clinical presentations are less severe, perhaps pointing to circuit-based disturbances as a useful biomarker or risk factor for the disorder and a tool for understanding its developmental trajectory, as well as for developing early interventions. © 2018 Wiley Periodicals, Inc.
Big genomics and clinical data analytics strategies for precision cancer prognosis.
Ow, Ghim Siong; Kuznetsov, Vladimir A
2016-11-07
The field of personalized and precise medicine in the era of big data analytics is growing rapidly. Previously, we proposed our model of patient classification termed Prognostic Signature Vector Matching (PSVM) and identified a 37-variable signature comprising 36 let-7b-associated prognostically significant mRNAs and the age risk factor, which stratified large high-grade serous ovarian cancer patient cohorts into three survival-significant risk groups. Here, we investigated the predictive performance of PSVM via optimization of the prognostic variable weights, which represent the relative importance of one prognostic variable over the others. In addition, we compared several multivariate prognostic models based on PSVM with classical machine learning techniques such as K-nearest neighbor, support vector machine, random forest, neural networks and logistic regression. Our results revealed that negative log-rank p-values provide more robust weight values than other quantities such as hazard ratios, fold change, or a combination of those factors. PSVM and the classical machine learning classifiers were combined in an ensemble (multi-test) voting system, which collectively provides more precise and reproducible patient stratification. The use of the multi-test system approach, rather than the search for the ideal classification/prediction method, might help to address the limitations of individual classification algorithms in specific situations.
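The multi-test voting idea can be sketched generically. The tie-breaking rule here (preferring the higher-risk group) is an assumption chosen for illustration, not taken from the paper:

```python
from collections import Counter

def multi_test_vote(classifiers, sample):
    """Combine risk-group predictions from several classifiers by majority
    vote. Ties go to the numerically higher risk group, a conservative
    choice assumed here for illustration; labels must be comparable."""
    votes = Counter(clf(sample) for clf in classifiers)
    # max by (vote count, label): more votes wins; on a tie, the higher label
    return max(votes.items(), key=lambda kv: (kv[1], kv[0]))[0]
```

In the paper's setting, PSVM and the classical classifiers (KNN, SVM, RF, etc.) would each supply one callable in `classifiers`.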
Bentley, T William
2006-08-25
A recently proposed multi-parameter correlation, log k(25 °C) = sf(Ef + Nf), where Ef is electrofugality and Nf is nucleofugality, for the substituent and solvent effects on the rate constants for solvolyses of benzhydryl and substituted benzhydryl substrates, is re-evaluated. A new formula (Ef = log k(RCl/EtOH/25 °C) − 1.87), where RCl/EtOH refers to ethanolysis of chlorides, reproduces published values of Ef satisfactorily, avoids multi-parameter optimisations and provides additional values of Ef. From the formula for Ef, it is shown that the term (sf × Ef) is compatible with the Hammett-Brown (ρσ+) equation for substituent effects. However, the previously published values of Nf do not accurately account for solvent and leaving group effects (e.g. nucleofuge Cl or X), even for benzhydryl solvolyses; alternatively, if the more exact two-parameter term (sf × Nf) is used, calculated effects are less accurate. A new formula (Nf = 6.14 + log k(BX/any solvent/25 °C)), where BX refers to solvolysis of the parent benzhydryl as electrofuge, defines improved Nf values for benzhydryl substrates. The new formulae for Ef and Nf are consistent with the assumption that sf = 1.00, and so improved correlations for benzhydryl substrates can be obtained from the additive formula log k(RX/any solvent/25 °C) = Ef + Nf. Possible extensions of this approach are also discussed.
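The additive formulae in the abstract above translate directly into a small calculation. The numeric rate constants below are hypothetical, chosen only to exercise the formulae (with sf = 1.00 as the abstract assumes):

```python
def electrofugality(log_k_chloride_ethanol):
    """Ef = log k(RCl/EtOH/25 C) - 1.87, from ethanolysis of the chloride."""
    return log_k_chloride_ethanol - 1.87

def nucleofugality(log_k_parent_benzhydryl):
    """Nf = 6.14 + log k(BX/solvent/25 C), from solvolysis of the parent
    benzhydryl substrate in the solvent of interest."""
    return 6.14 + log_k_parent_benzhydryl

def predicted_log_k(ef, nf):
    """Additive estimate log k = Ef + Nf (assumes sf = 1.00)."""
    return ef + nf
```

Given measured log k values for the two reference reactions, the predicted rate constant for any substrate/solvent pair follows by simple addition.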
NASA Astrophysics Data System (ADS)
Alevizos, Evangelos; Snellen, Mirjam; Simons, Dick; Siemes, Kerstin; Greinert, Jens
2018-06-01
This study applies three classification methods exploiting the angular dependence of acoustic seafloor backscatter along with high resolution sub-bottom profiling for seafloor sediment characterization in the Eckernförde Bay, Baltic Sea, Germany. This area is well suited for acoustic backscatter studies due to its shallowness, its smooth bathymetry and the presence of a wide range of sediment types. Backscatter data were acquired using a Seabeam1180 (180 kHz) multibeam echosounder, and sub-bottom profiler data were recorded using an SES-2000 parametric sonar transmitting at 6 and 12 kHz. The high density of seafloor soundings allowed the extraction of backscatter layers for five beam angles over a large part of the surveyed area. A Bayesian probability method was employed for sediment classification based on the backscatter variability at a single incidence angle, whereas Maximum Likelihood Classification (MLC) and Principal Components Analysis (PCA) were applied to the multi-angle layers. The Bayesian approach was used for identifying the optimum number of acoustic classes because cluster validation is carried out prior to class assignment and class outputs are ordinal categorical values. The method is based on the principle that backscatter values from a single incidence angle express a normal distribution for a particular sediment type. The resulting Bayesian classes were well correlated to median grain sizes and the percentage of coarse material. The MLC method uses angular response information from five layers of training areas extracted from the Bayesian classification map. The subsequent PCA analysis is based on the transformation of these five layers into two principal components that comprise most of the data variability. These principal components were clustered into five classes after running an external cluster validation test.
In general, both methods (MLC and PCA) separated the various sediment types effectively, showing good agreement (kappa > 0.7) with the Bayesian approach, which in turn correlates well with ground-truth data (r² > 0.7). In addition, sub-bottom data were used in conjunction with the Bayesian classification results to characterize acoustic classes with respect to their geological and stratigraphic interpretation. The joint interpretation of seafloor and sub-seafloor data sets proved to be an efficient approach for better understanding seafloor backscatter patchiness and for discriminating acoustically similar classes in different geological/bathymetric settings.
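The dimensionality-reduction step described above, five angular backscatter layers compressed to two principal components, can be sketched with a plain SVD (synthetic data; the grid size and backscatter statistics are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack: backscatter (dB) for 5 beam angles at 1000 grid cells
layers = rng.normal(-30, 4, size=(1000, 5))

# Centre each angular layer, then project onto the two leading
# principal components, which capture most of the angular variability
X = layers - layers.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc = X @ Vt[:2].T          # shape (1000, 2): inputs for the final clustering

explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(pc.shape, round(float(explained), 2))
```

In the study, the two components would then be clustered into five classes after cluster validation; with real angular-response data the explained-variance fraction is typically much higher than for the uncorrelated noise used here.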
Zhou, Tao; Li, Zhaofu; Pan, Jianjun
2018-01-27
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, while the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of 91.55% and a kappa coefficient of 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four feature types gave the best classification result. Multi-sensor mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of 99.12% and a kappa coefficient of 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively.
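A minimal sketch of a window-based texture feature follows (a simple local-variance texture computed over a configurable window; the study's exact texture measures are not specified in the abstract, so this is an assumed stand-in illustrating the window-size parameter):

```python
import numpy as np

def variance_texture(img, win=9):
    """Local variance texture of a backscatter image over a win x win window.

    The window size is the tunable parameter discussed in the study
    (e.g. 9 x 9 for the combined texture features).
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

img = np.arange(100, dtype=float).reshape(10, 10)  # toy intensity image
tex = variance_texture(img, win=3)
print(tex.shape)  # (10, 10)
```

Stacking such texture layers with coherence and intensity bands yields the per-pixel feature vectors that a random forest classifier would consume.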
Sauwen, N; Acou, M; Van Cauter, S; Sima, D M; Veraart, J; Maes, F; Himmelreich, U; Achten, E; Van Huffel, S
2016-01-01
Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with a different acquisition protocol were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice-scores for the pathologic tissue classes on both datasets.
Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber
NASA Astrophysics Data System (ADS)
Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit
2007-10-01
This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated by background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution induced by a single charged particle over the strips to classify the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its weights at each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
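The iterative weighted least squares idea can be sketched as follows (a generic robust straight-line refit with a simple residual-based down-weighting rule; the actual cluster classification and ATLAS weighting scheme are not reproduced here):

```python
import numpy as np

def robust_track_fit(z, y, weights, n_iter=5):
    """Fit a straight track y = a*z + b by iteratively reweighted least squares.

    z, y   : hit coordinates along and across the chamber
    weights: initial per-hit weights (e.g. lower for "dirty" clusters)
    Each iteration refits the line, then down-weights hits with large residuals.
    """
    z, y = np.asarray(z, float), np.asarray(y, float)
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        A = np.vstack([z, np.ones_like(z)]).T
        W = np.diag(w)
        a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted normal eqs
        resid = np.abs(y - (a * z + b))
        scale = np.median(resid) + 1e-9
        w = weights / (1.0 + (resid / scale) ** 2)  # shrink weight of outliers
    return a, b

z = np.array([0., 1., 2., 3., 4.])
y = np.array([0.1, 1.0, 2.1, 9.0, 4.0])  # the hit at z=3 is contaminated
a, b = robust_track_fit(z, y, weights=np.ones(5))
print(round(a, 2))  # slope, close to the true value of 1
```

An ordinary unweighted least squares fit on the same hits would be pulled strongly toward the contaminated point; the reweighting suppresses it after a few iterations.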
Optimal mapping of site-specific multivariate soil properties.
Burrough, P A; Swindell, J
1997-01-01
This paper demonstrates how geostatistics and fuzzy k-means classification can be used together to improve our practical understanding of crop yield-site response. Two aspects of soil are important for precision farming: (a) sensible classes for a given crop, and (b) their spatial variation. Local site classifications are more sensitive than general taxonomies and can be provided by the method of fuzzy k-means to transform a multivariate data set with i attributes measured at n sites into k overlapping classes; each site has a membership value mk for each class in the range 0-1. Soil variation is of interest when conditions vary over patches manageable by agricultural machinery. The spatial variation of each of the k classes can be analysed by computing the variograms of mk over the n sites. Memberships for each of the k classes can be mapped by ordinary kriging. Areas of class dominance and the transition zones between them can be identified by an inter-class confusion index; reducing the zones to boundaries gives crisp maps of dominant soil groups that can be used to guide precision farming equipment. Automation of the procedure is straightforward given sufficient data. Time variations in soil properties can be automatically incorporated in the computation of membership values. The procedures are illustrated with multi-year crop yield data collected from a 5 ha demonstration field at the Royal Agricultural College in Cirencester, UK.
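The fuzzy k-means step described above can be sketched in a few lines (a minimal implementation of the standard membership/centroid updates on synthetic two-attribute site data; all data and parameter choices are assumptions):

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy k-means: returns membership matrix U (n, k) and centroids.

    m is the fuzziness exponent; memberships for each site sum to 1,
    playing the role of the mk values mapped by kriging in the paper.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(k), size=n)          # random initial memberships
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard membership update
    return U, C

# Two obvious groups of sites in attribute space
X = np.vstack([np.random.default_rng(1).normal(0, .1, (20, 2)),
               np.random.default_rng(2).normal(5, .1, (20, 2))])
U, C = fuzzy_kmeans(X, k=2)
print(U.sum(axis=1)[:3])  # each row sums to 1
```

In the workflow described, the per-class membership columns of U would then be kriged across the field, and a confusion index computed from the two largest memberships at each location.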
NASA Astrophysics Data System (ADS)
Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye
2016-06-01
This study aims to compare classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan area of Turkey, with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units, and barren/mine land cover/use classes, adapted from the CORINE nomenclature, were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to a 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a comparable base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After accuracy assessment, the results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
Automatic classification of time-variable X-ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, Kitty K.; Farrell, Sean; Murphy, Tara
2014-05-01
To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ∼97% on a seven-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, in contrast to conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer containing three different convolution kernel sizes. Second, to avoid overfitting of the deep network, dropout is utilized; it randomly deactivates neurons and contributes a modest improvement in classification accuracy. In addition, deep learning techniques such as the ReLU activation are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
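The multi-scale convolution idea, the same input filtered at several kernel sizes with the responses concatenated, can be shown in miniature (random, unlearned kernels and a hypothetical 103-band spectrum are used purely to illustrate the shapes; a real network learns the kernels by backpropagation):

```python
import numpy as np

def multi_scale_features(spectrum, kernel_sizes=(3, 5, 7), seed=0):
    """Convolve a pixel's spectrum with kernels of three sizes and
    concatenate the responses: the multi-scale layer in miniature.
    """
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        kernel = rng.normal(size=k)                       # stand-in for a learned kernel
        feats.append(np.convolve(spectrum, kernel, mode="same"))
    return np.concatenate(feats)

spectrum = np.random.default_rng(1).normal(size=103)      # e.g. 103 spectral bands
f = multi_scale_features(spectrum)
print(f.shape)  # (309,)
```

Each kernel size sees a different spectral neighbourhood, so the concatenated vector carries features at three scales, which downstream layers can weigh against each other.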
A multi-layer MRI description of Parkinson's disease
NASA Astrophysics Data System (ADS)
La Rocca, M.; Amoroso, N.; Lella, E.; Bellotti, R.; Tangaro, S.
2017-09-01
Magnetic resonance imaging (MRI) combined with complex network analysis is currently one of the most widely adopted techniques for detecting structural changes in neurological diseases, such as Parkinson's Disease (PD). In this paper, we present a digital image processing study, within the multi-layer network framework, combining multiple classifiers to evaluate the informative power of MRI features for discriminating normal controls (NC) from PD subjects. We define a network for each MRI scan; the nodes are the sub-volumes (patches) into which the images are divided, and the links are defined using Pearson's pairwise correlation between patches. We obtain a multi-layer network whose most important network features, obtained with different feature selection methods, are used to feed a supervised multi-level random forest classifier which exploits this base of knowledge for accurate classification. Method evaluation was carried out using T1 MRI scans of 354 individuals, including 177 PD subjects and 177 NC from the Parkinson's Progression Markers Initiative (PPMI) database. The experimental results demonstrate that the features obtained from multiplex networks accurately describe PD patterns. Moreover, even if a privileged scale for studying PD exists, exploring the informative content of multiple scales leads to a significant improvement in the discrimination between diseased and healthy subjects. In particular, this method gives a comprehensive overview of the brain regions statistically affected by the disease, an additional value of the presented study.
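Building one correlation layer of such a patch network might look as follows (synthetic patch data, the patch count, and the threshold value are assumptions; the authors' multi-layer construction uses several such layers at different scales):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical MRI: 64 patches, each flattened to 125 voxel intensities
patches = rng.normal(size=(64, 125))

# Pearson correlation between every pair of patches
R = np.corrcoef(patches)

# One network layer: keep links whose |correlation| exceeds a threshold
threshold = 0.2
A = (np.abs(R) > threshold).astype(int)
np.fill_diagonal(A, 0)            # no self-links

degree = A.sum(axis=1)            # a simple node-level network feature
print(R.shape)  # (64, 64)
```

Node-level descriptors such as the degree vector above (or strength, clustering, etc.) are the kind of network features that could then feed a random forest classifier.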
MultiSpec: A Desktop and Online Geospatial Image Data Processing Tool
NASA Astrophysics Data System (ADS)
Biehl, L. L.; Hsu, W. K.; Maud, A. R. M.; Yeh, T. T.
2017-12-01
MultiSpec is an easy to learn and use, freeware image processing tool for interactively analyzing a broad spectrum of geospatial image data, with capabilities such as image display, unsupervised and supervised classification, feature extraction, feature enhancement, and several other functions. Originally developed for Macintosh and Windows desktop computers, it has a community of several thousand users worldwide, including researchers and educators, as a practical and robust solution for analyzing multispectral and hyperspectral remote sensing data in several different file formats. More recently MultiSpec was adapted to run in the HUBzero collaboration platform so that it can be used within a web browser, allowing new user communities to be engaged through science gateways. MultiSpec Online has also been extended to interoperate with other components (e.g., data management) in HUBzero through integration with the geospatial data building blocks (GABBs) project. This integration enables a user to directly launch MultiSpec Online from data that is stored and/or shared in a HUBzero gateway and to save output data from MultiSpec Online to hub storage, allowing data sharing and multi-step workflows without having to move data between different systems. MultiSpec has also been used in K-12 classes for which one example is the GLOBE program (www.globe.gov) and in outreach material such as that provided by the USGS (eros.usgs.gov/educational-activities). MultiSpec Online now provides teachers with another way to use MultiSpec without having to install the desktop tool. Recently MultiSpec Online was used in a geospatial data session with 30-35 middle school students at the Turned Onto Technology and Leadership (TOTAL) Camp in the summers of 2016 and 2017 at Purdue University. The students worked on a flood mapping exercise using Landsat 5 data to learn about land remote sensing using supervised classification techniques. 
Online documentation is available for MultiSpec (engineering.purdue.edu/~biehl/MultiSpec/), including a reference manual and several tutorials that allow users from young high-school students through research faculty to learn the basic functions in MultiSpec. Some of the tutorials have been translated into other languages by MultiSpec users.
Accurate quantum Z rotations with less magic
NASA Astrophysics Data System (ADS)
Landahl, Andrew; Cesare, Chris
2013-03-01
We present quantum protocols for executing arbitrarily accurate Z(π/2^k) rotations of a qubit about its Z axis. Unlike reduced instruction set computing (RISC) protocols, which use a two-step process of synthesizing high-fidelity ``magic'' states from which T = Z(π/4) gates can be teleported and then compiling a sequence of adaptive stabilizer operations and T gates to approximate Z(π/2^k), our complex instruction set computing (CISC) protocol distills magic states for the Z(π/2^k) gates directly. Replacing this two-step process with a single step results in substantial reductions in the number of gates needed. The key to our construction is a family of shortened quantum Reed-Muller codes of length 2^(k+2) − 1, whose distillation threshold shrinks with k but is greater than 0.85% for k ≤ 6. AJL and CC were supported in part by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
A Mixtures-of-Trees Framework for Multi-Label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at a single scale before the features are used for classification, and it remains unclear how the choice of scale(s) affects classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
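A common way to compute eigenvalue-based salient features at several neighbourhood scales is sketched below (the specific features, radii, and synthetic point cloud are assumptions, not necessarily the choices the authors made):

```python
import numpy as np

def salient_features(points, center, radii=(0.1, 0.3, 0.9)):
    """Eigenvalue-based salient features of a point's neighbourhood at
    several scales (radii), as often used in leaf/wood separation.
    """
    feats = []
    for r in radii:
        nb = points[np.linalg.norm(points - center, axis=1) <= r]
        if len(nb) < 3:                     # too few neighbours at this scale
            feats.extend([0.0, 0.0, 0.0])
            continue
        ev = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats.extend([(l1 - l2) / l1,       # linearity (branch-like)
                      (l2 - l3) / l1,       # planarity
                      l3 / l1])             # scattering (leaf-like)
    return np.array(feats)

pts = np.random.default_rng(0).normal(size=(500, 3))  # toy point cloud
f = salient_features(pts, center=np.zeros(3))
print(f.shape)  # (9,)
```

The single-scale variant uses one radius (3 features per point); the multi-scale variant concatenates all radii, which is where the accuracy gain reported above comes from, at the cost of the extra neighbourhood queries.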
NASA Astrophysics Data System (ADS)
Melville, Bethany; Lucieer, Arko; Aryal, Jagannath
2018-04-01
This paper presents a random forest classification approach for identifying and mapping three types of lowland native grassland communities found in the Tasmanian Midlands region. Due to the high conservation priority assigned to these communities, there has been an increasing need to identify appropriate datasets that can be used to derive accurate and frequently updateable maps of community extent. Therefore, this paper proposes a method employing repeat classification and statistical significance testing as a means of identifying the most appropriate dataset for mapping these communities. Two datasets were acquired and analysed; a Landsat ETM+ scene, and a WorldView-2 scene, both from 2010. Training and validation data were randomly subset using a k-fold (k = 50) approach from a pre-existing field dataset. Poa labillardierei, Themeda triandra and lowland native grassland complex communities were identified in addition to dry woodland and agriculture. For each subset of randomly allocated points, a random forest model was trained based on each dataset, and then used to classify the corresponding imagery. Validation was performed using the reciprocal points from the independent subset that had not been used to train the model. Final training and classification accuracies were reported as per class means for each satellite dataset. Analysis of Variance (ANOVA) was undertaken to determine whether classification accuracy differed between the two datasets, as well as between classifications. Results showed mean class accuracies between 54% and 87%. Class accuracy only differed significantly between datasets for the dry woodland and Themeda grassland classes, with the WorldView-2 dataset showing higher mean classification accuracies. 
The results of this study indicate that remote sensing is a viable method for the identification of lowland native grassland communities in the Tasmanian Midlands, and that repeat classification and statistical significance testing can be used to identify optimal datasets for vegetation community mapping.
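The significance test on repeated classification accuracies can be sketched with a hand-rolled one-way ANOVA F statistic (the per-fold class accuracies shown are invented for illustration; in the study there would be 50 folds per dataset):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across groups of per-fold accuracies.

    Large F suggests the datasets yield genuinely different mean accuracies
    for the class, rather than differences attributable to fold-to-fold noise.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-fold accuracies for one class, two datasets
landsat = [0.61, 0.58, 0.64, 0.60, 0.62]
worldview = [0.71, 0.69, 0.74, 0.70, 0.72]
F = one_way_anova_F(landsat, worldview)
print(round(F, 1))
```

The F value would then be compared against the critical value for (k − 1, n − k) degrees of freedom to decide whether the accuracy difference between the datasets is significant.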
NASA Astrophysics Data System (ADS)
Sigman, John Brevard
Buried explosive hazards present a pressing problem worldwide. Millions of acres and thousands of sites are contaminated in the United States alone [1, 2]. There are three categories of explosive hazards: metallic, intermediate-electrical-conducting (IEC), and non-conducting targets. Metallic target detection and classification by electromagnetic (EM) signature has been the subject of research for many years. Key to the success of this research are modern multi-static Electromagnetic Induction (EMI) sensors, which are able to measure the wideband EMI response of metallic buried targets. However, no hardware solutions exist which can characterize IEC and non-conducting targets. While high-conducting metallic targets exhibit a quadrature peak response at frequencies in the traditional EMI regime under 100 kHz, the response of intermediate-conducting objects manifests at higher frequencies, between 100 kHz and 15 MHz. In addition to high-quality electromagnetic sensor data and robust electromagnetic models, a classification procedure is required to discriminate Targets of Interest (TOI) from clutter. Currently, costly human experts are used for this task. This expense and effort can be spared by using statistical signal processing and machine learning. This thesis has two main parts. In the first part, we explore using the high frequency EMI (HFEMI) band (100 kHz-15 MHz) for detection of carbon fiber UXO, voids, and materials with characteristics that may be associated with improvised explosive devices (IEDs). We constructed an HFEMI sensing instrument and applied the techniques of metal detection to sensing in a band of frequencies which forms the transition between the induction and radar bands. In this transition domain, physical considerations and technological issues arise that cannot be solved via the approaches used in either of the bracketing lower and higher frequency ranges.
In the second half of this thesis, we present a procedure for automatic classification of UXO. For maximum generality, our algorithm is robust and can handle sparse training examples of multi-class data. This procedure uses an unsupervised starter, semi-supervised techniques to gather training data, and concludes with supervised learning until all TOI are found. Additionally, an inference method for estimating the number of remaining true positives from a partial Receiver Operating Characteristic (ROC) curve is presented and applied to live-site dig histories.
Applicability Assessment of Uavsar Data in Wetland Monitoring: a Case Study of Louisiana Wetland
NASA Astrophysics Data System (ADS)
Zhao, J.; Niu, Y.; Lu, Z.; Yang, J.; Li, P.; Liu, W.
2018-04-01
Wetlands are highly productive and support a wide variety of ecosystem goods and services, so monitoring them is both essential and promising. Because of the repeat-pass nature of satellite and airborne platforms, time series of remote sensing data can be obtained to monitor wetlands. UAVSAR is a NASA L-band synthetic aperture radar (SAR) sensor, a compact pod-mounted polarimetric instrument for interferometric repeat-track observations. UAVSAR images can accurately map crustal deformations associated with natural hazards, such as volcanoes and earthquakes, and the sensor's polarization agility facilitates terrain and land-use classification and change detection. In this paper, multi-temporal UAVSAR data are applied to monitoring wetland change. Using the multi-temporal polarimetric SAR (PolSAR) data, change detection maps are obtained by unsupervised and supervised methods, and coherence is extracted from the interferometric SAR (InSAR) data to verify the accuracy of the change detection map. The experimental results show that multi-temporal UAVSAR data are well suited for wetland monitoring.
Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach
F. Briggs; B. Lakshminarayanan; L. Neal; X.Z. Fern; R. Raich; S.F. Hadley; A.S. Hadley; M.G. Betts
2012-01-01
Although field-collected recordings typically contain multiple simultaneously vocalizing birds of different species, acoustic species classification in this setting has received little study so far. This work formulates the problem of classifying the set of species present in an audio recording using the multi-instance multi-label (MIML) framework for machine learning...
Wilkerson, Richard C.; Linton, Yvonne-Marie; Fonseca, Dina M.; Schultz, Ted R.; Price, Dana C.; Strickman, Daniel A.
2015-01-01
The tribe Aedini (Family Culicidae) contains approximately one-quarter of the known species of mosquitoes, including vectors of deadly or debilitating disease agents. This tribe contains the genus Aedes, which is one of the three most familiar genera of mosquitoes. During the past decade, Aedini has been the focus of a series of extensive morphology-based phylogenetic studies published by Reinert, Harbach, and Kitching (RH&K). Those authors created 74 new, elevated or resurrected genera from what had been the single genus Aedes, almost tripling the number of genera in the entire family Culicidae. The proposed classification is based on subjective assessments of the “number and nature of the characters that support the branches” subtending particular monophyletic groups in the results of cladistic analyses of a large set of morphological characters of representative species. To gauge the stability of RH&K’s generic groupings we reanalyzed their data with unweighted parsimony jackknife and maximum-parsimony analyses, with and without ordering 14 of the characters as in RH&K. We found that their phylogeny was largely weakly supported and their taxonomic rankings failed priority and other useful taxon-naming criteria. Consequently, we propose simplified aedine generic designations that 1) restore a classification system that is useful for the operational community; 2) enhance the ability of taxonomists to accurately place new species into genera; 3) maintain the progress toward a natural classification based on monophyletic groups of species; and 4) correct the current classification system that is subject to instability as new species are described and existing species more thoroughly defined. We do not challenge the phylogenetic hypotheses generated by the above-mentioned series of morphological studies. 
However, we reduce the ranks of the genera and subgenera of RH&K to subgenera or informal species groups, respectively, to preserve stability as new data become available. PMID:26226613
Fiot, Jean-Baptiste; Cohen, Laurent D; Raniga, Parnesh; Fripp, Jurgen
2013-09-01
Support vector machines (SVM) are machine learning techniques that have been used for segmentation and classification of medical images, including segmentation of white matter hyper-intensities (WMH). Current approaches using SVM for WMH segmentation extract features from the brain and classify these, followed by complex post-processing steps to remove false positives. The method presented in this paper combines advanced pre-processing, tissue-based feature selection and SVM classification to obtain efficient and accurate WMH segmentation. Features from 125 patients, generated from up to four MR modalities [T1-w, T2-w, proton-density and fluid attenuated inversion recovery (FLAIR)], differing neighbourhood sizes and the use of multi-scale features were compared. We found that although using all four modalities gave the best overall classification (average Dice scores of 0.54 ± 0.12, 0.72 ± 0.06 and 0.82 ± 0.06 for small, moderate and severe lesion loads, respectively), this was not significantly different (p = 0.50) from using just the T1-w and FLAIR sequences (Dice scores of 0.52 ± 0.13, 0.71 ± 0.08 and 0.81 ± 0.07). Furthermore, there was a negligible difference between using 5 × 5 × 5 and 3 × 3 × 3 features (p = 0.93). Finally, we show that careful consideration of features and pre-processing techniques not only saves storage space and computation time but also leads to more efficient classification, which outperforms classification based on all features with post-processing. Copyright © 2013 John Wiley & Sons, Ltd.
Lima, Ana Carolina E S; de Castro, Leandro Nunes
2014-10-01
Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media has attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts, instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict the personality of Tweets taken from three datasets available in the literature, and resulted in an approximately 83% accurate prediction, with some of the personality traits presenting better individual classification rates than others. Copyright © 2014 Elsevier Ltd. All rights reserved.
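The transformation of the five-trait task into five binary problems (binary relevance) can be sketched as follows, with a simple nearest-centroid learner standing in for the Naïve Bayes / SVM / MLP classifiers used in the paper; all data here are synthetic:

```python
import numpy as np

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

class BinaryRelevance:
    """Turn a 5-trait multi-label task into five independent binary problems,
    one classifier per trait (binary relevance transformation)."""

    def fit(self, X, Y):
        # For each trait, store the mean feature vector of positive/negative texts
        self.centroids = []
        for j in range(Y.shape[1]):
            pos = X[Y[:, j] == 1].mean(axis=0)
            neg = X[Y[:, j] == 0].mean(axis=0)
            self.centroids.append((pos, neg))
        return self

    def predict(self, X):
        Y = np.zeros((len(X), len(self.centroids)), dtype=int)
        for j, (pos, neg) in enumerate(self.centroids):
            d_pos = np.linalg.norm(X - pos, axis=1)
            d_neg = np.linalg.norm(X - neg, axis=1)
            Y[:, j] = (d_pos < d_neg).astype(int)   # closer centroid wins
        return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # meta-attributes of text groups
Y = (X[:, :5] > 0).astype(int)           # toy Big Five trait labels
model = BinaryRelevance().fit(X, Y)
acc = (model.predict(X) == Y).mean()
print(round(acc, 2))
```

In a semi-supervised setting like the one described, each of the five binary learners would additionally be retrained on its own confident predictions over the unlabelled tweets.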
Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy.
Baldeck, Claire A; Asner, Gregory P; Martin, Robin E; Anderson, Christopher B; Knapp, David E; Kellner, James R; Wright, S Joseph
2015-01-01
Remote identification and mapping of canopy tree species can contribute valuable information towards our understanding of ecosystem biodiversity and function over large spatial scales. However, the extreme challenges posed by highly diverse, closed-canopy tropical forests have prevented automated remote species mapping of non-flowering tree crowns in these ecosystems. We set out to identify individuals of three focal canopy tree species amongst a diverse background of tree and liana species on Barro Colorado Island, Panama, using airborne imaging spectroscopy data. First, we compared two leading single-class classification methods--binary support vector machine (SVM) and biased SVM--for their performance in identifying pixels of a single focal species. From this comparison we determined that biased SVM was more precise and created a multi-species classification model by combining the three biased SVM models. This model was applied to the imagery to identify pixels belonging to the three focal species and the prediction results were then processed to create a map of focal species crown objects. Crown-level cross-validation of the training data indicated that the multi-species classification model had pixel-level producer's accuracies of 94-97% for the three focal species, and field validation of the predicted crown objects indicated that these had user's accuracies of 94-100%. Our results demonstrate the ability of high spatial and spectral resolution remote sensing to accurately detect non-flowering crowns of focal species within a diverse tropical forest. We attribute the success of our model to recent classification and mapping techniques adapted to species detection in diverse closed-canopy forests, which can pave the way for remote species mapping in a wider variety of ecosystems. PMID:26153693
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance relative to that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
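FSR's actual reduction criterion is defined in the article and its MATLAB code; purely as a generic illustration of shrinking a feature set by orders of magnitude before running feature selection proper, a variance-based pre-filter might look like this (all data and the keep-fraction are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an SNP-array matrix: 20 samples x 1000 features,
# where only the first 10 features vary with the binary subtype.
n, p = 20, 1000
y = np.array([0] * 10 + [1] * 10)
X = rng.normal(size=(n, p)) * 0.01        # mostly near-constant noise
X[:, :10] += y[:, None] * 1.0             # informative block shifts with class
X[:, :10] += rng.normal(size=(n, 10))     # plus per-sample noise

def reduce_feature_set(X, keep_fraction=0.05):
    """Generic variance-based pre-filter: keep only the most variable
    fraction of features, shrinking the search space for any
    downstream feature-selection method (not FSR's actual criterion)."""
    var = X.var(axis=0)
    k = max(1, int(keep_fraction * X.shape[1]))
    return np.sort(np.argsort(var)[-k:])

keep = reduce_feature_set(X, keep_fraction=0.01)  # two orders of magnitude
print(len(keep), keep)
```

The pre-filter cuts 1000 features to 10 here, after which a conventional (and otherwise intractable) selection method can run on the reduced set.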
Single-trial EEG RSVP classification using convolutional neural networks
NASA Astrophysics Data System (ADS)
Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William
2016-05-01
Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
A Robust Deep Model for Improved Classification of AD/MCI Patients
Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang
2015-01-01
Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
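The dropout technique credited above with the 5.9% average gain can be sketched in a few lines; this is the standard "inverted dropout" formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p_drop, train=True):
    """Inverted dropout: randomly zero hidden units during training
    and rescale the survivors by 1/(1 - p_drop), so no change is
    needed at test time. Zeroing units prevents co-adaptation of
    weights, the over-fitting cause cited in the abstract."""
    if not train or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 10))                          # hypothetical hidden activations
h_train = dropout(h, p_drop=0.5)              # half the units zeroed, rest doubled
h_test = dropout(h, p_drop=0.5, train=False)  # identity at test time
print(h_train.mean(), h_test.mean())
```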
[Accuracy improvement of spectral classification of crop using microwave backscatter data].
Jia, Kun; Li, Qiang-Zi; Tian, Yi-Chen; Wu, Bing-Fang; Zhang, Fei-Fei; Meng, Ji-Hua
2011-02-01
In the present study, the use of VV-polarization microwave backscatter data to improve the accuracy of spectral crop classification is investigated. Classification accuracies obtained with different classifiers based on the fusion of HJ satellite multi-spectral data and Envisat ASAR VV backscatter data are compared. The results indicate that the fusion data take full advantage of the spectral information of the HJ multi-spectral data and the structure-sensitive nature of the ASAR VV polarization data. The fusion data enlarge the spectral differences among classes and improve crop classification accuracy: classification accuracy using the fusion data increased by 5 percent compared to the HJ data alone. Furthermore, ASAR VV polarization data are sensitive to the non-agrarian areas of planted fields, so including VV polarization data in the classification effectively distinguishes field borders. Combining VV polarization data with multi-spectral data for crop classification broadens the application of satellite data and shows potential for wider use in agriculture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djara, V.; Cherkaoui, K.; Negara, M. A.
2015-11-28
An alternative multi-frequency inversion-charge pumping (MFICP) technique was developed to directly separate the inversion charge density (N{sub inv}) from the trapped charge density in high-k/InGaAs metal-oxide-semiconductor field-effect transistors (MOSFETs). This approach relies on the fitting of the frequency response of border traps, obtained from inversion-charge pumping measurements performed over a wide range of frequencies at room temperature on a single MOSFET, using a modified charge trapping model. The obtained model yielded the capture time constant and density of border traps located at energy levels aligned with the InGaAs conduction band. Moreover, the combination of MFICP and pulsed I{sub d}-V{sub g} measurements enabled an accurate effective mobility vs N{sub inv} extraction and analysis. The data obtained using the MFICP approach are consistent with the most recent reports on high-k/InGaAs.
A label distance maximum-based classifier for multi-label learning.
Liu, Xiaoli; Bao, Hang; Zhao, Dazhe; Cao, Peng
2015-01-01
Multi-label classification is useful in many bioinformatics tasks such as gene function prediction and protein site localization. This paper presents an improved neural network algorithm, Max Label Distance Back Propagation Algorithm for Multi-Label Classification. The method was formulated by modifying the total error function of the standard BP by adding a penalty term, which was realized by maximizing the distance between the positive and negative labels. Extensive experiments were conducted to compare this method against state-of-the-art multi-label methods on three popular bioinformatic benchmark datasets. The results illustrated that this proposed method is more effective for bioinformatic multi-label classification compared to commonly used techniques.
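The idea of augmenting the standard BP error with a label-distance penalty can be sketched as below; the margin definition (difference of mean positive and mean negative outputs) and the weight `lam` are illustrative assumptions, and the paper's exact penalty term may differ:

```python
import numpy as np

def multilabel_loss(outputs, targets, lam=0.1):
    """Standard BP squared error plus a penalty that rewards a large
    distance between the network's positive-label and negative-label
    outputs (a generic form of the paper's idea, not its exact term)."""
    sq_err = 0.5 * np.sum((outputs - targets) ** 2)
    pos = outputs[targets == 1]
    neg = outputs[targets == 0]
    margin = pos.mean() - neg.mean()   # label-distance term
    return sq_err - lam * margin       # maximizing the margin lowers the loss

t = np.array([1, 1, 0, 0])                     # two positive, two negative labels
sharp = np.array([0.9, 0.8, 0.1, 0.2])         # well-separated outputs
flat = np.array([0.6, 0.6, 0.4, 0.4])          # barely-separated outputs
print(multilabel_loss(sharp, t), multilabel_loss(flat, t))
```

Gradient descent on this loss pushes positive-label outputs up and negative-label outputs down beyond what the squared error alone would do, which is the stated intent of the modification.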
NASA Astrophysics Data System (ADS)
Alfano, R.; Soetemans, D.; Bauman, G. S.; Gibson, E.; Gaed, M.; Moussa, M.; Gomez, J. A.; Chin, J. L.; Pautler, S.; Ward, A. D.
2018-02-01
Multi-parametric MRI (mp-MRI) is becoming a standard in contemporary prostate cancer screening and diagnosis, and has been shown to aid physicians in cancer detection. It offers many advantages over traditional systematic biopsy, which has been shown to have very high clinical false-negative rates of up to 23% at all stages of the disease. However beneficial, mp-MRI is relatively complex to interpret and suffers from inter-observer variability in lesion localization and grading. Computer-aided diagnosis (CAD) systems have been developed as a solution as they have the power to perform deterministic quantitative image analysis. We measured the accuracy of such a system validated using accurately co-registered whole-mount digitized histology. We trained a logistic linear classifier (LOGLC), support vector machine (SVC), k-nearest neighbour (KNN) and random forest classifier (RFC) in a four part ROI based experiment against: 1) cancer vs. non-cancer, 2) high-grade (Gleason score ≥4+3) vs. low-grade cancer (Gleason score <4+3), 3) high-grade vs. other tissue components and 4) high-grade vs. benign tissue by selecting the classifier with the highest AUC using 1-10 features from forward feature selection. The CAD model was able to classify malignant vs. benign tissue and detect high-grade cancer with high accuracy. Once fully validated, this work will form the basis for a tool that enhances the radiologist's ability to detect malignancies, potentially improving biopsy guidance, treatment selection, and focal therapy for prostate cancer patients, maximizing the potential for cure and increasing quality of life.
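Forward feature selection driven by AUC can be sketched as below; the crude per-subset score (mean of the selected features) stands in for training an actual LOGLC/SVC/KNN/RFC at each step, and the toy ROI data are invented:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum formulation
    (ties across classes are broken arbitrarily)."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def forward_select(X, y, k):
    """Greedy forward selection: at each step add the feature that
    most improves the AUC of a crude subset score (the mean of the
    selected columns -- a stand-in for retraining a classifier)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_auc = 0.0
    for _ in range(k):
        best_f, best_auc = None, -1.0
        for f in remaining:
            s = auc(X[:, selected + [f]].mean(axis=1), y)
            if s > best_auc:
                best_f, best_auc = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected, best_auc

# Toy ROI data: feature 2 separates the classes; 0 and 1 are noise.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([
    [3., 1., 4., 1., 5., 9., 2., 6.],
    [2., 7., 1., 8., 2., 8., 1., 8.],
    [0.1, 0.3, 0.2, 0.4, 1.1, 1.3, 1.2, 1.4],
])
selected, score = forward_select(X, y, k=1)
print(selected, score)
```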
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
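The "integral image" (summed-area table) underlying these features can be sketched directly; once the table is built, any box sum over the image costs at most four lookups, independent of box size:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] holds the sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups,
    constant time regardless of the box size."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())
```

Because the table and the lookups use only additions and subtractions on integers, this style of feature is a natural fit for the integer-arithmetic, FPGA-friendly constraint described above.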
Pan, Jianjun
2018-01-01
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively. PMID:29382073
A Comparison of Artificial Intelligence Methods on Determining Coronary Artery Disease
NASA Astrophysics Data System (ADS)
Babaoğlu, Ismail; Baykan, Ömer Kaan; Aygül, Nazif; Özdemir, Kurtuluş; Bayrak, Mehmet
The aim of this study is to present a comparison of a multi-layered perceptron neural network (MLPNN) and a support vector machine (SVM) for determining the existence of coronary artery disease from exercise stress testing (EST) data. EST and coronary angiography were performed on 480 patients, with 23 verifying features acquired from each. The robustness of the proposed methods is examined using classification accuracy, the k-fold cross-validation method and Cohen's kappa coefficient. The obtained classification accuracies are approximately 78% and 79% for MLPNN and SVM, respectively. Judging by Cohen's kappa coefficients, both the MLPNN and SVM methods are more satisfactory than the human-based method. Moreover, SVM is slightly better than MLPNN in terms of diagnostic accuracy, the average of sensitivity and specificity, and Cohen's kappa coefficient.
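Cohen's kappa, used above to compare the methods against chance-corrected agreement, is simple to compute; the toy ratings below are invented:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance from each
    rater's marginal label frequencies."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # e.g. angiography outcome
pred  = [1, 1, 1, 0, 0, 0, 0, 1]   # e.g. classifier output
print(cohens_kappa(truth, pred))
```

Here the raw agreement is 0.75, but half of that is expected by chance, so kappa is 0.5; this chance correction is what makes kappa a fairer comparison than raw accuracy.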
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the labels of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
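The Bayesian combination at the heart of the method (discriminative prior times generative likelihood, renormalized) can be sketched per voxel; the label set and scores below are invented stand-ins for the Random Forest and GMM outputs:

```python
import numpy as np

def fuse(prior, likelihood):
    """Combine a discriminative prior P(label | voxel) with a
    generative patch likelihood P(patch | label) under Bayes' rule:
    posterior is proportional to prior * likelihood, renormalized
    over the label set."""
    post = prior * likelihood
    return post / post.sum()

# Hypothetical scores for one voxel over three labels
# (background, epicardium, endocardium):
prior = np.array([0.2, 0.5, 0.3])        # e.g. Random Forest class scores
likelihood = np.array([0.1, 0.3, 0.6])   # e.g. per-label GMM patch likelihoods
posterior = fuse(prior, likelihood)
print(posterior, posterior.argmax())
```

Note how the fused decision (label 2) differs from the prior's argmax (label 1): the generative likelihood can overrule a weakly confident discriminative score, which is the point of the hybrid.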
Brain tissue segmentation in 4D CT using voxel classification
NASA Astrophysics Data System (ADS)
van den Boom, R.; Oei, M. T. H.; Lafebre, S.; Oostveen, L. J.; Meijer, F. J. A.; Steens, S. C. A.; Prokop, M.; van Ginneken, B.; Manniesing, R.
2012-02-01
A method is proposed to segment anatomical regions of the brain from 4D computed tomography (CT) patient data. The method consists of a three-step voxel classification scheme, each step focusing on structures that are increasingly difficult to segment. The first step classifies air and bone, the second step classifies vessels and the third step classifies white matter, gray matter and cerebrospinal fluid. The time-averaged intensity value and the temporal intensity change value were used as features. In each step, a k-Nearest-Neighbor classifier was used to classify the voxels. Training data were obtained by placing regions of interest in reconstructed 3D image data. The method has been applied to ten 4D CT cerebral patient datasets. A leave-one-out experiment showed consistent and accurate segmentation results.
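The two per-voxel features can be computed directly from a 4D volume; taking "temporal intensity change" as the max-minus-min range over time is an assumption here (the paper may define it differently), and the toy volume is invented:

```python
import numpy as np

# Toy 4D CT volume with axes (time, z, y, x).
rng = np.random.default_rng(3)
ct4d = rng.normal(loc=50.0, scale=2.0, size=(8, 4, 4, 4))

# Feature 1: time-averaged intensity per voxel.
mean_intensity = ct4d.mean(axis=0)

# Feature 2: temporal intensity change per voxel
# (assumed here to be the max - min range over time).
temporal_change = ct4d.max(axis=0) - ct4d.min(axis=0)

# One row per voxel, one column per feature, ready for a k-NN classifier.
features = np.stack([mean_intensity.ravel(), temporal_change.ravel()], axis=1)
print(features.shape)
```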
NASA Technical Reports Server (NTRS)
Myers, V. I. (Principal Investigator); Dalsted, K. J.; Best, R. G.; Smith, J. R.; Eidenshink, J. C.; Schmer, F. A.; Andrawis, A. S.; Rahn, P. H.
1977-01-01
The author has identified the following significant results. Digital analysis of LANDSAT CCTs indicated that two discrete spectral background zones occurred among the five soil zones. K-CLASS classification of corn revealed that accuracy increased when two background zones were used, compared to the classification of corn stratified by five soil zones. Selectively varying film type, developer, and development time produces higher contrast in reprocessed imagery. Interpretation of rangeland and cropped land data from 1968 aerial photography and 1976 LANDSAT imagery indicated losses in rangeland habitat. Thermal imagery was useful in locating potential sources of sub-surface water and geothermal energy, estimating evapotranspiration, and inventorying the land.
Segmentation and object-oriented processing of single-season and multi-season Landsat-7 ETM+ data was utilized for the classification of wetlands in a 1560 km2 study area of north central Florida. This segmentation and object-oriented classification outperformed the traditional ...
Obtaining Accurate Probabilities Using Classifier Calibration
ERIC Educational Resources Information Center
Pakdaman Naeini, Mahdi
2016-01-01
Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
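One of the simplest post-processing methods of this kind is histogram binning, sketched below on invented scores; the dissertation's own contributions (e.g. Bayesian binning) can be seen as refinements of this basic recipe:

```python
import numpy as np

def histogram_binning(scores, labels, n_bins=5):
    """Post-process raw classifier scores into calibrated
    probabilities: each bin's output is the empirical positive
    rate of the training scores that fell into that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rates = np.zeros(n_bins)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    for b in range(n_bins):
        in_bin = idx == b
        # Empty bins fall back to the bin midpoint.
        rates[b] = labels[in_bin].mean() if in_bin.any() else (edges[b] + edges[b + 1]) / 2

    def calibrate(s):
        b = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
        return rates[b]
    return calibrate

# Overconfident toy scores: the "0.9" predictions are right only 2/3 of the time.
scores = np.array([0.9, 0.9, 0.9, 0.1, 0.1, 0.1])
labels = np.array([1, 1, 0, 0, 0, 1])
cal = histogram_binning(scores, labels, n_bins=2)
print(cal(np.array([0.9, 0.1])))
```

The calibrated outputs (2/3 and 1/3) match the empirical frequencies, which is exactly the sense in which post-processing yields "more accurate probabilities" than the raw scores.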
A new self-report inventory of dyslexia for students: criterion and construct validity.
Tamboer, Peter; Vorst, Harrie C M
2015-02-01
The validity of a Dutch self-report inventory of dyslexia was ascertained in two samples of students. Six biographical questions, 20 general language statements and 56 specific language statements were based on dyslexia as a multi-dimensional deficit. Dyslexia and non-dyslexia were assessed with two criteria: identification with test results (Sample 1) and classification using biographical information (both samples). Using discriminant analyses, these criteria were predicted with various groups of statements. Altogether, 11 discriminant functions were used to estimate the classification accuracy of the inventory. In Sample 1, 15 statements predicted the test criterion with a classification accuracy of 98%, and 18 statements predicted the biographical criterion with a classification accuracy of 97%. In Sample 2, 16 statements predicted the biographical criterion with a classification accuracy of 94%. Estimations of positive and negative predictive value were 89% and 99%. Items of various discriminant functions were factor analysed to find characteristic difficulties of students with dyslexia, resulting in a five-factor structure in Sample 1 and a four-factor structure in Sample 2. Answer bias was investigated with measures of internal consistency reliability. Fewer than 20 self-report items are sufficient to accurately classify students with and without dyslexia. This supports the usefulness of self-assessment of dyslexia as a valid alternative to diagnostic test batteries. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Taha, Z.; Razman, M. A. M.; Ghani, A. S. Abdul; Majeed, A. P. P. Abdul; Musa, R. M.; Adnan, F. A.; Sallehudin, M. F.; Mukai, Y.
2018-04-01
Fish hunger behaviour is essential in determining a fish feeding routine, particularly for fish farmers. An inaccurate feeding routine (under-feeding or over-feeding) may lead to the death of the fish and consequently reduce the quantity of fish produced. Moreover, excess food that is not consumed by the fish dissolves in the water and reduces water quality by depleting oxygen, which can also lead to fish death or disease. In the present study, a correlation between Barramundi fish-school behaviour and hunger state is established through the hybrid data integration of image processing techniques. The behaviour is clustered with respect to the school size as well as the school density of the fish before feeding, during feeding and after feeding. The clustered fish behaviour is then classified with the k-Nearest Neighbour (k-NN) learning algorithm. Three variations of the algorithm, namely cosine, cubic and weighted, are assessed on their ability to classify the aforementioned fish hunger behaviour. It was found that the weighted k-NN variation provides the best classification, with an accuracy of 86.5%. Therefore, it can be concluded that the proposed integration technique may assist fish farmers in ascertaining the fish feeding routine.
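The weighted variant that performed best can be sketched as distance-weighted voting; the 1/d^2 weighting and the (school-size, school-density) toy features below are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np

def weighted_knn(X_train, y_train, x, k=3):
    """Distance-weighted k-NN: each of the k nearest neighbours
    votes with weight 1/d^2, so closer neighbours count more
    (one of several weighting variants; the study also compared
    cosine and cubic distance variants)."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:
        w = 1.0 / max(d[i] ** 2, 1e-12)   # guard against a zero distance
        votes[int(y_train[i])] = votes.get(int(y_train[i]), 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical (school-size, school-density) descriptors for three
# hunger states: 0 = before, 1 = during, 2 = after feeding.
X = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.1, 4.9], [9.0, 1.0]])
y = np.array([0, 0, 1, 1, 2])
print(weighted_knn(X, y, np.array([1.05, 0.95]), k=3))
```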
Vidyasagar, Mathukumalli
2015-01-01
This article reviews several techniques from machine learning that can be used to study the problem of identifying a small number of features, from among tens of thousands of measured features, that can accurately predict a drug response. Prediction problems are divided into two categories: sparse classification and sparse regression. In classification, the clinical parameter to be predicted is binary, whereas in regression, the parameter is a real number. Well-known methods for both classes of problems are briefly discussed. These include the SVM (support vector machine) for classification and various algorithms such as ridge regression, LASSO (least absolute shrinkage and selection operator), and EN (elastic net) for regression. In addition, several well-established methods that do not directly fall into machine learning theory are also reviewed, including neural networks, PAM (pattern analysis for microarrays), SAM (significance analysis for microarrays), GSEA (gene set enrichment analysis), and k-means clustering. Several references indicative of the application of these methods to cancer biology are discussed.
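Of the regression methods listed, ridge regression is the most compact to illustrate because it has a closed form; the toy data below are invented:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge regression: w = (X^T X + lam * I)^{-1} X^T y.
    The L2 penalty shrinks the coefficients, stabilizing the fit
    when features vastly outnumber samples, as in predicting drug
    response from tens of thousands of measured features."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy data: the response depends on feature 0 only (y = 2 * x0).
X = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w = ridge(X, y, lam=0.1)
print(w)
```

LASSO and elastic net replace or augment the L2 penalty with an L1 term, which drives some coefficients exactly to zero and thereby performs the sparse feature selection the review emphasizes.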
Dias, Luís G; Veloso, Ana C A; Sousa, Mara E B C; Estevinho, Letícia; Machado, Adélio A S C; Peres, António M
2015-11-05
Nowadays, the main honey-producing countries require accurate labeling of honey before commercialization, including floral classification. Traditionally, this classification is made by melissopalynology analysis, an accurate but time-consuming task requiring laborious sample pre-treatment and high-skilled technicians. In this work the potential use of a potentiometric electronic tongue for pollinic assessment is evaluated, using monofloral and polyfloral honeys. The results showed that after splitting honeys according to color (white, amber and dark), the novel methodology enabled quantifying the relative percentage of the main pollens (Castanea sp., Echium sp., Erica sp., Eucaliptus sp., Lavandula sp., Prunus sp., Rubus sp. and Trifolium sp.). Multiple linear regression models were established for each type of pollen, based on the best sensors' sub-sets selected using the simulated annealing algorithm. To minimize the overfitting risk, a repeated K-fold cross-validation procedure was implemented, ensuring that at least 10-20% of the honeys were used for internal validation. With this approach, a minimum average determination coefficient of 0.91 ± 0.15 was obtained. Also, the proposed technique enabled the correct classification of 92% and 100% of monofloral and polyfloral honeys, respectively. The quite satisfactory performance of the novel procedure for quantifying the relative pollen frequency may envisage its applicability for honey labeling and geographical origin identification. Nevertheless, this approach is not a full alternative to the traditional melissopalynologic analysis; it may be seen as a practical complementary tool for preliminary honey floral classification, leaving only problematic cases for pollinic evaluation. Copyright © 2015 Elsevier B.V. All rights reserved.
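The repeated K-fold cross-validation used to limit overfitting can be sketched as an index generator; the sample and fold counts below are illustrative, not the study's:

```python
import numpy as np

def repeated_kfold(n, k, repeats, seed=0):
    """Yield (train, validation) index splits for repeated K-fold
    cross-validation: each repeat reshuffles the samples, and each
    fold holds out roughly 1/k of them for internal validation."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        idx = rng.permutation(n)
        folds = np.array_split(idx, k)
        for i in range(k):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            yield train, val

# 50 honeys, 10 folds (10% held out per split), 3 repeats -> 30 splits.
splits = list(repeated_kfold(n=50, k=10, repeats=3))
print(len(splits), len(splits[0][1]))
```

Averaging a model's score over all 30 splits gives a far less optimistic estimate than a single train/test split, which is why the study reports an average determination coefficient.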
Portable Automatic Text Classification for Adverse Drug Reaction Detection via Multi-corpus Training
Sarker, Abeed; Gonzalez, Graciela
2015-02-01
Automatic detection of adverse drug reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media-where enormous amounts of user posted data is available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing (NLP) approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies. One of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Our feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively. Our research results indicate that using advanced NLP techniques for generating information rich features from text can significantly improve classification accuracies over existing benchmarks. 
Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved. PMID:25451103
Portable Automatic Text Classification for Adverse Drug Reaction Detection via Multi-corpus Training
Gonzalez, Graciela
2014-01-01
Objective Automatic detection of Adverse Drug Reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media, where enormous amounts of user-posted data are available that have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing approaches for generating useful features from text and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user-posted internet data; and (iii) to investigate whether combining training data from distinct corpora can improve automatic classification accuracies. Methods One of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Results Our feature-rich classification approach performs significantly better than previously published approaches, with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units), respectively.
Conclusions Our research results indicate that using advanced NLP techniques for generating information-rich features from text can significantly improve classification accuracies over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. PMID:25451103
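A minimal sketch of the multi-corpus training idea described in the abstract, assuming a simple TF-IDF representation rather than the authors' full feature set; all texts, labels, and names here are invented for illustration:

```python
# Hypothetical sketch (data and labels invented): multi-corpus training
# pools labeled examples from compatible corpora before fitting a
# single ADR text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

corpus_a = ["drug caused severe headache", "the medication works fine"]
labels_a = [1, 0]  # 1 = ADR mention, 0 = no ADR
corpus_b = ["felt dizzy after taking the pill", "no side effects so far"]
labels_b = [1, 0]

# Multi-corpus training: simple concatenation of compatible corpora.
texts = corpus_a + corpus_b
labels = labels_a + labels_b

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["severe headache after taking the pill"])
print(int(pred[0]))
```

In practice the paper's gains come from richer semantic features (sentiment, polarity, topic) layered on top of this pooling step; the pipeline above only shows where corpus concatenation fits.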
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR band-to-band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
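An illustrative sketch (not the MISR/WV-3 production code) of mutual information as a registration metric, with the histogram bin width supplied as a parameter that could be tied to a shot-noise estimate as the abstract suggests; the synthetic "bands" and the bin width of 2.0 are assumptions:

```python
# Mutual information between two image bands; well-registered bands
# share more information than misaligned ones.
import numpy as np

def mutual_information(a, b, bin_width):
    # Bin width is a free parameter; the paper proposes making it
    # proportional to instrument shot noise.
    bins_a = np.arange(a.min(), a.max() + bin_width, bin_width)
    bins_b = np.arange(b.min(), b.max() + bin_width, bin_width)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of band a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of band b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
vnir = rng.normal(100, 10, (64, 64))
swir_aligned = vnir + rng.normal(0, 2, (64, 64))   # well registered
swir_shifted = np.roll(swir_aligned, 5, axis=1)    # 5-pixel misalignment

mi_aligned = mutual_information(vnir, swir_aligned, bin_width=2.0)
mi_shifted = mutual_information(vnir, swir_shifted, bin_width=2.0)
print(mi_aligned > mi_shifted)
```

A registration search would slide one band over the other and keep the offset that maximizes MI; the bin-width choice controls how sharply that maximum stands out.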
Bajoub, Aadil; Medina-Rodríguez, Santiago; Gómez-Romero, María; Ajal, El Amine; Bagur-González, María Gracia; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría
2017-01-15
High Performance Liquid Chromatography (HPLC) with diode array (DAD) and fluorescence (FLD) detection was used to acquire the fingerprints of the phenolic fraction of monovarietal extra-virgin olive oils (extra-VOOs) collected over three consecutive crop seasons (2011/2012-2013/2014). The chromatographic fingerprints of 140 extra-VOO samples, processed from olive fruits of seven olive varieties, were recorded and statistically treated for varietal authentication purposes. First, the DAD and FLD chromatographic-fingerprint datasets were processed separately and, subsequently, were joined using "low-level" and "mid-level" data fusion methods. After a preliminary examination by principal component analysis (PCA), three supervised pattern recognition techniques, Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogies (SIMCA) and K-Nearest Neighbors (k-NN), were applied to the four chromatographic-fingerprinting matrices. The classification models built were very sensitive and selective, showing notably good recognition and prediction abilities. The combination "chromatographic dataset + chemometric technique" allowing the most accurate classification of each monovarietal extra-VOO was highlighted. Copyright © 2016 Elsevier Ltd. All rights reserved.
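A minimal sketch of "low-level" data fusion as described above: two fingerprint matrices are concatenated feature-wise before classification. The matrices here are synthetic stand-ins for the DAD and FLD chromatograms, and k-NN is one of the three classifiers the study applied:

```python
# Low-level fusion: concatenate per-sample feature blocks from two
# detectors, then classify the fused matrix.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 30
# Synthetic stand-ins: two "varieties", two detector channels.
dad = np.vstack([rng.normal(0, 1, (n, 8)), rng.normal(3, 1, (n, 8))])
fld = np.vstack([rng.normal(0, 1, (n, 5)), rng.normal(3, 1, (n, 5))])
y = np.array([0] * n + [1] * n)

fused = np.hstack([dad, fld])   # low-level fusion: feature concatenation
knn = KNeighborsClassifier(n_neighbors=3).fit(fused, y)
print(knn.score(fused, y))
```

"Mid-level" fusion would instead concatenate compressed representations (e.g., PCA scores) of each block rather than the raw variables.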
NASA Astrophysics Data System (ADS)
Grazian, A.; Salimbeni, S.; Pentericci, L.; Fontana, A.; Nonino, M.; Vanzella, E.; Cristiani, S.; de Santis, C.; Gallozzi, S.; Giallongo, E.; Santini, P.
2007-04-01
Context: The classification scheme for high redshift galaxies is complex at the present time, with simple colour-selection criteria (i.e. EROs, IEROs, LBGs, DRGs, BzKs), resulting in ill-defined properties for the stellar mass and star formation rate of these distant galaxies. Aims: The goal of this work is to investigate the properties of different classes of high-z galaxies, focusing in particular on the stellar masses of LBGs, DRGs, and BzKs, in order to derive their contribution to the total mass budget of the distant Universe. Methods: We used the GOODS-MUSIC catalog, containing ~3000 Ks-selected (~10 000 z-selected) galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, with spectroscopic or accurate photometric redshifts. We selected samples of BM/BX/LBGs, DRGs, and BzK galaxies to discuss the overlap and the limitations of these criteria, which can be overridden by a selection criterion based on physical parameters. We then measured the stellar masses of these galaxies and computed the stellar mass density (SMD) for the different samples up to redshift ≃4. Results: We show that the BzK-PE criterion is not optimal for selecting early type galaxies at the faint end. On the other hand, BzK-SF is highly contaminated by passively evolving galaxies at red z-Ks colours. We find that LBGs and DRGs contribute almost equally to the global SMD at z≥ 2 and, in general, that star-forming galaxies form a substantial fraction of the universal SMD. Passively evolving galaxies show a strong negative density evolution from redshift 2 to 3, indicating that we are witnessing the epoch of mass assembly of such objects. Finally we have indications that by pushing the selection to deeper magnitudes, the contribution of less massive DRGs could overtake that of LBGs. Deeper surveys, like the HUDF, are required to confirm this suggestion.
Ali, Safdar; Majid, Abdul; Khan, Asifullah
2014-04-01
Development of an accurate and reliable intelligent decision-making method for the construction of cancer diagnosis system is one of the fast growing research areas of health sciences. Such decision-making system can provide adequate information for cancer diagnosis and drug discovery. Descriptors derived from physicochemical properties of protein sequences are very useful for classifying cancerous proteins. Recently, several interesting research studies have been reported on breast cancer classification. To this end, we propose the exploitation of the physicochemical properties of amino acids in protein primary sequences such as hydrophobicity (Hd) and hydrophilicity (Hb) for breast cancer classification. Hd and Hb properties of amino acids, in recent literature, are reported to be quite effective in characterizing the constituent amino acids and are used to study protein folding, interactions, structures, and sequence-order effects. Especially, using these physicochemical properties, we observed that proline, serine, tyrosine, cysteine, arginine, and asparagine amino acids offer high discrimination between cancerous and healthy proteins. In addition, unlike traditional ensemble classification approaches, the proposed 'IDM-PhyChm-Ens' method was developed by combining the decision spaces of a specific classifier trained on different feature spaces. The different feature spaces used were amino acid composition, split amino acid composition, and pseudo amino acid composition. Consequently, we have exploited different feature spaces using Hd and Hb properties of amino acids to develop an accurate method for classification of cancerous protein sequences. We developed ensemble classifiers using diverse learning algorithms such as random forest (RF), support vector machines (SVM), and K-nearest neighbor (KNN) trained on different feature spaces. We observed that ensemble-RF, in the case of cancer classification, performed better than ensemble-SVM and ensemble-KNN. 
Our analysis demonstrates that ensemble-RF, ensemble-SVM and ensemble-KNN are more effective than their individual counterparts. The proposed 'IDM-PhyChm-Ens' method has shown improved performance compared to existing techniques.
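A simplified stand-in for the decision-space combination idea above: one learner type (random forest) is trained on several feature spaces and the predicted class probabilities are averaged. The feature spaces here are random projections, not the amino-acid composition descriptors used in the paper:

```python
# Combine decisions of one classifier type trained on several
# feature spaces by averaging predicted class probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=120, n_features=20, random_state=0)
rng = np.random.default_rng(0)
# Three stand-in feature spaces via random linear projections.
spaces = [X @ rng.normal(size=(20, 8)) for _ in range(3)]

probs = np.zeros((len(y), 2))
for Z in spaces:
    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Z, y)
    probs += rf.predict_proba(Z)
ensemble_pred = probs.argmax(axis=1)
print((ensemble_pred == y).mean())
```

The same loop with SVC or KNeighborsClassifier in place of the forest reproduces the ensemble-SVM and ensemble-KNN variants compared in the study.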
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
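A hedged sketch of plain error-correcting output coding (ECOC), the unified framework the review builds on: scikit-learn's OutputCodeClassifier aggregates binary classifiers via a random code matrix. It implements neither the per-classifier weighting nor the bit-inversion noise model of the two reviewed approaches, only the shared ECOC skeleton:

```python
# ECOC: each class gets a binary code word; one binary classifier is
# trained per code bit, and prediction picks the nearest code word.
from sklearn.datasets import make_classification
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=150, n_classes=3, n_informative=6,
                           random_state=0)
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
ecoc.fit(X, y)
print(ecoc.score(X, y))
```

`code_size=2` means the code matrix has twice as many bits as classes, so six binary SVMs are trained for this three-class problem.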
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia; Hou, Beibei; Zhang, Ningbo
2017-01-01
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun
2004-04-01
This paper extends the classification approaches described in reference [1] in the following ways: (1.) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2.) conducting research experiments using a larger database of organophosphate nerve agents, and (3.) upgrading the architecture to an object-based grid system for evaluating the classification of EP derived SVMs. Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, distributed training time architectures are 50 times faster when compared to standard iterative training time methods.
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k-nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample are then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method, that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
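A minimal sketch of classifier fusion by majority vote, the simpler ensemble baseline the Article contrasts with its non-optimized consensus scheme. Conveniently, scikit-learn ships a wine data set matching the third data set described (three cultivars, 13 variables); the particular member classifiers are arbitrary choices here:

```python
# Majority-vote fusion: each member classifier votes, and the most
# common label becomes the final classification.
from sklearn.datasets import load_wine
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)  # three wine cultivars, 13 variables
fusion = VotingClassifier([
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=5000)),
], voting="hard")
fusion.fit(X, y)
print(fusion.score(X, y))
```

The Article's contribution is to skip the per-member tuning step entirely; this sketch only shows the vote-based fusion stage that both approaches share.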
Distinguishing remobilized ash from erupted volcanic plumes using space-borne multi-angle imaging.
Flower, Verity J B; Kahn, Ralph A
2017-10-28
Volcanic systems are comprised of a complex combination of ongoing eruptive activity and secondary hazards, such as remobilized ash plumes. Similarities in the visual characteristics of remobilized and erupted plumes, as imaged by satellite-based remote sensing, complicate the accurate classification of these events. The stereo imaging capabilities of the Multi-angle Imaging SpectroRadiometer (MISR) were used to determine the altitude and distribution of suspended particles. Remobilized ash shows distinct dispersion, with particles distributed within ~1.5 km of the surface. Particle transport is consistently constrained by local topography, limiting dispersion pathways downwind. The MISR Research Aerosol (RA) retrieval algorithm was used to assess plume particle microphysical properties. Remobilized ash plumes displayed a dominance of large particles with consistent absorption and angularity properties, distinct from emitted plumes. The combination of vertical distribution, topographic control, and particle microphysical properties makes it possible to distinguish remobilized ash flows from eruptive plumes, globally.
Jaki, Thomas; Allacher, Peter; Horling, Frank
2016-09-05
Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually, a multi-tiered approach is employed to test patient samples for ADA activity: samples are first rapidly screened for positives, which are subsequently confirmed in a separate assay. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that for the overall performance of the multi-stage process the method used for confirmation is most important, where a t-test is best when differences are moderate to large. Moreover, we find that, when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
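A toy sketch of the t-test-based confirmatory step discussed above: a competition assay compares replicate signals from a screen-positive sample with and without excess drug, and a significant signal drop confirms the positive. The signal levels and replicate count are invented for illustration:

```python
# Competition-based confirmation via a two-sample t-test: excess drug
# should suppress the signal of a true ADA-positive sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
unspiked = rng.normal(1.0, 0.05, 8)   # screening replicates
spiked = rng.normal(0.6, 0.05, 8)     # replicates with competing drug

t, p = stats.ttest_ind(unspiked, spiked)
print(p < 0.05)  # True: inhibition is significant, sample confirmed
```

The paper's point is that this confirmation step only classifies reliably when the spiked/unspiked difference is sufficiently large relative to the assay noise.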
Generating highly accurate prediction hypotheses through collaborative ensemble learning
NASA Astrophysics Data System (ADS)
Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco
2017-03-01
Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.
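A rough sketch of the plain bagged-boosting starting point, not the paper's collaboration scheme: bagging is applied on top of boosting, so each bag member is itself a small boosted ensemble. Gradient boosting stands in here for the GentleBoost learners used in the study:

```python
# Bagged boosting: variance reduction (bagging) wrapped around
# bias reduction (boosting).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=200, random_state=0)
bagged_boost = BaggingClassifier(
    GradientBoostingClassifier(n_estimators=20, random_state=0),
    n_estimators=5, random_state=0)
bagged_boost.fit(X, y)
print(bagged_boost.score(X, y))
```

The paper's novelty is letting the constituent learners collaborate during or after boosting; the composition above is the baseline that collaboration modifies.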
Experiments on Supervised Learning Algorithms for Text Categorization
NASA Technical Reports Server (NTRS)
Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.
2005-01-01
Modern information society is facing the challenge of handling massive volume of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
Drug-related webpages classification based on multi-modal local decision fusion
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin
2018-03-01
In this paper, multi-modal local decision fusion is used for drug-related webpages classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-level feature of the BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are given by searching the names of drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and seven text labels generates the representation of the instances in webpages. Last, Multi-Instance Learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
PMID:26506347
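A schematic example of the select-then-visualize workflow described above, not the FSS-t-SNE score criterion itself: features are ranked by a stand-in score (here an F-statistic), a subset is kept, and the subset is embedded with t-SNE for 2-D visualization. The iris data set replaces the multi-sensor diesel signals:

```python
# Feature-subset selection followed by t-SNE for 2-D visualization.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)
scores, _ = f_classif(X, y)              # stand-in feature score
subset = X[:, np.argsort(scores)[-2:]]   # keep the 2 best-scoring features
embedding = TSNE(n_components=2, perplexity=20,
                 random_state=0).fit_transform(subset)
print(embedding.shape)  # (150, 2)
```

Dropping irrelevant features before embedding is exactly the step the paper argues for: t-SNE preserves local neighborhoods, so noisy dimensions directly degrade the class separation visible in the 2-D map.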
Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan
2015-01-01
Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from the multiple modalities is treated as a set of different learning tasks and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
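A hand-rolled sketch of the multi-kernel SVM fusion step described in the second component above: one kernel is built per modality and a weighted sum of kernels feeds an SVM with a precomputed kernel. The synthetic "MRI" and "PET" features and the weight 0.6 are assumptions, not values from the paper:

```python
# Multi-kernel SVM: per-modality RBF kernels combined by a weight beta.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 40
# Synthetic stand-ins for two modalities, two diagnostic groups.
mri = np.vstack([rng.normal(0, 1, (n, 10)), rng.normal(2, 1, (n, 10))])
pet = np.vstack([rng.normal(0, 1, (n, 6)), rng.normal(2, 1, (n, 6))])
y = np.array([0] * n + [1] * n)

beta = 0.6  # modality weight (a tunable assumption)
K = beta * rbf_kernel(mri) + (1 - beta) * rbf_kernel(pet)
svm = SVC(kernel="precomputed").fit(K, y)
print(svm.score(K, y))
```

In the full method the kernels are built on the label-aligned, sparsity-selected features, and the modality weight would be tuned by cross-validation rather than fixed.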
Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems
NASA Technical Reports Server (NTRS)
Hearn, Tristan A.
2015-01-01
This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
Application of DBNs for concerned internet information detecting
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Gao, Song
2017-03-01
In recent years, deep learning has achieved great success in many fields, ranging from voice recognition and image classification to computer vision. In this study we apply deep belief networks (DBNs) to the problem of detecting concerned internet information in Chinese, since there are inherent differences between English and Chinese. Contrastive divergence (CD) is employed in the DBNs to learn a multi-layer generative model from numerous unlabeled data. The features obtained by this model are used to initialize a feed-forward neural network, which can be fine-tuned with backpropagation. Experiment results indicate that the model and training method we propose can be used to detect the concerned internet information effectively and accurately.
NASA Astrophysics Data System (ADS)
Marais Sicre, Claire; Baup, Frederic; Fieuzal, Remy
2015-04-01
In the context of climate change (with consequences for temperature and precipitation patterns), those involved in agricultural management must combine sufficient productivity (in response to growing food demand) with durability of resources (limiting waste of water and fertilizer and avoiding environmental damage). To this end, a detailed knowledge of land use will improve the management of food and water while preserving ecosystems. Among the wide range of available monitoring tools, numerous studies have demonstrated the interest of satellite images for agricultural mapping. Recently, the launch of several radar and optical sensors offers new perspectives for multi-wavelength crop monitoring (Terrasar-X, Radarsat-2, Sentinel-1, Landsat-8…), allowing surface surveys whatever the cloud conditions. Previous studies have demonstrated the interest of multi-temporal approaches for crop classification, which require several images for suitable classification results. Unfortunately, these approaches are limited (due to the satellite orbit cycle) and require waiting several days, weeks or months before offering an accurate land use map. The objective of this study is to compare the accuracy of object-oriented classification (a random forest algorithm combined with a vector layer coming from segmentation) for mapping winter crops (barley, rapeseed, grasslands and wheat) and soil states (bare soils with different surface roughness) using quasi-synchronous images. The satellite data are composed of multi-frequency and multi-polarization (HH, VV, HV and VH) images acquired near the 14th of April, 2010, over a study area (90 km²) located close to Toulouse in France. This is a region of alluvial plains and hills, mostly mixed farming, governed by a temperate climate. Remote sensing images are provided by Formosat-2 (04/18), Radarsat-2 (C-band, 04/15), Terrasar-X (X-band, 04/14) and ALOS (L-band, 04/14). 
Ground data were collected over 214 plots during the MCM'10 experiment conducted by the CESBIO laboratory in 2010. Classification performance was evaluated in two cases: using a single frequency in the optical or microwave domain, or using a combination of several frequencies (mixing optical and microwave). In the first case, the best results were obtained at optical wavelengths, with a mean overall accuracy (OA) of 84%, followed by Terrasar-X (HH) and Radarsat-2 (HV or VH), which offered overall accuracies of 77% and 73%, respectively. Concerning the vegetation, wheat was well classified whatever the wavelength used (OA > 93%). Barley was more difficult to classify and could be confused with wheat or grassland; the best results were obtained using the green, red, blue, X-band or L-band wavelengths, offering an OA above 45%. Radar images were clearly well adapted to identifying rapeseed (OA > 83%), especially at C-band (VV, HH and HV) and X-band (HH). The accuracy of grassland classification never exceeded 79%, and results were stable across frequencies (except at L-band: 51%). The three soil roughness states were quite well classified whatever the wavelength, and performance decreased as soil roughness increased. The combined use of multiple frequencies increased classification performance: overall accuracy reached 83% and 96% for C-band full-polarization and Formosat-2 multispectral approaches, respectively.
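The overall accuracy (OA) figures quoted above follow directly from a confusion matrix. As a minimal illustration (with hypothetical counts, not the MCM'10 data):

```python
def overall_accuracy(confusion):
    """Overall accuracy = trace / total of a confusion matrix
    (rows: ground truth, columns: predicted class)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

def producer_accuracy(confusion, i):
    """Per-class (producer's) accuracy: correct predictions over the
    number of ground-truth samples of class i."""
    return confusion[i][i] / sum(confusion[i])

# Hypothetical 3-class confusion matrix (e.g. wheat, barley, grassland)
cm = [[90,  5,  5],
      [10, 70, 20],
      [ 5, 15, 80]]
print(round(overall_accuracy(cm), 2))  # 0.8
```

Per-class accuracies explain results such as wheat being well classified while barley is confused with other classes: `producer_accuracy(cm, 0)` is 0.9 here, while class 1 reaches only 0.7.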
Building a Multi-Discipline Digital Library Through Extending the Dienst Protocol
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Maly, Kurt; Shen, Stewart N. T.
1997-01-01
The purpose of this project is to establish multi-discipline capability for a unified, canonical digital library service for scientific and technical information (STI). This is accomplished by extending the Dienst protocol to be aware of the subject classification of a server's holdings. We propose a hierarchical, general, and extensible subject classification that can encapsulate existing classification systems.
NASA Astrophysics Data System (ADS)
Gao, Lin; Cheng, Wei; Zhang, Jinhua; Wang, Jue
2016-08-01
Brain-computer interface (BCI) systems provide an alternative communication and control approach for people with limited motor function. The feature extraction and classification approach should therefore differentiate the relatively unusual state of motion intention from the common resting state. In this paper, we sought a novel approach for multi-class classification in BCI applications. We collected electroencephalographic (EEG) signals registered by electrodes placed over the scalp during left hand motor imagery, right hand motor imagery, and resting state for ten healthy human subjects. We proposed using the Kolmogorov complexity (Kc) for feature extraction and a multi-class AdaBoost classifier with an extreme learning machine as the base classifier, in order to classify the three-class EEG samples. An average classification accuracy of 79.5% was obtained for the ten subjects, which greatly outperformed commonly used approaches. We conclude that the proposed method can improve classification performance for multi-class motor imagery tasks. It could be applied in further studies to generate control commands for initiating the movement of a robotic exoskeleton or orthosis, ultimately facilitating the rehabilitation of disabled people.
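Kolmogorov complexity is uncomputable in general, so EEG studies typically approximate Kc with a Lempel-Ziv phrase count on a binarized signal. A minimal sketch of that idea, using an LZ78-style parse (not necessarily the authors' exact estimator):

```python
def binarize(signal):
    """Threshold a signal at its mean to obtain a binary string."""
    mean = sum(signal) / len(signal)
    return "".join("1" if v > mean else "0" for v in signal)

def lz_phrase_count(s):
    """LZ78-style parse: repeatedly take the shortest prefix of the
    remaining string not yet in the dictionary. Regular sequences yield
    fewer phrases than irregular ones, so the count tracks complexity."""
    seen, count, w = set(), 0, ""
    for ch in s:
        w += ch
        if w not in seen:
            seen.add(w)
            count += 1
            w = ""
    if w:
        count += 1  # trailing phrase already present in the dictionary
    return count

print(lz_phrase_count("0000000000000000"))  # 6  (highly regular)
print(lz_phrase_count("0110100110010110"))  # 8  (more irregular)
```

In a Kc-based feature pipeline, each EEG channel would be binarized and its phrase count (often normalized by n / log2 n) used as one feature per channel.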
NASA Technical Reports Server (NTRS)
Mehta, N. C.
1984-01-01
The utility of radar scatterometers for discrimination and characterization of natural vegetation was investigated. Backscatter measurements were acquired with airborne multi-frequency, multi-polarization, multi-angle radar scatterometers over a test site in a southern temperate forest. Separability between ground cover classes was studied using a two-class separability measure. Very good separability is achieved between most classes. Longer wavelength is useful in separating trees from non-tree classes, while shorter wavelength and cross polarization are helpful for discrimination among tree classes. Using the maximum likelihood classifier, 50% overall classification accuracy is achieved using a single, short-wavelength scatterometer channel. Addition of multiple incidence angles and another radar band improves classification accuracy by 20% and 50%, respectively, over the single channel accuracy. Incorporation of a third radar band seems redundant for vegetation classification. Vertical transmit polarization is critically important for all classes.
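The maximum likelihood classifier referred to above can be sketched under a simplifying diagonal-covariance Gaussian assumption; the backscatter values (dB) below are hypothetical, not the study's measurements:

```python
import math

def fit_gaussian(samples):
    """Per-channel mean and variance for one class (diagonal covariance)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(x[j] for x in samples) / n for j in range(d)]
    var = [sum((x[j] - mean[j]) ** 2 for x in samples) / n for j in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log of the Gaussian density of x under the class model."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
               for xj, m, v in zip(x, mean, var))

def classify(x, models):
    """Maximum-likelihood decision over per-class Gaussian models."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

# Hypothetical backscatter (dB) on two radar channels, two cover classes
trees = [[-8.1, -12.0], [-7.9, -11.5], [-8.3, -12.2]]
grass = [[-14.0, -18.1], [-13.6, -17.8], [-14.2, -18.4]]
models = {"trees": fit_gaussian(trees), "grass": fit_gaussian(grass)}
print(classify([-8.0, -11.9], models))  # trees
```

Adding channels (more incidence angles or bands) simply lengthens the feature vector, which is how the multi-channel accuracy gains reported above arise.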
United States national land cover database development: 1992-2001 and beyond
Yang, L.
2008-01-01
An accurate, up-to-date and spatially explicit national land cover database is required for monitoring the status and trends of the nation's terrestrial ecosystems, and for managing and conserving land resources at the national scale. Given the challenges and resources required to develop such a database, innovative and scientifically sound planning must be in place and a partnership formed among users from government agencies, research institutes and the private sector. In this paper, we summarize major scientific and technical issues regarding the development of the NLCD 1992 and 2001. Experiences and lessons learned from the project are documented with regard to project design, technical approaches, accuracy assessment strategy, and project implementation. Future improvements in developing the next-generation NLCD beyond 2001 are suggested, including: 1) enhanced satellite data preprocessing to correct atmospheric and adjacency effects and to normalize topography; 2) improved classification accuracy through comprehensive and consistent training data and new algorithm development; 3) a multi-resolution and multi-temporal database targeting major land cover changes and land cover database updates; 4) enriched database contents, including additional biophysical parameters and/or more detailed land cover classes obtained by synergizing multi-sensor, multi-temporal, and multi-spectral satellite data with ancillary data; and 5) transformation of the NLCD project into a national land cover monitoring program. © 2008 IEEE.
Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.
Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane
2016-12-01
In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. We used the EWMA scheme to identify features that correspond with a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. A benefit of using a subset of the original data to design the classification models is that it minimizes training time and simplifies the models. Based on the features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We applied this strategy to the publicly available fall detection databases from the University of Rzeszów. The results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in fall situations and its capability for classification of detected falls. A comparison of the classification results of the EWMA-based SVM classifier with those achieved by three commonly used machine learning classifiers (neural network, K-nearest neighbor and naïve Bayes) showed our model to be superior.
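An EWMA monitoring scheme of the kind used in the detection phase can be sketched as follows; the smoothing constant, control limit and signal are illustrative choices, not the paper's parameters:

```python
def ewma_alarms(x, lam=0.25, L=3.0, mu0=0.0, sigma=1.0):
    """Exponentially weighted moving average chart: return the indices at
    which the EWMA statistic leaves the +/- L-sigma control limits around
    the in-control mean mu0."""
    z, alarms = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        # time-varying standard deviation of the EWMA statistic
        limit = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        if abs(z - mu0) > limit:
            alarms.append(t - 1)
    return alarms

# A spike in acceleration magnitude mimicking a potential fall event
signal = [0.1, -0.2, 0.05, 0.0, 5.0, 5.5, 0.1]
print(ewma_alarms(signal))  # [4, 5, 6]
```

Only the samples around flagged indices would then be passed to the SVM stage, which is what keeps training small and the models simple.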
Iddamalgoda, Lahiru; Das, Partha S; Aponso, Achala; Sundararajan, Vijayaraghava S; Suravajhala, Prashanth; Valadi, Jayaraman K
2016-01-01
Data mining and pattern recognition methods reveal interesting findings in genetic studies, especially on how the genetic makeup is associated with inherited diseases. Although researchers have proposed various data mining models for biomedical approaches, accurately prioritizing the single nucleotide polymorphisms (SNPs) associated with disease remains a challenge. In this commentary, we review state-of-the-art data mining and pattern recognition models for identifying inherited diseases and deliberate on the need for binary classification- and scoring-based prioritization methods in determining causal variants. While we discuss the pros and cons associated with these known methods, we argue that gene prioritization methods and protein-protein interaction (PPI) methods, in conjunction with K-nearest neighbors, could be used to accurately categorize the genetic factors in disease causation.
Day 1 for the Integrated Multi-Satellite Retrievals for GPM (IMERG) Data Sets
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K. L.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.
2014-12-01
The Integrated Multi-satellitE Retrievals for GPM (IMERG) is designed to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG was developed to use GPM Core Observatory data as a reference for the international constellation of satellites of opportunity that constitute the GPM virtual constellation. Computationally, IMERG is a unified U.S. algorithm drawing on strengths of the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks with a Cloud Classification System (PERSIANN-CCS). We review the IMERG design, development, testing, and current status. IMERG provides 0.1°×0.1° half-hourly data, and will be run at multiple latencies, providing successively more accurate estimates: 4 hours, 8 hours, and 2 months after observation time. In Day 1 the spatial extent is 60°N-S, for the period March 2014 to the present. In subsequent reprocessing the data will extend to fully global coverage, for the period 1998 to the present. Both the input data set retrievals and the IMERG system itself are substantially different from those used in previous U.S. products. The input passive microwave data are all being produced with GPROF2014, which is substantially upgraded compared to previous versions; for the first time, this includes microwave sounders. Accordingly, there is a strong need to carefully check the initial test data sets for performance. IMERG output will be illustrated using pre-operational test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates, and the precipitation type. Finally, we will summarize the expected release of various output products, and the subsequent reprocessing sequence.
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, "pepper and salt" noise appears and classification results suffer when pixelwise classification algorithms, which ignore the spatial relationship among pixels, are applied to high-resolution satellite images. To classify multi-temporal high-resolution images under limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. First, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Second, features extracted from each superpixel are assembled into a vector. Third, a majority-voting manifold alignment method designed for the high-resolution setting maps the vector data into an alignment space. Finally, all data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
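The majority-voting step over superpixels, which suppresses pixel-level "pepper and salt" noise, can be sketched as follows (hypothetical labels and segment ids, not the GF1/GF2 data):

```python
from collections import Counter

def superpixel_majority(pixel_labels, segments):
    """Assign each superpixel (segment id) the majority label of its
    member pixels, suppressing isolated pixel-level misclassifications."""
    groups = {}
    for label, seg in zip(pixel_labels, segments):
        groups.setdefault(seg, []).append(label)
    return {seg: Counter(lbls).most_common(1)[0][0]
            for seg, lbls in groups.items()}

labels   = ["water", "water", "urban", "urban", "urban", "water"]
segments = [0,       0,       0,       1,       1,       1]
print(superpixel_majority(labels, segments))  # {0: 'water', 1: 'urban'}
```

The isolated "urban" pixel in segment 0 and "water" pixel in segment 1 are voted away, which is the object-based analogue of smoothing a pixelwise map.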
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four common single kernel functions (Gaussian, polynomial, sigmoid, and wavelet kernels) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision but also in efficiency, for gas classification.
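The composite kernel at the heart of QWMK-ELM is a weighted sum of base kernels. A minimal sketch with two of the four base kernel families; the weights and kernel parameters below are fixed placeholders, whereas the paper optimizes them jointly with QPSO:

```python
import math

def gaussian_kernel(x, y, gamma=0.5):
    """RBF base kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly_kernel(x, y, degree=2, c=1.0):
    """Polynomial base kernel."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def composite_kernel(x, y, weights=(0.7, 0.3)):
    """Weighted combination of base kernels; a convex combination of
    valid kernels is itself a valid kernel."""
    w1, w2 = weights
    return w1 * gaussian_kernel(x, y) + w2 * poly_kernel(x, y)

print(composite_kernel([1.0, 0.0], [1.0, 0.0]))  # 0.7*1 + 0.3*4 = 1.9
```

The resulting kernel matrix is then plugged into the closed-form KELM solution in place of a single-kernel Gram matrix.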
Computer-aided diagnostic approach of dermoscopy images acquiring relevant features
NASA Astrophysics Data System (ADS)
Castillejos-Fernández, H.; Franco-Arcega, A.; López-Ortega, O.
2016-09-01
In skin cancer detection, automated analysis of the borders, colors, and structures of a lesion relies upon an accurate segmentation process, which is an important first step in any Computer-Aided Diagnosis (CAD) system. However, irregular and diffuse lesion borders, low contrast, image artifacts and the variety of colors within the region of interest make the problem difficult. In this paper, we propose an efficient approach to automatic classification which considers specific lesion features. First, for the selection of the lesion skin we employ the segmentation algorithm W-FCM.1 Then, in the feature extraction stage we consider several aspects: the area of the lesion, which is calculated by correlating the axes, and the value of asymmetry along both axes. For color analysis we employ an ensemble of clusterers including K-Means, Fuzzy K-Means and Kohonen maps, all of which estimate the presence of one or more of the colors defined in the ABCD rule and the values for each of the segmented colors. Another aspect considered is the type of structures that appear in the lesion; these are defined using the well-known GLCM method. During the classification stage we compare several methods to decide whether the lesion is benign or malignant. An important contribution of the current approach to the segmentation-classification problem resides in the use of information from all color channels together, as well as the measure of each color in the lesion and the axes correlation. The segmentation and classification have been evaluated using sensitivity, specificity, accuracy and the AUC metric over a set of dermoscopy images from the ISDIS data set.
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline in many pattern classification problems. It measures the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, no study has examined the classification performance of k-NN under different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function affects k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions (Euclidean, cosine, Chi-square, and Minkowski) are used in k-NN classification individually. The experimental results show that the Chi-square distance function is the best choice for all three types of datasets, whereas the cosine and Euclidean (and Minkowski) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier: for medical domain datasets comprising categorical, numerical, and mixed types of data, k-NN based on the Chi-square distance function performs best.
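Swapping the distance function in k-NN is a one-parameter change. A minimal sketch with the Chi-square distance studied above (the data are hypothetical, and Chi-square assumes non-negative features such as histograms or counts):

```python
from collections import Counter

def chi_square(x, y):
    """Chi-square distance; assumes non-negative features."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(x, y) if a + b > 0)

def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def knn_predict(query, data, labels, k=3, dist=chi_square):
    """k-NN with a pluggable distance function: rank training points by
    distance to the query and take a majority vote among the k nearest."""
    ranked = sorted(range(len(data)), key=lambda i: dist(query, data[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

train = [[1, 0, 2], [2, 1, 3], [8, 9, 7], [9, 8, 9]]
y = ["low", "low", "high", "high"]
print(knn_predict([2, 1, 2], train, y, k=3))  # low
```

Passing `dist=euclidean` (or cosine, Minkowski, etc.) reruns the same classifier under a different metric, which is exactly the comparison the paper performs across medical datasets.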
Objected-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithm improvements alone. To this end, this paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices. Second, the Lidar data are used to generate an nDSM (Normalized DSM, Normalized Digital Surface Model), providing elevation information. Finally, the image feature knowledge, spectral indices and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, particularly for building classification.
The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) an extended set of tissue definitions, and (4) multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and offers a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R
2004-11-21
Quantitative structure-activity relationship (QSAR) analysis is a main cornerstone of modern informatic disciplines. Predictive computational models, based on QSAR technology, of peptide-major histocompatibility complex (MHC) binding affinity have now become a vital component of modern day computational immunovaccinology. Historically, such approaches have been built around semi-qualitative, classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online ( http://www.jenner.ac.uk/MHCPred).
NASA Astrophysics Data System (ADS)
Zhuang, Chao; Zhou, Zhifang; Illman, Walter A.; Guo, Qiaona; Wang, Jinguo
2017-09-01
The classical aquitard-drainage model COMPAC has been modified to simulate the compaction process of a heterogeneous aquitard consisting of multiple sub-units (Multi-COMPAC). By coupling Multi-COMPAC with the parameter estimation code PEST++, the vertical hydraulic conductivity (K_v) and the elastic (S_ske) and inelastic (S_skp) skeletal specific-storage values of each sub-unit can be estimated using observed long-term multi-extensometer and groundwater level data. The approach was first tested on a synthetic case with known parameters, which showed that the three parameters of each sub-unit could be estimated accurately. Next, the methodology was applied to a field site located in Changzhou city, China. Based on detailed stratigraphic information and extensometer data, the aquitard of interest was subdivided into three sub-units. The parameters K_v, S_ske and S_skp of each sub-unit were estimated simultaneously and then compared with laboratory results and with bulk values and geologic data from previous studies, demonstrating the reliability of the parameter estimates. Estimated S_skp values were of the order of 10^-4 m^-1, while K_v ranged over 10^-10 to 10^-8 m/s, suggesting moderately high heterogeneity of the aquitard. However, the elastic deformation of the third sub-unit, consisting of soft plastic silty clay, is masked by delayed drainage, and the inverse procedure leads to large uncertainty in the S_ske estimate for this sub-unit.
Auto-simultaneous laser treatment and Ohshiro's classification of laser treatment
NASA Astrophysics Data System (ADS)
Ohshiro, Toshio
2005-07-01
When the laser was first applied in medicine and surgery in the late 1960s and early 1970s, early adopters reported better wound healing and less postoperative pain with laser procedures than with the same procedures performed with the cold scalpel or with electrothermy, and multiple surgical effects such as incision, vaporization and hemocoagulation could be achieved with the same laser beam. There was thus an added beneficial component associated only with laser surgery. This was first recognized as the '?-effect', was then classified by the author as simultaneous laser therapy, but is now more accurately classified by the author as part of the auto-simultaneous aspect of laser treatment. Indeed, with the dramatic increase in the applications of the laser in surgery and medicine over the last two decades, there has been a parallel increase in the need for a standardized classification of laser treatment. Some classifications have been machine-based, and thus inaccurate, because at appropriate parameters a 'low-power laser' can produce a surgical effect and a 'high-power laser' a therapeutic one. A more accurate classification based on the tissue reaction, developed by the author, is presented here. In addition, the author has devised a graphical representation of laser surgical and therapeutic beams whereby the laser type, parameters, penetration depth, and tissue reaction can all be shown in a single illustration, which the author has termed the 'Laser Apple', due to the typical pattern generated when a laser beam is incident on tissue. Laser/tissue reactions fall into three broad groups. If the photoreaction in the tissue is irreversible, it is classified as high reactive-level laser treatment (HLLT). If some irreversible damage occurs together with reversible photodamage, as in tissue welding, the author refers to this as mid reactive-level laser treatment (MLLT).
If the level of reaction in the target tissue is lower than the cells' survival threshold, then this is low reactive-level laser therapy (LLLT). All three of these classifications can occur simultaneously in the one target, and fall under the umbrella of laser treatment (LT). LT is further subdivided into three main types: mono-type LT (Mo-LT, treatment with a single laser system); multi-type LT (Mu-LT, treatment with multiple laser systems); and concomitant LT (Cc-LT, laser treatment in combination), each of which is further subdivided by tissue reaction to give an accurate, treatment-based categorization of laser treatment. When this effect-based classification is combined with and illustrated by the appropriate Laser Apple pattern, an accurate and simple method of classifying laser/tissue reactions by the reaction, rather than by the laser used to produce the reaction, is achieved. Examples will be given to illustrate the author's new approach to this important concept.
Men, Hong; Shi, Yan; Fu, Songlin; Jiao, Yanan; Qiao, Yu; Liu, Jingjing
2017-01-01
Multi-sensor data fusion can provide more comprehensive and more accurate analysis results. However, it also brings redundant information, so finding a feature-mining method for intuitive and efficient analysis is an important issue. This paper demonstrates a feature-mining method based on variable accumulation to find the best expression form and the variables whose behavior affects beer flavor. First, an e-tongue and an e-nose were used to gather the taste and olfactory information of beer, respectively. Second, principal component analysis (PCA), genetic algorithm-partial least squares (GA-PLS), and variable importance in projection (VIP) scores were applied to select feature variables from the original fusion set. Finally, classification models based on the support vector machine (SVM), random forests (RF), and extreme learning machine (ELM) were established to evaluate the efficiency of the feature-mining method. The results show that the feature-mining method based on variable accumulation obtains the main features affecting beer flavor and yields the best classification performance for the SVM, RF, and ELM models, with 96.67%, 94.44%, and 98.33% prediction accuracy, respectively. PMID:28753917
Srivastava, Saurabh Kumar; Singh, Sandeep Kumar; Suri, Jasjit S
2018-04-13
A machine learning (ML)-based text classification system has several classifiers. The performance evaluation (PE) of such a system is typically driven by the training data size and the partition protocols used, and such systems achieve low accuracy because they lack the ability to model the input text data in terms of its noise characteristics. This research study proposes the concept of a misrepresentation ratio (MRR) on input healthcare text data and models the PE criteria for validating the hypothesis. Further, this system provides a platform to amalgamate several attributes of the ML system: data size, classifier type, partitioning protocol and percentage MRR. Our comprehensive data analysis covered five text data sets (TwitterA, WebKB4, Disease, Reuters (R8), and SMS); five classifiers (support vector machine with linear kernel (SVM-L), MLP-based neural network, AdaBoost, stochastic gradient descent and decision tree); and five training protocols (K2, K4, K5, K10 and JK). In decreasing order of MRR, our ML system demonstrates mean classification accuracies of 70.13 ± 0.15%, 87.34 ± 0.06%, 93.73 ± 0.03%, 94.45 ± 0.03% and 97.83 ± 0.01%, respectively, over all classifiers and protocols. The corresponding AUC is 0.98 for the SMS data using the Multi-Layer Perceptron (MLP)-based neural network. Among all the classifiers, the best accuracy of 91.84 ± 0.04% is achieved by the MLP-based neural network, which is 6% better than previously published results. We further observed that as MRR decreases, system robustness increases, as validated by the standard deviations. The overall text system accuracy over all data types, classifiers and protocols is 89%, showing the entire ML system to be novel, robust and unique. The system was also tested for stability and reliability.
Properties of DRGs, LBGs, and BzK Galaxies in the GOODS South Field
NASA Astrophysics Data System (ADS)
Grazian, A.; Salimbeni, S.; Pentericci, L.; Fontana, A.; Santini, P.; Giallongo, E.; de Santis, C.; Gallozzi, S.; Nonino, M.; Cristiani, S.; Vanzella, E.
2007-12-01
We use the GOODS-MUSIC catalog with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, and spectroscopic or accurate photometric redshifts to select samples of BM/BX/LBGs, DRGs, and BzK galaxies. We discuss the overlap and the limitations of these selection criteria, which can be overcome with a criterion based on physical parameters (age and star formation timescale). We show that the BzK-PE criterion is not optimal for selecting early type galaxies at the faint end. We also find that LBGs and DRGs contribute almost equally to the global Stellar Mass Density (SMD) at z≥ 2 and in general that star forming galaxies form a substantial fraction of the universal SMD.
Fast Query-Optimized Kernel-Machine Classification
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; DeCoste, Dennis
2004-01-01
A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the number of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than ANNs, decision trees, and other popular machine learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only a small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs.
Before query time, there are gathered statistics about how misleading the output of the k-NN model can be, relative to the outputs of the exact KM for a representative set of examples, for each possible k from 1 to the total number of SVs. From these statistics, there are derived upper and lower thresholds for each step k. These thresholds identify output levels for which the particular variant of the k-NN model already leans so strongly positively or negatively that a reversal in sign is unlikely, given the weaker SV neighbors still remaining. At query time, the partial output of each query is incrementally updated, stopping as soon as it exceeds the predetermined statistical thresholds of the current step. For an easy query, stopping can occur as early as step k = 1. For more difficult queries, stopping might not occur until nearly all SVs are touched. A key empirical observation is that this approach can tolerate very approximate nearest-neighbor orderings. In experiments, SVs and queries were projected to a subspace comprising the top few principal- component dimensions and neighbor orderings were computed in that subspace. This approach ensured that the overhead of the nearest-neighbor computations was insignificant, relative to that of the exact KM computation.
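The early-stopping evaluation described above can be sketched as follows; the RBF kernel, the threshold arrays, and the exact neighbor ordering are illustrative assumptions (the paper computes approximate orderings in a low-dimensional PCA subspace):

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def approx_km_output(query, svs, alphas, bias, upper, lower, gamma=0.5):
    """Accumulate the kernel sum over support vectors ordered by proximity
    to the query, stopping once the partial output crosses the precomputed
    threshold for the current step k."""
    # Exact ordering here for clarity; the paper uses an approximate
    # ordering computed in a low-dimensional PCA subspace.
    order = np.argsort(np.sum((svs - query) ** 2, axis=1))
    partial = bias
    k = 0
    for k, i in enumerate(order, start=1):
        partial += alphas[i] * rbf_kernel(query, svs[i], gamma)
        # Stop as soon as the partial sum leans so strongly positive or
        # negative that the remaining weaker SVs are unlikely to flip it.
        if partial >= upper[k - 1] or partial <= lower[k - 1]:
            break
    return np.sign(partial), k
```

For an easy query the loop stops at k = 1; with unreachable thresholds it degenerates to the exact kernel sum over all SVs.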
Bady, Pierre; Sciuscio, Davide; Diserens, Annie-Claire; Bloch, Jocelyne; van den Bent, Martin J; Marosi, Christine; Dietrich, Pierre-Yves; Weller, Michael; Mariani, Luigi; Heppner, Frank L; Mcdonald, David R; Lacombe, Denis; Stupp, Roger; Delorenzi, Mauro; Hegi, Monika E
2012-10-01
The methylation status of the O(6)-methylguanine-DNA methyltransferase (MGMT) gene is an important predictive biomarker for benefit from alkylating agent therapy in glioblastoma. Recent studies in anaplastic glioma suggest a prognostic value for MGMT methylation. Investigation of the pathogenetic and epigenetic features of this intriguingly distinct behavior requires accurate MGMT classification to assess high-throughput molecular databases. Promoter methylation-mediated gene silencing is strongly dependent on the location of the methylated CpGs, complicating classification. Using the HumanMethylation450 (HM-450K) BeadChip interrogating 176 CpGs annotated for the MGMT gene, with 14 located in the promoter, two distinct regions in the CpG island of the promoter were identified with high importance for gene silencing and outcome prediction. A logistic regression model (MGMT-STP27) comprising probes cg12434587 and cg12981137 provided good classification properties and prognostic value (kappa = 0.85; log-rank p < 0.001) using a training set of 63 glioblastomas from homogeneously treated patients, for whom MGMT methylation was previously shown to be predictive for outcome based on classification by methylation-specific PCR. MGMT-STP27 was successfully validated in an independent cohort of chemo-radiotherapy-treated glioblastoma patients (n = 50; kappa = 0.88; outcome, log-rank p < 0.001). A lower prevalence of MGMT methylation among CpG island methylator phenotype (CIMP)-positive tumors was found in glioblastomas from The Cancer Genome Atlas than in low grade and anaplastic glioma cohorts, while in CIMP-negative gliomas MGMT was classified as methylated in approximately 50% regardless of tumor grade. The proposed MGMT-STP27 prediction model allows mining of datasets derived on the HM-450K or HM-27K BeadChip to explore effects of the distinct epigenetic context of MGMT methylation suspected to modulate treatment resistance in different tumor types.
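The shape of such a two-probe logistic model can be illustrated with a minimal sketch; the weights below are placeholders for illustration only, not the published MGMT-STP27 coefficients:

```python
import math

def mgmt_probability(beta1, beta2, w0=-1.0, w1=2.0, w2=2.0):
    """Logistic model of the MGMT-STP27 form: combine the methylation
    levels (beta values) of two CpG probes into a probability of
    'methylated' status. The weights here are placeholders, not the
    published model coefficients."""
    z = w0 + w1 * beta1 + w2 * beta2
    return 1.0 / (1.0 + math.exp(-z))
```

A sample would then be called methylated when the probability exceeds a chosen cut-off (e.g. 0.5).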
NASA Astrophysics Data System (ADS)
Wen, Hongwei; Liu, Yue; Wang, Shengpei; Li, Zuoyong; Zhang, Jishui; Peng, Yun; He, Huiguang
2017-03-01
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder. To date, TS is still misdiagnosed due to its varied presentation and lack of obvious clinical symptoms. Therefore, studies of objective imaging biomarkers are of great importance for early TS diagnosis. Tic generation has been linked to disturbed structural networks, and many efforts have been made recently to investigate brain functional or structural networks using machine learning methods for the purpose of disease diagnosis. However, few such studies have addressed TS, and those that have suffered from several drawbacks. Therefore, we propose a novel classification framework integrating a multi-threshold strategy and a network fusion scheme to address these drawbacks. Here we used diffusion MRI probabilistic tractography to construct the structural networks of 44 TS children and 48 healthy children. We adapted the similarity network fusion algorithm specifically to fuse the multi-threshold structural networks. Graph theoretical analysis was then implemented, and nodal degree, nodal efficiency and nodal betweenness centrality were selected as features. Finally, the support vector machine recursive feature elimination (SVM-RFE) algorithm was used for feature selection, and the optimal features were fed into an SVM to automatically discriminate TS children from controls. We achieved a high accuracy of 89.13% evaluated by nested cross-validation, demonstrating the superior performance of our framework over other comparison methods. The discriminative regions involved in classification were primarily located in the basal ganglia and frontal cortico-cortical networks, all highly related to the pathology of TS. Together, our study may provide potential neuroimaging biomarkers for early-stage TS diagnosis.
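The RFE loop at the heart of SVM-RFE (iteratively fit a linear model, discard the lowest-weight feature) can be sketched as follows; purely for brevity, a ridge fit stands in for the linear SVM that SVM-RFE actually uses as the ranking model:

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the feature with the smallest absolute weight. A ridge fit is an
    illustrative stand-in for the linear SVM used in SVM-RFE."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        Xs = X[:, keep]
        # Ridge regression weights as a surrogate feature-importance score.
        w = np.linalg.solve(Xs.T @ Xs + 1e-3 * np.eye(len(keep)), Xs.T @ y)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep
```

In the study's setting, the rows of X would be subjects and the columns the nodal graph-theoretic features.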
Detection of artificially ripened mango using spectrometric analysis
NASA Astrophysics Data System (ADS)
Mithun, B. S.; Mondal, Milton; Vishwakarma, Harsh; Shinde, Sujit; Kimbahune, Sanjay
2017-05-01
Hyperspectral sensing has been proven to be useful for determining the quality of food in general. It has also been used to distinguish naturally and artificially ripened mangoes by analyzing their spectral signatures. However, the focus has been on improving classification accuracy after performing dimensionality reduction, optimum feature selection and using a suitable learning algorithm on the complete visible and NIR spectrum range data, namely 350 nm to 1050 nm. In this paper we focus on (i) the use of a low-wavelength-resolution, low-cost multispectral sensor to reliably identify artificially ripened mangoes by selectively using spectral information, so that classification accuracy is not sacrificed to the low-resolution spectral data, and (ii) the use of visible-spectrum data, i.e. 390 nm to 700 nm, to accurately discriminate artificially ripened mangoes. Our results show that, on low-resolution spectral data, logistic regression produces an accuracy of 98.83% and significantly outperforms other methods such as classification trees and random forests. This is achieved by analyzing only 36 spectral reflectance data points instead of the complete 216 data points available in the visible and NIR range. Another interesting experimental observation is that we are able to achieve more than 98% classification accuracy by selecting only 15 irradiance values in the visible spectrum. Thus the number of data points that need to be collected using a hyperspectral or multispectral sensor can be reduced by a factor of 24 for classification with a high degree of confidence.
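A simple wavelength-selection step of this kind can be sketched as follows; the Fisher-style separability score is an illustrative criterion, not necessarily the selection procedure used in the paper:

```python
import numpy as np

def select_bands(spectra, labels, n_bands=15):
    """Rank wavelengths by a Fisher-style separability score between the
    artificially ripened (label 1) and naturally ripened (label 0) classes
    and keep the top n_bands (illustrative criterion)."""
    a = spectra[labels == 1]
    b = spectra[labels == 0]
    score = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
    return np.argsort(score)[::-1][:n_bands]
```

The selected band indices would then feed a classifier such as logistic regression in place of the full spectrum.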
An Accurate New Potential Function for Ground-State Xe_2 from UV and Virial Coefficient Data
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Mackie, J. Cameron; Chandrasekhar, Pragna
2011-06-01
Determining accurate analytic pair potentials for rare gas dimers has been a longstanding goal in molecular physics. However, most potential energy functions reported to date fail to optimally represent the available spectroscopic data, in spite of the fact that such data provide constraints of unparalleled precision on the attractive potential energy wells of these species. A recent study of ArXe showed that it is a straightforward matter to combine multi-isotopologue spectroscopic data (in that case, microwave and high-resolution UV measurements) and virial coefficients in a direct fit to obtain a flexible analytic potential function that incorporates the theoretically predicted damped inverse-power long-range behaviour. The present work reports the application of this approach to Xe_2, with a direct fit to high-resolution rotationally resolved UV emission data for v''=0 and 1, band head data for v''=0-9, and virial coefficient data for T=165-950 K being used to obtain an accurate new potential energy function for the ground state of this van der Waals molecule. Analogous results for other rare-gas pairs will also be presented, as time permits. L. Piticco, F. Merkt, A.A. Cholewinski, F.R. McCourt and R.J. Le Roy, J. Mol. Spectrosc. 264, 83 (2010). A. Wüest, K.G. Bruin, and F. Merkt, Can. J. Chem. 82, 750 (2004). D.E. Freeman, K. Yoshino, and Y. Tanaka, J. Chem. Phys. 61, 4880 (1974). J.H. Dymond, K.N. Marsh, R.C. Wilhoit and K.C. Wong, in Landolt-Börnstein, New Series, Group IV, edited by M. Frenkel and K.N. Marsh, Vol. 21 (2003).
From sequence to enzyme mechanism using multi-label machine learning.
De Ferrari, Luna; Mitchell, John B O
2014-05-19
In this work we predict enzyme function at the level of chemical mechanism, providing a finer granularity of annotation than traditional Enzyme Commission (EC) classes. Hence we can predict not only whether a putative enzyme in a newly sequenced organism has the potential to perform a certain reaction, but how the reaction is performed, using which cofactors and with susceptibility to which drugs or inhibitors, details with important consequences for drug and enzyme design. Work that predicts enzyme catalytic activity based on 3D protein structure features limits the prediction of mechanism to proteins already having either a solved structure or a close relative suitable for homology modelling. In this study, we evaluate whether sequence identity, InterPro or Catalytic Site Atlas sequence signatures provide enough information for bulk prediction of enzyme mechanism. By splitting MACiE (Mechanism, Annotation and Classification in Enzymes database) mechanism labels to a finer granularity, which includes the role of the protein chain in the overall enzyme complex, the method can predict at 96% accuracy (and 96% micro-averaged precision, 99.9% macro-averaged recall) the MACiE mechanism definitions of 248 proteins available in the MACiE, EzCatDb (Database of Enzyme Catalytic Mechanisms) and SFLD (Structure Function Linkage Database) databases using an off-the-shelf K-Nearest Neighbours multi-label algorithm. We find that InterPro signatures are critical for accurate prediction of enzyme mechanism. We also find that incorporating Catalytic Site Atlas attributes does not seem to provide additional accuracy. The software code (ml2db), data and results are available online at http://sourceforge.net/projects/ml2db/ and as supplementary files.
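A multi-label k-NN of the kind used here can be sketched in a few lines; the per-label majority-vote rule below is a minimal illustration, not the specific off-the-shelf algorithm the study used:

```python
import numpy as np

def multilabel_knn(X_train, Y_train, x, k=3):
    """Minimal multi-label k-NN: a query inherits every mechanism label
    held by a majority of its k nearest training examples (illustrative
    sketch of multi-label nearest-neighbour classification)."""
    d = np.sum((X_train - x) ** 2, axis=1)
    nn = np.argsort(d)[:k]
    votes = Y_train[nn].mean(axis=0)      # fraction of neighbours per label
    return (votes > 0.5).astype(int)      # majority vote per label
```

Here each row of Y_train is a binary vector over the fine-grained mechanism labels, and X_train would hold sequence-signature features such as InterPro matches.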
Piernas Sánchez, C M; Morales Falo, E M; Zamora Navarro, S; Garaulet Aza, M
2010-01-01
The excess of visceral abdominal adipose tissue is one of the major concerns in obesity and its clinical treatment. We applied the two-dimensional predictive equation proposed by Garaulet et al. to determine abdominal fat distribution and compared the results with the body composition obtained by multi-frequency bioelectrical impedance analysis (M-BIA). We studied 230 women, who underwent anthropometry and M-BIA. The predictive equation was applied. Multivariate linear and partial correlation analyses were performed, controlling for BMI and % body fat, using SPSS 15.0 with statistical significance set at P < 0.05. Overall, the women were considered as having a subcutaneous distribution of abdominal fat. Truncal fat, regional fat and muscular mass were negatively associated with VA/SA(predicted), while the visceral index obtained by M-BIA was positively correlated with VA/SA(predicted). The predictive equation may be useful in clinical practice to obtain an accurate, low-cost and safe classification of abdominal obesity.
Aircraft Operations Classification System
NASA Technical Reports Server (NTRS)
Harlow, Charles; Zhu, Weihong
2001-01-01
Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
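Features of this general kind (time-domain amplitude plus frequency content) can be sketched as follows; the specific feature pair is an illustrative assumption, not the project's exact feature set:

```python
import numpy as np

def acoustic_features(signal, rate):
    """Toy time- and frequency-domain features of the kind fed to a
    feed-forward classifier: RMS amplitude plus the dominant frequency.
    Illustrative only; the project's exact features are not specified."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return rms, float(freqs[spectrum[1:].argmax() + 1])  # skip the DC bin
```

A vector of such features per acoustic record would then be the input to the multi-layer feed-forward neural network.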
Noninvasive diagnosis of intraamniotic infection: proteomic biomarkers in vaginal fluid.
Hitti, Jane; Lapidus, Jodi A; Lu, Xinfang; Reddy, Ashok P; Jacob, Thomas; Dasari, Surendra; Eschenbach, David A; Gravett, Michael G; Nagalla, Srinivasa R
2010-07-01
We analyzed the vaginal fluid proteome to identify biomarkers of intraamniotic infection among women in preterm labor. Proteome analysis was performed on vaginal fluid specimens from women with preterm labor, using multidimensional liquid chromatography, tandem mass spectrometry, and label-free quantification. Enzyme immunoassays were used to quantify candidate proteins. Classification accuracy for intraamniotic infection (positive amniotic fluid bacterial culture and/or interleukin-6 >2 ng/mL) was evaluated using receiver operating characteristic curves obtained by logistic regression. Of 170 subjects, 30 (18%) had intraamniotic infection. Vaginal fluid proteome analysis revealed 338 unique proteins. Label-free quantification identified 15 proteins differentially expressed in intraamniotic infection, including acute-phase reactants, immune modulators, high-abundance amniotic fluid proteins and extracellular matrix-signaling factors; these findings were confirmed by enzyme immunoassay. A multi-analyte algorithm showed accurate classification of intraamniotic infection. Vaginal fluid proteome analyses identified proteins capable of discriminating between patients with and without intraamniotic infection. Copyright (c) 2010 Mosby, Inc. All rights reserved.
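The area under an ROC curve of the kind used here can be computed directly from classifier scores via the standard rank-sum (Mann-Whitney) identity; the sketch below uses hypothetical score values:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a randomly chosen positive case scores above a randomly chosen
    negative case, with ties counted as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

In this setting, `scores` would be the logistic-regression outputs of the multi-analyte algorithm and `labels` the infection status.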
The K2-HERMES Survey. I. Planet-candidate Properties from K2 Campaigns 1–3
NASA Astrophysics Data System (ADS)
Wittenmyer, Robert A.; Sharma, Sanjib; Stello, Dennis; Buder, Sven; Kos, Janez; Asplund, Martin; Duong, Ly; Lin, Jane; Lind, Karin; Ness, Melissa; Zwitter, Tomaz; Horner, Jonathan; Clark, Jake; Kane, Stephen R.; Huber, Daniel; Bland-Hawthorn, Joss; Casey, Andrew R.; De Silva, Gayandhi M.; D’Orazi, Valentina; Freeman, Ken; Martell, Sarah; Simpson, Jeffrey D.; Zucker, Daniel B.; Anguiano, Borja; Casagrande, Luca; Esdaile, James; Hon, Marc; Ireland, Michael; Kafle, Prajwal R.; Khanna, Shourya; Marshall, J. P.; Saddon, Mohd Hafiz Mohd; Traven, Gregor; Wright, Duncan
2018-02-01
Accurate and precise radius estimates of transiting exoplanets are critical for understanding their compositions and formation mechanisms. To know the planet, we must know the host star in as much detail as possible. We present first results from the K2-HERMES project, which uses the HERMES multi-object spectrograph on the Anglo-Australian Telescope to obtain R ∼ 28000 spectra of up to 360 stars in one exposure. This ongoing project aims to derive self-consistent spectroscopic parameters for about half of K2 target stars. We present complete stellar parameters and isochrone-derived masses and radii for 46 stars hosting 57 K2 candidate planets in Campaigns 1–3. Our revised host-star radii cast severe doubt on three candidate planets: EPIC 201407812.01, EPIC 203070421.01, and EPIC 202843107.01, all of which now have inferred radii well in excess of the largest known inflated Jovian planets.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computation complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
Falls classification using tri-axial accelerometers during the five-times-sit-to-stand test.
Doheny, Emer P; Walsh, Cathal; Foran, Timothy; Greene, Barry R; Fan, Chie Wei; Cunningham, Clodagh; Kenny, Rose Anne
2013-09-01
The five-times-sit-to-stand test (FTSS) is an established assessment of lower limb strength, balance dysfunction and falls risk. Clinically, the time taken to complete the task is recorded with longer times indicating increased falls risk. Quantifying the movement using tri-axial accelerometers may provide a more objective and potentially more accurate falls risk estimate. 39 older adults, 19 with a history of falls, performed four repetitions of the FTSS in their homes. A tri-axial accelerometer was attached to the lateral thigh and used to identify each sit-stand-sit phase and sit-stand and stand-sit transitions. A second tri-axial accelerometer, attached to the sternum, captured torso acceleration. The mean and variation of the root-mean-squared amplitude, jerk and spectral edge frequency of the acceleration during each section of the assessment were examined. The test-retest reliability of each feature was examined using intra-class correlation analysis, ICC(2,k). A model was developed to classify participants according to falls status. Only features with ICC>0.7 were considered during feature selection. Sequential forward feature selection within leave-one-out cross-validation resulted in a model including four reliable accelerometer-derived features, providing 74.4% classification accuracy, 80.0% specificity and 68.7% sensitivity. An alternative model using FTSS time alone resulted in significantly reduced classification performance. Results suggest that the described methodology could provide a robust and accurate falls risk assessment. Copyright © 2013 Elsevier B.V. All rights reserved.
An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.
Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan
2015-01-01
A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify human brain magnetic resonance images (MRI) as normal or abnormal, to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and applies stratified k-fold cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% at the feature extraction, feature reduction, and classification stages, respectively.
These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the individual subjects, therefore, it can be used as a significant tool in clinical practice.
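The first two stages of such a pipeline can be sketched as follows; the single-level 2-D Haar approximation sub-band and the SVD-based PCA below are simplified stand-ins for the fast DWT and PCA stages (the LS-SVM classifier is omitted):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform, returning only the
    approximation (LL) sub-band: a simple stand-in for the fast DWT
    feature-extraction stage."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average row pairs
    return (a[:, 0::2] + a[:, 1::2]) / 2.0    # then average column pairs

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components,
    computed via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Each MRI slice would pass through the wavelet stage, be flattened into a feature vector, and then be reduced by PCA before classification.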
NASA Astrophysics Data System (ADS)
Huber, Daniel; Bryson, Stephen T.; Haas, Michael R.; Barclay, Thomas; Barentsen, Geert; Howell, Steve B.; Sharma, Sanjib; Stello, Dennis; Thompson, Susan E.
2016-05-01
The K2 Mission uses the Kepler spacecraft to obtain high-precision photometry over ≈80 day campaigns in the ecliptic plane. The Ecliptic Plane Input Catalog (EPIC) provides coordinates, photometry, and kinematics based on a federation of all-sky catalogs to support target selection and target management for the K2 mission. We describe the construction of the EPIC, as well as modifications and shortcomings of the catalog. Kepler magnitudes (Kp) are shown to be accurate to ≈0.1 mag for the Kepler field, and the EPIC is typically complete to Kp ≈ 17 (Kp ≈ 19 for campaigns covered by Sloan Digital Sky Survey). We furthermore classify 138,600 targets in Campaigns 1-8 (≈88% of the full target sample) using colors, proper motions, spectroscopy, parallaxes, and galactic population synthesis models, with typical uncertainties for G-type stars of ≈3% in {T}{{eff}}, ≈0.3 dex in {log} g, ≈40% in radius, ≈10% in mass, and ≈40% in distance. Our results show that stars targeted by K2 are dominated by K-M dwarfs (≈41% of all selected targets), F-G dwarfs (≈36%), and K giants (≈21%), consistent with key K2 science programs to search for transiting exoplanets and galactic archeology studies using oscillating red giants. However, we find significant variation of the fraction of cool dwarfs with galactic latitude, indicating a target selection bias due to interstellar reddening and increased contamination by giant stars near the galactic plane. We discuss possible systematic errors in the derived stellar properties, and differences with published classifications for K2 exoplanet host stars. The EPIC is hosted at the Mikulski Archive for Space Telescopes (MAST): http://archive.stsci.edu/k2/epic/search.php.
NASA Astrophysics Data System (ADS)
Fezzani, Ridha; Berger, Laurent
2018-06-01
An automated signal-based method was developed to analyse seafloor backscatter data logged by a calibrated multibeam echosounder. The processing first clusters each survey sub-area into a small number of homogeneous sediment types, based on the average backscatter level at one or several incidence angles. Second, it extracts discriminant descriptors from the clusters' local average angular response, obtained by fitting the field data to the Generic Seafloor Acoustic Backscatter parametric model. Third, the descriptors are used for seafloor-type classification. The method was tested on multi-year data recorded by a calibrated 90-kHz Simrad ME70 multibeam sonar operated in the Bay of Biscay, France, and the Celtic Sea, Ireland. It was applied for seafloor-type classification into 12 classes, on a dataset of 158 spots surveyed for demersal and benthic fauna study and monitoring. Qualitative analyses and clusters classified using the extracted parameters show good discriminatory potential, indicating the robustness of this approach.
Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N
2015-06-01
Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. 
A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
Coban, Huseyin Oguz; Koc, Ayhan; Eker, Mehmet
2010-01-01
Previous studies have successfully detected changes in gently sloping forested areas with low-diversity, homogeneous vegetation cover using medium-resolution satellite data such as Landsat. The aim of the present study is to examine the capacity of multi-temporal Landsat data to identify changes in forested areas with mixed vegetation, generally located on steep slopes or non-uniform topography. Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) data for the years 1987 and 2000 were used to detect changes within a 19,500 ha forested area in the Western Black Sea region of Turkey. The data are consistent with the forest cover type maps previously created for forest management plans of the research area. The methods used to detect changes were post-classification comparison, image differencing, image ratioing and NDVI (Normalized Difference Vegetation Index) differencing. Following the supervised classification process, error matrices were used to evaluate the accuracy of the classified images. The overall accuracy was calculated as 87.59% for the 1987 image and 91.81% for the 2000 image. Overall kappa statistics were calculated as 0.8543 and 0.9038 for 1987 and 2000, respectively. The changes identified via the post-classification comparison method were compared with the other change detection methods. Maximum agreement was found to be 74.95% at the 4/3 band ratio. The NDVI difference and band 3 difference methods achieved similar agreement with slight variations. The results suggest that Landsat satellite data accurately convey the temporal changes which occur in steeply sloping forested areas with a mixed structure, providing a limited amount of detail but with a high level of accuracy. Moreover, the post-classification comparison method can meet the needs of forestry activities better than the other methods, as it provides information about the direction of these changes.
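NDVI differencing, one of the change-detection methods compared above, can be sketched as follows; the change threshold is an illustrative value that in practice would be tuned per scene:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + 1e-9)

def ndvi_change_mask(red_t1, nir_t1, red_t2, nir_t2, threshold=0.2):
    """NDVI differencing: flag pixels whose vegetation index changed by
    more than a threshold between two acquisition dates (the threshold is
    an illustrative value, tuned per scene in practice)."""
    diff = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
    return np.abs(diff) > threshold
```

For Landsat TM/ETM+, the red and near-infrared inputs correspond to bands 3 and 4, which is why the 4/3 band ratio figures in the comparison above.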
Automated Snow Extent Mapping Based on Orthophoto Images from Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Spallek, Waldemar; Witek-Kasprzak, Matylda
2018-04-01
The paper presents the application of the k-means clustering in the process of automated snow extent mapping using orthophoto images generated using the Structure-from-Motion (SfM) algorithm from oblique aerial photographs taken by unmanned aerial vehicle (UAV). A simple classification approach has been implemented to discriminate between snow-free and snow-covered terrain. The procedure uses the k-means clustering and classifies orthophoto images based on the three-dimensional space of red-green-blue (RGB) or near-infrared-red-green (NIRRG) or near-infrared-green-blue (NIRGB) bands. To test the method, several field experiments have been carried out, both in situations when snow cover was continuous and when it was patchy. The experiments have been conducted using three fixed-wing UAVs (swinglet CAM by senseFly, eBee by senseFly, and Birdie by FlyTech UAV) on 10/04/2015, 23/03/2016, and 16/03/2017 within three test sites in the Izerskie Mountains in southwestern Poland. The resulting snow extent maps, produced automatically using the classification method, have been validated against real snow extents delineated through a visual analysis and interpretation offered by human analysts. For the simplest classification setup, which assumes two classes in the k-means clustering, the extent of snow patches was estimated accurately, with areal underestimation of 4.6% (RGB) and overestimation of 5.5% (NIRGB). For continuous snow cover with sparse discontinuities at places where trees or bushes protruded from snow, the agreement between automatically produced snow extent maps and observations was better, i.e. 1.5% (underestimation with RGB) and 0.7-0.9% (overestimation, either with RGB or with NIRRG). Shadows on snow were found to be mainly responsible for the misclassification.
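The two-class k-means setup on RGB pixel values can be sketched as follows; the deterministic brightness-based initialization and the "brightest cluster = snow" rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def kmeans_rgb(pixels, k=2, iters=20):
    """Plain k-means on an (N, 3) array of RGB pixel values; with k = 2
    the brighter cluster is taken as snow (illustrative decision rule)."""
    # Deterministic init: k pixels evenly spaced along the brightness range.
    order = np.argsort(pixels.sum(axis=1))
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]]
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute centres.
        labels = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])
    snow = int(centers.sum(axis=1).argmax())  # brightest centroid
    return labels == snow
```

The same loop applies unchanged to NIRRG or NIRGB band triplets; only the input array changes.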
McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne
2018-04-01
Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method of maintaining a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN can achieve 91.34% and 99.28% when applied to the Food-11 and RawFooT-DB food image datasets, respectively, and an SVM with RBF kernel can achieve 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
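The feature-extraction transfer-learning setup can be sketched with a simple classifier head trained on (assumed precomputed) CNN feature vectors; the paper trains SVM, ANN, Random Forest and Naive Bayes heads on ResNet-152 features, whereas the logistic-regression head below is a minimal stand-in:

```python
import numpy as np

def train_logreg(F, y, lr=0.1, epochs=200):
    """Train a logistic-regression head on top of precomputed CNN feature
    vectors F (rows = images) with binary labels y, by gradient descent on
    the log-loss. A minimal stand-in for the classifier heads in the paper."""
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
        g = p - y                                 # log-loss gradient signal
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

For the food/non-food task (Food-5K), y would be the binary food label and F the 2048-dimensional ResNet-152 feature vectors.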
Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes
NASA Astrophysics Data System (ADS)
Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung
2015-03-01
The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used segmentation by image binarization on the basis that the eyeball is darker than the skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses the I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all the inference values on the output score of the fuzzy system, we use a revised weighted-average method in which all the rectangular regions defined by the inference values are considered when calculating the output score. Fourth, the proposed fuzzy-based method successfully classifies eye openness and closure in low-resolution eye images captured in an environment where people watch TV at a distance. Because it uses a fuzzy logic system, our method requires no additional training procedure regardless of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
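The two color channels fed into the fuzzy system have standard definitions: the HSI intensity I is the mean of R, G, B, and the CMYK key K is one minus the normalized maximum channel (so K is large for dark pixels such as the pupil). A minimal sketch computes both directly; the patch values are hypothetical and the fuzzy inference itself is omitted:

```python
import numpy as np

def intensity_channel(rgb):
    """I channel of HSI: mean of R, G, B, normalized to [0, 1]."""
    return rgb.mean(axis=-1) / 255.0

def key_channel(rgb):
    """K channel of CMYK: 1 - max(R, G, B)/255, large for dark pixels."""
    return 1.0 - rgb.max(axis=-1) / 255.0

# A 2x2 toy patch: dark eyeball pixel, skin pixel, shadow, highlight.
patch = np.array([[[ 30,  25,  20], [200, 160, 140]],
                  [[ 90,  80,  70], [250, 245, 240]]], dtype=float)

I = intensity_channel(patch)
K = key_channel(patch)
print(np.round(I, 2))
print(np.round(K, 2))
```

The dark eyeball pixel yields a low I and a high K, which is why the combination is more robust to eyelashes and shadows than a single-channel threshold.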
Prediction of carbonate rock type from NMR responses using data mining techniques
NASA Astrophysics Data System (ADS)
Gonçalves, Eduardo Corrêa; da Silva, Pablo Nascimento; Silveira, Carla Semiramis; Carneiro, Giovanna; Domingues, Ana Beatriz; Moss, Adam; Pritchard, Tim; Plastino, Alexandre; Azeredo, Rodrigo Bagueira de Vasconcellos
2017-05-01
Recent studies have indicated that the accurate identification of carbonate rock types in a reservoir can be employed as a preliminary step to enhance the effectiveness of petrophysical property modeling. Furthermore, rock typing activity has been shown to be of key importance in several steps of formation evaluation, such as the study of sedimentary series, reservoir zonation and well-to-well correlation. In this paper, a methodology based exclusively on the analysis of 1H-NMR (Nuclear Magnetic Resonance) relaxation responses - using data mining algorithms - is evaluated to perform the automatic classification of carbonate samples according to their rock type. We analyze the effectiveness of six different classification algorithms (k-NN, Naïve Bayes, C4.5, Random Forest, SMO and Multilayer Perceptron) and two data preprocessing strategies (discretization and feature selection). The dataset used in this evaluation is formed by 78 1H-NMR T2 distributions of fully brine-saturated rock samples from six different rock type classes. The experiments reveal that the combination of preprocessing strategies with classification algorithms is able to achieve a prediction accuracy of 97.4%.
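The simplest of the listed classifiers, k-NN, can be sketched over synthetic T2 distributions. The peak positions, bin count, and noise level below are illustrative assumptions, not the study's data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# Synthetic T2 distributions: each rock type has a characteristic peak
# position on a 64-bin log-T2 axis (a crude stand-in for real NMR data).
rng = np.random.default_rng(0)
bins = np.arange(64)

def make_sample(peak):
    curve = np.exp(-0.5 * ((bins - peak) / 4.0) ** 2)
    return curve + rng.normal(0, 0.02, 64)

peaks = {0: 15, 1: 32, 2: 50}            # three rock-type classes
X = np.array([make_sample(peaks[c]) for c in range(3) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

query = make_sample(33)                  # unseen sample near class 1's peak
print(knn_predict(X, y, query))
```

With only 78 samples and 6 classes, as in the study, such instance-based methods are attractive because they need no explicit model fitting.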
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods via Orthogonal Matching Pursuit algorithms, utilizing a generalized K-Means clustering algorithm to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show probability of detection and false alarm for buildings vs. vegetation classification. Histograms are shown with sample size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits such as Terrain Inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
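The Orthogonal Matching Pursuit step at the heart of the classification can be sketched in a few lines of NumPy. This is a generic OMP on a random dictionary, not the authors' LiDAR-specific implementation:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms and
    re-fit the coefficients by least squares at every step."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        j = np.abs(D.T @ residual).argmax()
        support.append(j)
        # least-squares fit on the selected support
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

# Random dictionary with unit-norm atoms, and a 2-sparse signal.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 80))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(80)
true[[5, 17]] = [2.0, -1.5]
x = D @ true

coef = omp(D, x, n_nonzero=2)
print(np.nonzero(coef)[0])
```

In a sparse-coding classifier, one such dictionary is learned per class (e.g. building, vegetation) and a point's feature vector is assigned to the class whose dictionary reconstructs it with the smallest residual.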
Sentiment classification technology based on Markov logic networks
NASA Astrophysics Data System (ADS)
He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe
2016-07-01
With diverse online media emerging, there is a growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which feature certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.
Research on multi-switch synchronization based on single trigger generator
NASA Astrophysics Data System (ADS)
Geng, Jiuyuan; Cheng, Xinbing; Yang, Jianhua; Yang, Xiao; Chen, Rong
2018-05-01
Multi-switch synchronous operation is an effective approach to providing high voltage and high current for a high-power device. In this paper, we present a synchronization system with a corona stabilized triggered switch (CSTS) as the main switch and an all-solid-state modularized quasi-square pulse forming system. In addition, this paper explains the low jitter and accurate triggering of the CSTS based on streamer theory. The different switches of the modules are triggered by an electrical pulse created by a single trigger generator, so that a quasi-square pulse can be formed on the load. The experimental results show that the system is able to switch voltages in excess of 40 kV with nanosecond system jitter for three-module synchronous operation.
ASERA: A Spectrum Eye Recognition Assistant
NASA Astrophysics Data System (ADS)
Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng
2018-04-01
ASERA, A Spectrum Eye Recognition Assistant, aids in quasar spectral recognition and redshift measurement and can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). This interactive software allows users to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. ASERA is an efficient and user-friendly semi-automated toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-object Fiber Spectroscopic Telescope) and is available as a standalone Java application and as a Java applet. The software offers several functions, including wavelength and flux scale settings, zoom in and out, redshift estimation, and spectral line identification.
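Redshift estimation from a matched spectral line reduces to a one-line relation, z = λ_obs/λ_rest − 1, which is the rule behind this kind of template shifting. A sketch with an illustrative Lyman-alpha measurement (the observed wavelength is made up for the example, not taken from LAMOST data):

```python
# Redshift from a matched emission line: z = lambda_obs / lambda_rest - 1.
LYA_REST = 1215.67   # Lyman-alpha rest wavelength, Angstrom

def redshift(lambda_obs, lambda_rest):
    return lambda_obs / lambda_rest - 1.0

# A line identified as Lyman-alpha observed at 4862.68 Angstrom:
z = redshift(4862.68, LYA_REST)
print(f"z = {z:.3f}")
```

In practice a tool like this shifts a whole template by (1 + z) and lets the user judge the match across many lines at once, which is far more robust than a single-line estimate.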
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, Bau-Ching; Wang, Wei-Hao; Hsieh, Chih-Chiang
2012-12-15
We present ultra-deep J and K_S imaging observations covering a 30' × 30' area of the Extended Chandra Deep Field-South (ECDFS) carried out by our Taiwan ECDFS Near-Infrared Survey (TENIS). The median 5σ limiting magnitudes for all detected objects in the ECDFS reach 24.5 and 23.9 mag (AB) for J and K_S, respectively. In the inner 400 arcmin² region where the sensitivity is more uniform, objects as faint as 25.6 and 25.0 mag are detected at 5σ. Thus, these are by far the deepest J and K_S data sets available for the ECDFS. To combine TENIS with the Spitzer IRAC data for obtaining better spectral energy distributions of high-redshift objects, we developed a novel deconvolution technique (IRACLEAN) to accurately estimate the IRAC fluxes. IRACLEAN can minimize the effect of blending in the IRAC images caused by the large point-spread functions and reduce the confusion noise. We applied IRACLEAN to the images from the Spitzer IRAC/MUSYC Public Legacy in the ECDFS survey (SIMPLE) and generated a J+K_S-selected multi-wavelength catalog including the photometry of both the TENIS near-infrared and the SIMPLE IRAC data. We publicly release the data products derived from this work, including the J and K_S images and the J+K_S-selected multi-wavelength catalog.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko
2003-08-01
Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents, using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (of degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel, meaning that the number of false negative errors was reduced by 173% while making no false positive errors, when compared to the AromaScan baseline performance; (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair.
For the most difficult classification task, the Parathion versus Paraoxon pair, the following results were achieved (using the three SVM kernels): (1) ROC Az indices from approximately 93% to greater than 99%, (2) partial Az values from ~79% to 93%, (3) specificities from 76% to ~84% at 100% and 98% sensitivity, and (4) PPVs from 73% to ~84% at 100% and 98% sensitivities. These are excellent results, considering only one atom differentiates these nerve agents.
Carbon storage in Chinese grassland ecosystems: Influence of different integrative methods.
Ma, Anna; He, Nianpeng; Yu, Guirui; Wen, Ding; Peng, Shunlei
2016-02-17
Accurate estimation of grassland carbon (C) storage is affected by many factors at the large scale. Here, we used six methods (three spatial interpolation methods and three grassland classification methods) to estimate the C storage of Chinese grasslands based on published data from 2004 to 2014, and assessed the uncertainty resulting from the different integrative methods. The uncertainty (coefficient of variation, CV, %) of grassland C storage was approximately 4.8% for the six methods tested, and was mainly determined by soil C storage. C density and C storage to a soil depth of 100 cm were estimated to be 8.46 ± 0.41 kg C m⁻² and 30.98 ± 1.25 Pg C, respectively. Ecosystem C storage was composed of 0.23 ± 0.01 Pg C (0.7%) in above-ground biomass, 1.38 ± 0.14 Pg C (4.5%) in below-ground biomass, and 29.37 ± 1.20 Pg C (94.8%) in the 0-100 cm soil layer. Carbon storage calculated by the grassland classification methods (18 grassland types) was closer to the mean value than that calculated by the spatial interpolation methods. Differences in integrative methods may partially explain the high uncertainty in C storage estimates in different studies. This first evaluation demonstrates the importance of multi-methodological approaches to accurately estimate C storage in large-scale terrestrial ecosystems.
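The coefficient of variation used to quantify between-method uncertainty is straightforward to compute. The six estimates below are hypothetical values chosen only to reproduce the reported mean of ~30.98 Pg C, not the study's actual per-method results:

```python
import numpy as np

# Hypothetical C-storage estimates (Pg C) from six integrative methods.
estimates = np.array([29.3, 30.2, 30.9, 31.2, 31.9, 32.4])

mean = estimates.mean()
cv = estimates.std(ddof=1) / mean * 100   # coefficient of variation, %
print(f"mean = {mean:.2f} Pg C, CV = {cv:.1f}%")
```

Reporting CV rather than raw standard deviation makes the between-method spread comparable across pools of very different size (soil vs biomass).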
Efficient Cancer Detection Using Multiple Neural Networks.
Shell, John; Gregory, William D
2017-01-01
The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, were utilized as a neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast non-invasive accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
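The multi-tiered consensus idea, pick a confident subset of the member networks and majority-vote their outputs, can be sketched as follows. The confidence criterion and the scores are assumptions for illustration, since the abstract does not spell out the selection rule:

```python
import numpy as np

def consensus(predictions, n_select=4):
    """Toy consensus: rank member networks by confidence (|score - 0.5|),
    keep the n_select most confident, and majority-vote their labels.
    (The paper's actual selection criterion is not specified here.)"""
    predictions = np.asarray(predictions, dtype=float)
    confidence = np.abs(predictions - 0.5)
    chosen = predictions[np.argsort(confidence)[-n_select:]]
    return int((chosen > 0.5).mean() >= 0.5)   # 1 = malignant, 0 = benign

# Scores from six hypothetical BNNs for one tissue sample.
scores = [0.91, 0.88, 0.55, 0.47, 0.79, 0.95]
print(consensus(scores))
```

Dropping the least confident members before voting is what lets an ensemble like this ignore networks that are effectively guessing on a given sample.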
Deep convolutional neural networks for classifying GPR B-scans
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2015-05-01
Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antenna, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
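The basic operation a CNN applies to a B-scan is a learned 2-D convolution followed by a nonlinearity. A minimal NumPy sketch, with a hand-crafted horizontal-edge filter standing in for a learned one and a synthetic B-scan standing in for real GPR data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

# Synthetic "B-scan": a bright horizontal reflector in background noise.
rng = np.random.default_rng(0)
bscan = rng.normal(0, 0.1, size=(16, 16))
bscan[8, :] += 2.0

# Horizontal-edge kernel; in a trained CNN this filter would be learned.
kernel = np.array([[-1., -1., -1.],
                   [ 2.,  2.,  2.],
                   [-1., -1., -1.]])
fmap = np.maximum(conv2d(bscan, kernel), 0.0)   # ReLU activation
row = int(np.unravel_index(fmap.argmax(), fmap.shape)[0])
print(f"filter fires strongest at output row {row}")
```

A deep network stacks many such filtered maps, so later layers can respond to composite shapes like the hyperbolic signatures buried targets produce in B-scans.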
Temporal Clustering in the Multi-Target Tracking Environment
1988-08-01
[Garbled scan of the report's DTIC documentation page; the legible fragments are the UNCLASSIFIED security marking, the report number prefix AFIT/CI/NR 88-, and the thesis title "Temporal Clustering in the Multi-Target Tracking Environment".]
Satellite/Submarine Arctic Sea Ice Remote Sensing in 2004 and 2007
NASA Astrophysics Data System (ADS)
Hughes, N. E.; Wadhams, P.; Rodrigues, J.
2007-12-01
After an interlude of 8 years the U.K. Royal Navy returned to the Arctic Ocean with an under-ice mission by the submarine HMS Tireless in April 2004. A full environmental monitoring programme in which U.K. civilian scientists were allowed to participate was integrated into the mission. This was subsequently followed by a second expedition, in March 2007, which allowed further measurements to be acquired. These have so far been the only opportunities for civilian scientists to utilise navy submarines in the Arctic since the demise of the U.S. SCICEX programme in 2000. This paper presents some of the data collected on these new missions and uses it for validation of sea ice information derived from coincident acquisitions by modern satellite sensors such as the ESA Envisat ASAR and NASA MODIS. In both the 2004 and 2007 expeditions HMS Tireless took a track north of Greenland along the latitude 85° N. This was similar to the route used for an earlier submarine-aircraft combined survey in April 1987 with which our results shall be compared. In all three missions the submarine was equipped with a standard upward-looking echosounder and sidescan for ice observations and a full range of satellite-borne, or airborne in the case of the earlier mission, microwave and optical sensors were available for validation. In this study we concentrate on the submarine track north of Greenland from the Marginal Ice Zone (MIZ) in Fram Strait through to the Lincoln Sea around 65° W. This transect encompasses a wide range of differing sea ice conditions, from the highly mobile mixture of first year and multi year ice being transported on the trans-polar drift through to the highly deformed ice north of Greenland and Ellesmere Island.
The combination of submarine measurements of ice thickness and satellite/aircraft top-side measurements gives an accurate indication of how changes in the ice regime are taking place and allows the potential development of multi-sensor data fusion algorithms for improved sea ice classification and estimation of thickness.
Rankin, James A; Tomeny, Theodore S; Barry, Tammy D
2017-09-01
The behavioral and emotional functioning of typically-developing (TD) siblings of youth with autism spectrum disorder (ASD) has been frequently assessed in the literature; however, these assessments typically include only one informant, rarely considering differences between parent and self-reports of sibling adjustment. This study examined parent-youth reported informant discrepancies in behavioral and emotional functioning, including whether parent and youth reports yielded the same conclusions regarding TD sibling risk status. Among 113 parents and TD siblings of youth with ASD, TD siblings self-reported more overall, conduct, hyperactivity, and peer problems (compared to parent reports). Although few siblings were considered at-risk, those who were identified were not usually identified as at-risk on both informants' reports. Moreover, ASD symptoms, broader autism phenotype symptoms, parent mental health concerns, and social support from parents were all related to differences in at-risk classification between parent- and sibling self-report. This paper highlights the necessity of multi-informant reporting when considering TD sibling psychological functioning. This study helps to address gaps in the literature on assessment of emotional and behavioral functioning of TD siblings of youth with ASD. The results highlight the importance of utilizing both parent- and self-report when identifying TD siblings at-risk for maladjustment. Although few siblings were considered at-risk, those who were identified were not usually identified as such on both informants' reports, and a variety of sibling- and parent-factors were associated with differences in at-risk classification. Thus, inclusion and examination of both parent- and self-report of TD sibling psychological functioning is vital for accurately identifying numbers of TD siblings at-risk of maladjustment. Copyright © 2017 Elsevier Ltd. All rights reserved.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
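A reconstruction-style fusion in its simplest similarity-weighted form can be sketched as follows. This is a toy variant in which the weights come from a Gaussian kernel on patch distance, not the paper's matrix-completion formulation, and the patches are synthetic:

```python
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Patch-based label fusion: weight each atlas patch by its similarity
    to the target patch and average the corresponding labels."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    w /= w.sum()
    score = (w * atlas_labels).sum()     # soft label in [0, 1]
    return int(score > 0.5), score

# Toy 9-voxel patches: label-1 atlases resemble the target, label-0 don't.
rng = np.random.default_rng(0)
target = np.ones(9)
atlas_patches = np.vstack([np.ones((5, 9)) + rng.normal(0, 0.1, (5, 9)),
                           np.zeros((5, 9)) + rng.normal(0, 0.1, (5, 9))])
atlas_labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

label, score = fuse_label(target, atlas_patches, atlas_labels)
print(label, round(score, 3))
```

The soft score doubles as a labeling confidence, which is exactly the quantity the paper's sequential framework uses to decide which points to label first.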
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach of image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, with a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
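The coefficient-of-variation-over-scales idea can be sketched directly: build a small Gaussian scale-space stack and compare per-pixel variability inside and outside a textured region. All sizes, sigmas, and intensities below are illustrative, and the blur is a plain pure-NumPy separable Gaussian rather than the authors' implementation:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflective padding (pure NumPy)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, "valid"), 0, tmp)

# Scale-space stack and per-pixel coefficient of variation (CV): textured
# cell regions vary across scales, flat background barely changes.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.01, size=(32, 32))
img[8:24, 8:24] += rng.normal(0.6, 0.2, size=(16, 16))   # textured "cell"

stack = np.stack([gaussian_blur(img, s) for s in (1.0, 2.0, 4.0)])
cv = stack.std(axis=0) / (np.abs(stack.mean(axis=0)) + 1e-8)
print(cv[8:24, 8:24].mean() > cv[:8, :8].mean())
```

Thresholding this CV map is what separates cytoplasm from background before the outline classifier runs.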
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
Sertel, O.; Kong, J.; Shimada, H.; Catalyurek, U.V.; Saltz, J.H.; Gurcan, M.N.
2009-01-01
We are developing a computer-aided prognosis system for neuroblastoma (NB), a cancer of the nervous system and one of the most malignant tumors affecting children. Histopathological examination is an important stage for further treatment planning in routine clinical diagnosis of NB. According to the International Neuroblastoma Pathology Classification (the Shimada system), NB patients are classified into favorable and unfavorable histology based on the tissue morphology. In this study, we propose an image analysis system that operates on digitized H&E stained whole-slide NB tissue samples and classifies each slide as either stroma-rich or stroma-poor based on the degree of Schwannian stromal development. Our statistical framework performs the classification based on texture features extracted using co-occurrence statistics and local binary patterns. Due to the high resolution of digitized whole-slide images, we propose a multi-resolution approach that mimics the evaluation of a pathologist such that the image analysis starts from the lowest resolution and switches to higher resolutions when necessary. We employ an offline feature selection step, which determines the most discriminative features at each resolution level during the training step. A modified k-nearest neighbor classifier is used to determine the confidence level of the classification to make the decision at a particular resolution level. The proposed approach was independently tested on 43 whole-slide samples and provided an overall classification accuracy of 88.4%. PMID:20161324
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
R-parametrization and its role in classification of linear multivariable feedback systems
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.
1988-01-01
A classification of all the compensators that stabilize a given general plant in a linear, time-invariant, multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-Parameterization, which is a dual of the familiar Q-Parameterization. The classification is related to the stability conditions of the compensators and the plant themselves, and the necessary and sufficient conditions are expressed in terms of the stability of Q and R.
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make the development of efficient algorithms to perform in-network processing of multimedia content imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
Multi-Group Reductions of LTE Air Plasma Radiative Transfer in Cylindrical Geometries
NASA Technical Reports Server (NTRS)
Scoggins, James; Magin, Thierry Edouard Bertran; Wray, Alan; Mansour, Nagi N.
2013-01-01
Air plasma radiation in Local Thermodynamic Equilibrium (LTE) within cylindrical geometries is studied with an application towards modeling the radiative transfer inside arc-constrictors, a central component of constricted-arc arc jets. A detailed database of spectral absorption coefficients for LTE air is formulated using the NEQAIR code developed at NASA Ames Research Center. The database stores calculated absorption coefficients for 1,051,755 wavelengths between 0.04 µm and 200 µm over a wide temperature (500 K to 15,000 K) and pressure (0.1 atm to 10.0 atm) range. The multi-group method for spectral reduction is studied by generating a range of reductions including pure binning and banding reductions from the detailed absorption coefficient database. The accuracy of each reduction is compared to line-by-line calculations for cylindrical temperature profiles resembling typical profiles found in arc-constrictors. It is found that a reduction of only 1000 groups is sufficient to accurately model the LTE air radiation over a large temperature and pressure range. In addition to the reduction comparison, the cylindrical-slab formulation is compared with the finite-volume method for the numerical integration of the radiative flux inside cylinders with varying length. It is determined that cylindrical-slabs can be used to accurately model most arc-constrictors due to their high length to radius ratios.
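The "pure binning" reduction described above can be illustrated with a minimal sketch. This is an illustrative stand-in, not the NEQAIR-based procedure of the study: the group coefficient here is a plain arithmetic mean of the detailed coefficients inside each wavelength bin, and the toy spectrum is hypothetical.

```python
import numpy as np

def bin_absorption(wavelengths, kappa, n_groups):
    """Reduce a detailed spectral absorption-coefficient table to n_groups
    by plain wavelength binning: each group's coefficient is the mean of
    the detailed coefficients falling inside that wavelength bin."""
    edges = np.linspace(wavelengths.min(), wavelengths.max(), n_groups + 1)
    # Assign every wavelength to a bin (the last edge is made inclusive).
    idx = np.clip(np.searchsorted(edges, wavelengths, side="right") - 1,
                  0, n_groups - 1)
    group_kappa = np.array([kappa[idx == g].mean() for g in range(n_groups)])
    return edges, group_kappa

# Toy spectrum: 1000 wavelengths over the database range, reduced to 10 groups.
wl = np.linspace(0.04, 200.0, 1000)
kap = 1.0 / wl                      # arbitrary smooth decreasing spectrum
edges, gk = bin_absorption(wl, kap, 10)
```

A real reduction would use physically motivated group averages (e.g. Planck- or Rosseland-weighted means) rather than a flat mean, but the bookkeeping is the same.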
Free classification of American English dialects by native and non-native listeners
Clopper, Cynthia G.; Bradlow, Ann R.
2009-01-01
Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance. PMID:20161400
Early season monitoring of corn and soybeans with TerraSAR-X and RADARSAT-2
NASA Astrophysics Data System (ADS)
McNairn, H.; Kross, A.; Lapen, D.; Caves, R.; Shang, J.
2014-05-01
Early and on-going crop production forecasts are important to facilitate food price stability for regions at risk, and for agriculture exporters, to set market value. Most regional and global efforts in forecasting rely on multiple sources of information from the field. With increased access to data from spaceborne Synthetic Aperture Radar (SAR), these sensors could contribute information on crop acreage. But these acreage estimates must be available early in the season to assist with production forecasts. This study acquired TerraSAR-X and RADARSAT-2 data over a region in eastern Canada dominated by economically important corn and soybean production. Using a supervised decision tree classifier, results determined that either sensor was capable of delivering highly accurate maps of corn and soybeans at the end of the growing season. Accuracies far exceeded 90%. Spatial and multi-temporal filtering approaches were compared and small improvements in accuracies were found by applying the multi-temporal filter to the RADARSAT-2 data. Of significant interest, this study determined that by using only three TerraSAR-X images corn could be accurately identified by the end of June, a mere six weeks after planting and at a vegetative growth stage (V6 - sixth leaf collar developed). However, soybeans required additional acquisitions given the variance in planting densities and planting dates in this region of Canada. In this case, accurate soybean classification required TerraSAR-X images until early August at the start of the reproductive stage (R5 - seed development is beginning). Also important, by applying a multi-temporal filter accurate mapping (close to 90%) of corn and soybeans from RADARSAT-2 could occur five weeks earlier (by August 19) than if a spatial filter was used. Thus application of this filtering approach could accelerate delivery of a crop inventory for this region of Canada. Corn and soybeans are important commodities both globally and within Canada. 
This study makes an important contribution as it demonstrates that TerraSAR-X can deliver acreage estimates of these two crops early enough to assist with in-season production forecasting.
NASA Astrophysics Data System (ADS)
Pastor, M. A.; Casado, M. J.
2012-10-01
This paper presents an evaluation of the multi-model simulations for the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) in terms of their ability to simulate the ERA40 circulation types over the Euro-Atlantic region in the winter season. Two classification schemes, k-means and SANDRA, have been considered to test the sensitivity of the evaluation results to the classification procedure. The assessment allows establishing different rankings according to spatial and temporal features of the circulation types. Regarding temporal characteristics, in general, all AR4 models tend to underestimate the frequency of occurrence. The best model simulating spatial characteristics is the UKMO-HadGEM1, whereas CCSM3, UKMO-HadGEM1 and CGCM3.1(T63) are the best at simulating the temporal features, for both classification schemes. This result agrees with the AR4 model ranking obtained when analysing the ability of the same AR4 models to simulate Euro-Atlantic variability modes. This study has proved the utility of applying such a synoptic climatology approach as a diagnostic tool for model assessment. The ability of the models to properly reproduce the position of ridges and troughs and the frequency of synoptic patterns will therefore improve our confidence in the response of models to future climate changes.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing
Sarrafzadeh, Omid; Dehnavi, Alireza Mehri
2015-01-01
Background: Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in evaluation of white blood cells only. Digital image processing techniques can help them in their analysis and diagnosis. Materials and Methods: The main objective of this paper is to detect leukocytes from a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of applying K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate the connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. Results: The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even though there is no significant contrast between the components in the image. Conclusions: In this paper, a method based on K-means clustering and region growing is proposed in order to detect leukocytes from a blood smear microscopic image and segment its components, the nucleus and the cytoplasm. As the region growing step of the algorithm relies on edge information, it will not be able to separate the connected nuclei accurately when edges are poor, and it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification, which leads to automated leukemia detection. PMID:26605213
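The two-stage K-means idea can be sketched on a toy 1-D intensity "image". This is a simplified stand-in for the paper's method: there is no region-growing step, initialization is by quantiles rather than random seeding, and the intensity values for nuclei, cytoplasm, and background are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic quantile initialization."""
    centers = np.quantile(X, np.linspace(0, 1, k), axis=0)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Toy grayscale "smear": dark nuclei (~0.1), mid cytoplasm (~0.5),
# bright background (~0.9), 100 pixels each.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.1, 0.02, 100),
                      rng.normal(0.5, 0.02, 100),
                      rng.normal(0.9, 0.02, 100)])
X = img[:, None]

# Stage 1: 3-way k-means; the darkest cluster is taken as nuclei.
labels, centers = kmeans(X, 3)
nucleus_mask = labels == centers[:, 0].argmin()

# Stage 2: re-cluster the remaining pixels to split cytoplasm from background.
rest = X[~nucleus_mask]
labels2, centers2 = kmeans(rest, 2)
n_cyto = int((labels2 == centers2[:, 0].argmin()).sum())
```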
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics is a pattern recognition approach used for automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust location of trait points is a complicated and difficult issue in face recognition. Cootes proposed the Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and the texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that the ASM is faster and gains more accurate trait point location than the AAM, but the AAM gains a better match to the texture.
Parametric Time-Frequency Analysis and Its Applications in Music Classification
NASA Astrophysics Data System (ADS)
Shen, Ying; Li, Xiaoli; Ma, Ngok-Wah; Krishnan, Sridhar
2010-12-01
Analysis of nonstationary signals, such as music signals, is a challenging task. The purpose of this study is to explore an efficient and powerful technique to analyze and classify music signals in a higher frequency range (44.1 kHz). The pursuit methods are good tools for this purpose, but they aim at representing the signals rather than classifying them, as in Y. Paragakin et al., 2009. Among the pursuit methods, matching pursuit (MP), an adaptive true nonstationary time-frequency signal analysis tool, is applied for music classification. First, MP decomposes the sample signals into time-frequency functions or atoms. Atom parameters are then analyzed and manipulated, and discriminant features are extracted from the atom parameters. Besides the parameters obtained using MP, an additional feature, central energy, is also derived. Linear discriminant analysis and the leave-one-out method are used to evaluate the classification accuracy rate for different feature sets. The study is one of the very few works that analyze atoms statistically and extract discriminant features directly from the parameters. From our experiments, it is evident that the MP algorithm with the Gabor dictionary decomposes nonstationary signals, such as music signals, into atoms in which the parameters contain strong discriminant information sufficient for accurate and efficient signal classifications.
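The greedy matching pursuit loop itself is compact. The sketch below uses an orthogonal cosine dictionary rather than the Gabor dictionary of the study, so the two planted atoms are recovered exactly; with a redundant Gabor dictionary the decomposition is only approximate and the residual shrinks gradually.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: repeatedly pick the dictionary atom with the largest
    inner product with the residual and subtract its projection.
    Columns of `dictionary` are assumed to be unit-norm."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        j = int(np.abs(corr).argmax())
        picks.append((j, corr[j]))
        residual -= corr[j] * dictionary[:, j]
    return picks, residual

# Toy dictionary: unit-norm cosines at 32 integer frequencies.
n = 256
t = np.arange(n)
freqs = np.arange(1, 33)
D = np.cos(2 * np.pi * np.outer(t, freqs) / n)
D /= np.linalg.norm(D, axis=0)

# Signal built from two atoms; MP recovers both and drives the residual to ~0.
x = 3.0 * D[:, 4] + 1.5 * D[:, 20]
picks, res = matching_pursuit(x, D, 2)
```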
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were: KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. 
Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
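The evaluation protocol above, k-fold cross validation against a ground-truth database, can be sketched generically. The nearest-centroid classifier and the three-class toy data below are hypothetical stand-ins for the algorithms and MSI features of the study.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k near-equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Train a nearest-centroid classifier on the fold's training split
    and return its accuracy on the held-out split."""
    classes = np.unique(ytr)
    cents = np.stack([Xtr[ytr == c].mean(0) for c in classes])
    d = ((Xte[:, None] - cents[None]) ** 2).sum(-1)
    return (classes[d.argmin(1)] == yte).mean()

def cross_validate(X, y, k=10, seed=0):
    folds = kfold_indices(len(X), k, seed)
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(nearest_centroid_acc(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(accs))

# Toy "tissue" data: three well-separated classes in a 2-D feature space.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(m, 0.2, (30, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
mean_acc = cross_validate(X, y, k=10)
```

Repeating the whole procedure many times with different shuffles, as the study does with 100 runs per algorithm, simply means calling `cross_validate` with different seeds and averaging.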
A multiple-point spatially weighted k-NN method for object-based classification
NASA Astrophysics Data System (ADS)
Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.
2016-10-01
Object-based classification, commonly referred to as object-based image analysis (OBIA), is now widely regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification, and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
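The core idea of weighting k-NN votes by spatial probabilities can be sketched as follows. The per-class prior here is a hypothetical stand-in for the multiple-point probabilities that MPk-NN derives from a training image; the 1-D feature data are likewise illustrative.

```python
import numpy as np

def weighted_knn(X_train, y_train, x, k, prior):
    """k-NN classification where each class's vote count is multiplied
    by a per-class spatial prior before the winning class is chosen."""
    d = ((X_train - x) ** 2).sum(1)
    nn = np.argsort(d)[:k]                       # indices of k nearest samples
    classes = np.unique(y_train)
    votes = np.array([(y_train[nn] == c).sum() for c in classes], float)
    score = votes * np.array([prior[c] for c in classes])
    return int(classes[score.argmax()])

# Toy example: the query sits between two classes; a spatial prior
# favouring class 1 tips the otherwise majority-class-0 vote.
Xtr = np.array([[0.0], [0.1], [0.2], [1.0], [1.1]])
ytr = np.array([0, 0, 0, 1, 1])
x = np.array([0.55])
flat = weighted_knn(Xtr, ytr, x, k=5, prior={0: 0.5, 1: 0.5})
spatial = weighted_knn(Xtr, ytr, x, k=5, prior={0: 0.2, 1: 0.8})
```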
Contextual classification of multispectral image data: Approximate algorithm
NASA Technical Reports Server (NTRS)
Tilton, J. C. (Principal Investigator)
1980-01-01
An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented which is computationally less intensive. Classifications that are nearly as accurate are produced.
Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying
2013-01-01
High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (named a multi-neuron image) of a high-content screen. Therefore, it is desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors: a neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatments using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated by different drug concentrations using a support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule depolymerizing drug which has been shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all 13 features of the NFD exhibit statistically significant differences with respect to changes in various levels of nocodazole drug concentration (NDC), and the phenotypic changes of neurites were consistent with the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area, were able to achieve an independent test accuracy of 90.28% for the six-dosage classification problem. 
This NFD module and neuron image datasets are provided as a freely downloadable MatLab project at http://iclab.life.nctu.edu.tw/HCS-Neurons. Few automatic methods focus on analyzing multi-neuron images collected from HCS used in drug discovery. We provided an automatic HCS-based method for generating accurate classifiers to classify neurons based on their phenotypic changes upon drug treatments. The proposed HCS-neurons method is helpful in identifying and classifying chemical or biological molecules that alter the morphology of a group of neurons in HCS.
Multi-template tensor-based morphometry: Application to analysis of Alzheimer's disease
Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka
2012-01-01
In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. PMID:21419228
Wan, Shixiang; Duan, Yucong; Zou, Quan
2017-09-01
Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply that there are vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but never employed. For solving these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied for multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, Yang; Yin, Xiu-Wen; Wang, Zi-Yu; Li, Xue-Lian; Pan, Meng; Li, Yan-Ping; Dong, Ling
2017-11-01
One of the advantages of the biopharmaceutics classification system of Chinese materia medica (CMMBCS) is expanding the classification research level from a single ingredient to the multiple components of a Chinese herb, and from multi-component research to holistic research of the Chinese materia medica. In the present paper, the alkaloids of extract of huanglian were chosen as the main research object to explore their change rules in solubility and intestinal permeability for single components and multiple components, and to determine the biopharmaceutical classification of extract of huanglian at the holistic level. The typical shake-flask method and HPLC were used to measure the solubility of each alkaloid from extract of huanglian. The quantitative study of alkaloids in intestinal absorption was carried out in a single-pass intestinal perfusion experiment, while the permeability coefficient of extract of huanglian was calculated by a self-defined weight coefficient method. Copyright© by the Chinese Pharmaceutical Association.
Classification of deadlift biomechanics with wearable inertial measurement units.
O'Reilly, Martin A; Whelan, Darragh F; Ward, Tomas E; Delahunt, Eamonn; Caulfield, Brian M
2017-06-14
The deadlift is a compound full-body exercise that is fundamental in resistance training, rehabilitation programs and powerlifting competitions. Accurate quantification of deadlift biomechanics is important to reduce the risk of injury and ensure training and rehabilitation goals are achieved. This study sought to develop and evaluate deadlift exercise technique classification systems utilising Inertial Measurement Units (IMUs), recording at 51.2 Hz, worn on the lumbar spine, both thighs and both shanks. It also sought to compare classification quality when these IMUs are worn in combination and in isolation. Two datasets of IMU deadlift data were collected. Eighty participants first completed deadlifts with acceptable technique and 5 distinct, deliberately induced deviations from acceptable form. Fifty-five members of this group also completed a fatiguing protocol (3-Repetition Maximum test) to enable the collection of natural deadlift deviations. For both datasets, universal and personalised random-forests classifiers were developed and evaluated. Personalised classifiers outperformed universal classifiers in accuracy, sensitivity and specificity in the binary classification of acceptable or aberrant technique and in the multi-label classification of specific deadlift deviations. Whilst recent research has favoured universal classifiers due to the reduced overhead in setting them up for new system users, this work demonstrates that such techniques may not be appropriate for classifying deadlift technique due to the poor accuracy achieved. However, personalised classifiers perform very well in assessing deadlift technique, even when using data derived from a single lumbar-worn IMU to detect specific naturally occurring technique mistakes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hierarchical vs non-hierarchical audio indexation and classification for video genres
NASA Astrophysics Data System (ADS)
Dammak, Nouha; BenAyed, Yassine
2018-04-01
In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at block level, which has the prominent asset of capturing local temporal information. The main contribution of our study is to show the effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video span, yielding a classification accuracy of over 99%.
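The hierarchical categorization structure, a top-level genre classifier followed by a per-genre sub-classifier, can be sketched with a simple nearest-centroid stand-in. The study itself uses SVMs on MFCC features; the classifier choice and the 2-D toy data here are hypothetical.

```python
import numpy as np

def centroid_classifier(X, y):
    """Return a predict(x) closure for a nearest-centroid classifier."""
    classes = np.unique(y)
    cents = {c: X[y == c].mean(0) for c in classes}
    def predict(x):
        return min(cents, key=lambda c: ((x - cents[c]) ** 2).sum())
    return predict

# Toy features: top-level genres 0/1/2, where genre 2 has two sub-genres.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.1, (20, 2)) for m in (0.0, 1.0, 2.0, 3.0)])
genre = np.repeat([0, 1, 2, 2], 20)
sub = np.repeat([0, 0, 0, 1], 20)        # sub-genre label within genre 2

top = centroid_classifier(X, genre)
# Second-level classifier trained only on samples of genre 2.
mask = genre == 2
second = centroid_classifier(X[mask], sub[mask])

x = X[70]                                 # a genre-2, sub-genre-1 sample
g = top(x)
s = second(x) if g == 2 else None
```

The same cascade generalises to any depth: each node of the hierarchy only ever sees the samples its parent routed to it.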
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θρ_s ~ O(1.0)) and electron-scale (k_θρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion to electron mass ratio ((m_i/m_e)^(1/2) = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, which are indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
An Assessment of Worldview-2 Imagery for the Classification Of a Mixed Deciduous Forest
NASA Astrophysics Data System (ADS)
Carter, Nahid
Remote sensing provides a variety of methods for classifying forest communities and can be a valuable tool for the impact assessment of invasive species. The emerald ash borer (Agrilus planipennis) infestation of ash trees (Fraxinus) in the United States has resulted in the mortality of large stands of ash throughout the Northeast. This study assessed the suitability of multi-temporal Worldview-2 multispectral satellite imagery for classifying a mixed deciduous forest in Upstate New York. Training sites were collected using a Global Positioning System (GPS) receiver, with each training site consisting of a single tree of a corresponding class. Six classes were collected: Ash, Maple, Oak, Beech, Evergreen, and Other. Three different classifications were investigated on four data sets: a six-class classification (6C), a two-class classification consisting of ash and all other classes combined (2C), and a five-class classification that merged the ash and maple classes (5C). The four data sets included Worldview-2 multispectral data collected in June 2010 (J-WV2) and September 2010 (S-WV2), a layer-stacked data set using J-WV2 and S-WV2 (LS-WV2), and a reduced data set (RD-WV2). RD-WV2 was created through statistical analysis of the processed and unprocessed imagery, which reduced the dimensionality of the data and identified key bands. Overall accuracy varied considerably depending upon the classification type, but results indicated that ash was confused with maple in a majority of the classifications. Ash was most accurately identified using the 2C classification and RD-WV2 data set (81.48%). A combination of the ash and maple classes yielded an accuracy of 89.41%. Future work should focus on separating the ash and maple classes by using data sources such as hyperspectral imagery, LiDAR, or extensive forest surveys.
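The accuracy gain from merging the ash and maple classes follows directly from how a confusion between two classes is scored. The sketch below (toy labels, not the study's data) shows the mechanism: merging removes exactly the ash/maple confusions from the error count.

```python
# Accuracy with an optional class merge: classes in `merge` are treated
# as a single class, so confusions between them no longer count as errors.

def accuracy(truth, pred, merge=None):
    def canon(c):
        return "merged" if merge and c in merge else c
    hits = sum(canon(t) == canon(p) for t, p in zip(truth, pred))
    return hits / len(truth)

truth = ["ash", "ash", "ash", "maple", "oak", "beech"]
pred  = ["maple", "ash", "ash", "ash", "oak", "beech"]

acc_6c = accuracy(truth, pred)                          # ash<->maple errors count
acc_5c = accuracy(truth, pred, merge={"ash", "maple"})  # 5-class scheme: they do not
```

This mirrors the study's jump from 81.48% (ash alone) to 89.41% (ash + maple combined), at the cost of no longer separating the two species.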
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by thresholding. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
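The transform/threshold/reconstruct cycle at the heart of the denoising step can be shown with a single-level 1D Haar transform (a minimal sketch: the paper's chain applies a separable 2D DWT recursively, but the principle is the same).

```python
import numpy as np

# Single-level Haar wavelet denoising: transform, zero small detail
# coefficients (hard threshold), inverse-transform.

def haar_denoise(x, threshold):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass coefficients
    detail[np.abs(detail) < threshold] = 0.0      # hard threshold
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

noisy = np.array([1.0, 1.1, 4.0, 4.2, 9.0, 8.9, 0.0, 0.1])
clean = haar_denoise(noisy, threshold=0.5)
```

With the threshold set to zero the reconstruction is exact; with a positive threshold, small high-frequency fluctuations (noise) are suppressed while the large-scale structure survives.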
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environmental monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. First, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities and several statistical and physically based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function maps non-spherical and non-linear patterns in the data structure so that they can be clustered easily. In addition, to enhance the results, a Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (a roughly 12% improvement in general). In addition, when the optimization technique is used, a greater improvement is observed in crop classification, about 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
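The kernelized hard C-means assignment can be written entirely in terms of kernel values, which is why non-spherical clusters become separable. The sketch below (an assumption-level illustration with an RBF kernel and toy points, not the authors' PSO-tuned implementation) performs one assignment step: the squared feature-space distance from sample i to the mean of cluster c is K_ii - 2·mean_j(K_ij) + mean_jl(K_jl) over members j, l of c.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian (RBF) kernel matrix between all pairs of rows of X
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_cmeans_assign(K, labels, n_clusters):
    """One assignment step of kernelized hard C-means."""
    n = K.shape[0]
    dist = np.zeros((n, n_clusters))
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # feature-space distance to the cluster mean, from kernel values only
        dist[:, c] = (np.diag(K)
                      - 2.0 * K[:, idx].mean(axis=1)
                      + K[np.ix_(idx, idx)].mean())
    return dist.argmin(axis=1)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
K = rbf_kernel(X, gamma=1.0)
labels = kernel_cmeans_assign(K, np.array([0, 0, 0, 1]), n_clusters=2)
```

Even from a poor initial labelling, one kernel-space assignment step recovers the two well-separated groups; in the paper, PSO additionally tunes gamma, the cluster centers, and the feature subset.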
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.
2014-03-01
Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei regions of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions correspond to a more homogeneous appearance; thus, suitable texture features can be extracted from these regions to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture due to their multi-resolution modeling capability. The textural characterization is carried out primarily in the non-nuclei regions after the nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. The k-nearest neighbor method was then used to classify the features into the glioblastoma or metastasis class. An experiment on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies as high as 89.7% for glioblastoma, 87.5% for metastasis and 88.7% overall. Further studies are underway to incorporate nuclei region features into classification on an expanded dataset, as well as expanding the classification to more types of cancers.
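The final classification step, k-nearest neighbors, is simple enough to sketch in full (illustrative only: the feature vectors and labels below are toy placeholders, not the study's texture features).

```python
from collections import Counter

# Minimal k-nearest-neighbour classifier: find the k training samples
# closest to the query and take a majority vote over their labels.

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature_vector."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [([0.1, 0.2], "glioblastoma"), ([0.2, 0.1], "glioblastoma"),
         ([0.9, 0.8], "metastasis"), ([0.8, 0.9], "metastasis"),
         ([0.15, 0.15], "glioblastoma")]
label = knn_classify(train, [0.12, 0.18], k=3)
```

In the study the feature vectors would be Discrete Wavelet Frame texture descriptors computed over the non-nuclei regions.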
NASA Astrophysics Data System (ADS)
Sah, Shagan
An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integration of results from two different approaches, namely coarse spatial resolution multi-temporal and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on fusion of these two remote sensing approaches. The presented methodology thus involves integration of classification results from two different remote sensing modalities in order to improve classification accuracy. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics. 
On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
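The a posteriori probability fusion described above can be sketched as a normalised product of the per-class posteriors from the two classifiers (a simplified independence-style combination; the posterior values below are hypothetical, and the study's exact fusion rule may differ).

```python
# Fuse per-class posteriors from the MODIS time-series classifier and the
# RapidEye classifier, then pick the maximum-probability class.

CLASSES = ["forest", "urban", "water", "vegetation"]

def fuse(post_a, post_b):
    """Combine two posterior dicts by normalised product."""
    joint = {c: post_a[c] * post_b[c] for c in CLASSES}
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

modis = {"forest": 0.5, "urban": 0.2, "water": 0.1, "vegetation": 0.2}
rapideye = {"forest": 0.4, "urban": 0.4, "water": 0.1, "vegetation": 0.1}
fused = fuse(modis, rapideye)
best = max(fused, key=fused.get)
```

The product form rewards classes on which both modalities agree, which is how the fusion corrects single-source errors such as cloud cover in one modality.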
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the emotional speech spectrogram at multiple resolutions should provide a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real-life conditions, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification power for real-life emotion recognition in speech.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
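The Pareto-optimality criterion that drives the multi-objective optimisation above is the non-dominated filter a sorting GA applies each generation: a candidate survives if no other candidate is at least as good on both objectives (MRE and MCE) and strictly better on one. The sketch below uses toy objective values, not the study's.

```python
# Non-dominated (Pareto) filtering over two minimisation objectives.

def dominates(a, b):
    """a dominates b if a <= b in all objectives and < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (MRE, MCE) pairs for four candidate auto-encoders
candidates = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.25), (0.25, 0.35)]
front = pareto_front(candidates)
```

The last candidate is dominated (worse on both objectives than the first), so it is discarded; the remaining three trade reconstruction error against classification error and all survive, which is exactly the solution set the NSGA-style algorithm maintains.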
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulties of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with a smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since we need to balance running-time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.
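The hash-table idea can be illustrated in miniature (a simplified sketch of the concept only: the real G-hash hashes local graph-node descriptors and combines them with a kernel; the quantisation key and compound descriptors below are hypothetical stand-ins): items whose descriptors collide in the table are candidate neighbours, so a query compares against one bucket instead of the whole database.

```python
from collections import defaultdict

def key(vec, resolution=1.0):
    """Quantise a descriptor vector into a hashable bucket key."""
    return tuple(int(v // resolution) for v in vec)

class HashIndex:
    """Bucketed index: only same-bucket items are scored against a query."""
    def __init__(self, resolution=1.0):
        self.resolution = resolution
        self.table = defaultdict(list)

    def add(self, name, vec):
        self.table[key(vec, self.resolution)].append((name, vec))

    def candidates(self, vec):
        return [name for name, _ in self.table[key(vec, self.resolution)]]

index = HashIndex()
index.add("aspirin", [1.2, 3.7])      # toy 2-D descriptors, not real chemistry
index.add("ibuprofen", [1.4, 3.9])
index.add("caffeine", [8.0, 0.5])
hits = index.candidates([1.3, 3.8])
```

A kernel similarity (in G-hash, a graph kernel) would then rank only the returned candidates, which is what makes the index scale to large databases.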
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for the classification of drug-related webpages. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than those of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
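The core of multi-kernel learning is combining one kernel per modality (here text and image) into a single kernel with non-negative weights. In the sketch below the weights are fixed by hand for illustration; MKL would learn them jointly with the classifier, and the toy kernel matrices are assumptions, not the paper's data.

```python
import numpy as np

# Convex combination of per-modality kernel matrices into one kernel.

def combine_kernels(kernels, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # keep the combination convex
    return sum(w * K for w, K in zip(weights, kernels))

K_text = np.array([[1.0, 0.8], [0.8, 1.0]])    # text-BOW kernel (toy values)
K_image = np.array([[1.0, 0.2], [0.2, 1.0]])   # image-BOW kernel (toy values)
K = combine_kernels([K_text, K_image], [0.75, 0.25])
```

The combined K is then fed to a single kernel classifier such as an SVM; learning the weights lets the model decide how much each modality contributes.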
Relativistic MR–MP Energy Levels for L-shell Ions of Silicon
Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter
2018-01-15
Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.
VizieR Online Data Catalog: Relativistic MR-MP energy levels for Si (Santana+, 2018)
NASA Astrophysics Data System (ADS)
Santana, J. A.; Lopez-Dauphin, N. A.; Beiersdorfer, P.
2018-03-01
Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements. (1 data file).
Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs
Abdullah, Bassem A; Younis, Akmal A; John, Nigel M
2012-01-01
In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in regions of MS lesions and blocks in non-lesion regions, mainly based on textural features with the aid of other features. The classification is done on each of the axial, sagittal and coronal sectional brain views independently, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets. The results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI. PMID:22741026
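One plausible reading of the multi-sectional aggregation step (an assumption; the paper does not spell out its exact rule, and the masks below are toys) is a per-voxel majority vote over the three independent view-specific segmentations.

```python
import numpy as np

# Combine per-voxel lesion masks from the three anatomical views:
# a voxel is kept only if at least `min_votes` of the three view-specific
# SVM segmentations marked it as lesion.

def aggregate_views(axial, sagittal, coronal, min_votes=2):
    votes = axial.astype(int) + sagittal.astype(int) + coronal.astype(int)
    return votes >= min_votes

axial    = np.array([1, 1, 0, 1], dtype=bool)
sagittal = np.array([1, 0, 0, 1], dtype=bool)
coronal  = np.array([0, 1, 0, 1], dtype=bool)
mask = aggregate_views(axial, sagittal, coronal)
```

Requiring agreement between views is what lets the aggregation "verify" a detection and suppress single-view false positives.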
Analysis and application of classification methods of complex carbonate reservoirs
NASA Astrophysics Data System (ADS)
Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei
2018-06-01
There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Owing to variations in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist, and the complex lithologies and pore types, together with the impact of microfractures, make the pore structure very complicated. It is therefore difficult to calculate reservoir parameters accurately. In order to evaluate carbonate reservoirs accurately, classification methods based on capillary pressure curves and on flow units are analyzed, building on pore structure evaluation. Although carbonate reservoirs can be classified using capillary pressure curves, the resulting relationship between porosity and permeability is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability can be established after classification, so carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability in limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Characterizing pore structures and classifying reservoir types are therefore essential to the accurate evaluation of complex carbonate reservoirs in the Middle East.
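Flow-unit classification is commonly based on the Flow Zone Indicator (FZI). The sketch below uses the standard Amaefule-style definitions as an assumption, since the abstract does not give the exact formulation used: RQI = 0.0314·sqrt(k/phi), phi_z = phi/(1 - phi), FZI = RQI/phi_z, with permeability k in mD and porosity phi as a fraction; samples in the same FZI band belong to the same flow unit.

```python
import math

# Flow Zone Indicator (standard definition, assumed, not taken from the
# paper): samples with similar FZI share a porosity-permeability trend.

def fzi(k_md, phi):
    rqi = 0.0314 * math.sqrt(k_md / phi)   # reservoir quality index
    phi_z = phi / (1.0 - phi)              # normalised porosity
    return rqi / phi_z

# two samples with equal porosity but very different permeability fall
# into different flow units
fzi_a = fzi(k_md=100.0, phi=0.20)
fzi_b = fzi(k_md=1.0, phi=0.20)
```

Fitting porosity-permeability relationships separately within each flow unit is what yields the error reductions (e.g. 15.13 to 7.44 mD) reported above.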
NASA Astrophysics Data System (ADS)
Manzi, Lucio; Barrow, Andrew S.; Scott, Daniel; Layfield, Robert; Wright, Timothy G.; Moses, John E.; Oldham, Neil J.
2016-11-01
Specific interactions between proteins and their binding partners are fundamental to life processes. The ability to detect protein complexes, and map their sites of binding, is crucial to understanding basic biology at the molecular level. Methods that employ sensitive analytical techniques such as mass spectrometry have the potential to provide valuable insights with very little material and on short time scales. Here we present a differential protein footprinting technique employing an efficient photo-activated probe for use with mass spectrometry. Using this methodology the location of a carbohydrate substrate was accurately mapped to the binding cleft of lysozyme, and in a more complex example, the interactions between a 100 kDa, multi-domain deubiquitinating enzyme, USP5 and a diubiquitin substrate were located to different functional domains. The much improved properties of this probe make carbene footprinting a viable method for rapid and accurate identification of protein binding sites utilizing benign, near-UV photoactivation.
Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza
2018-06-01
There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns, and still a difficult problem, in the automated inspection and sorting of fruits. In this research, we proposed the multi-scale energy distribution (MSED) for object shape description, and explored the relationship between an object's shape and the energy distribution of its boundary at multiple scales for shape extraction. MSED captures not only the main energy components, which represent primary shape information at the lower scales, but also the subordinate energy components, which represent local shape information at the higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We addressed the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method of selecting a start point through maximal expectation was given, which overcame the inconvenience of traditional methods and provided rotation invariance. The experiments distinguished clearly normal citrus from seriously abnormal fruit, with a classification rate superior to 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
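The multi-scale energy distribution idea can be sketched as follows (an illustration under stated assumptions: a Haar wavelet and a centroid-distance boundary signal are used here, whereas the paper's exact wavelet and normalisation are not reproduced): the resampled boundary signal is decomposed level by level, and the energy of the detail coefficients at each scale forms the shape descriptor.

```python
import numpy as np

# Multi-scale energy of a boundary signal via a Haar wavelet cascade:
# at each level, split into approximation and detail, record the detail
# energy, and recurse on the approximation.

def msed(signal, levels=3):
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))  # energy at this scale
        x = approx
    return energies

# 256 resampled boundary points of a nearly circular (smooth) outline:
# its energy concentrates at coarse scales, unlike a jagged outline.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
boundary = 1.0 + 0.1 * np.sin(2.0 * t)
energies = msed(boundary, levels=3)
```

The resulting energy vector is the feature fed to the BP neural network; smooth (normal) and irregular (abnormal) fruit outlines distribute their boundary energy across scales differently.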
Ha, Unsoo; Lee, Yongsu; Kim, Hyunki; Roh, Taehwan; Bae, Joonsung; Kim, Changhyeon; Yoo, Hoi-Jun
2015-12-01
A multimodal mental management system in the shape of a wearable headband and earplugs is proposed to monitor electroencephalography (EEG), hemoencephalography (HEG) and heart rate variability (HRV) for accurate mental health monitoring. It enables simultaneous transcranial electrical stimulation (tES) together with real-time monitoring. The total weight of the proposed system is less than 200 g. The multi-loop low-noise amplifier (MLLNA) achieves over 130 dB CMRR for EEG sensing, and the capacitive correlated-double-sampling transimpedance amplifier (CCTIA) has low-noise characteristics for HEG and HRV sensing. Measured signals from the three physiological domains (neural, vascular and autonomic) are combined with canonical correlation analysis (CCA) and the temporal kernel canonical correlation analysis (tkCCA) algorithm to find the neural-vascular-autonomic coupling. Multimodal monitoring supports highly accurate classification, with a maximum improvement of 19%. The stimulator is designed to be reconfigurable for multi-channel stimulation, after-effects maximization monitoring and sympathetic nerve disorder monitoring. The 3.37 × 2.25 mm(2) chip has a 2-channel EEG sensor front-end, a 2-channel NIRS sensor front-end, an NIRS current driver to drive a dual-wavelength VCSEL and a 6-b DAC current source for tES mode. It dissipates 24 mW with 2 mA stimulation current and 5 mA NIRS driver current.
Hu, Tangao; Liu, Jiahong; Zheng, Gang; Li, Yao; Xie, Bin
2018-05-09
Accurate and timely information describing urban wetland resources and their changes over time, especially in rapidly urbanizing areas, is becoming more important. We applied an object-based image analysis and nearest neighbour classifier to map and monitor changes in land use/cover using multi-temporal high spatial resolution satellite imagery in an urban wetland area (Hangzhou Xixi Wetland) from 2000, 2005, 2007, 2009 and 2013. The overall eight-class classification accuracies averaged 84.47% for the five years. The maps showed that between 2000 and 2013 the amount of non-wetland (urban) area increased by approximately 100%. Herbaceous (32.22%), forest (29.57%) and pond (23.85%) are the main land-cover types that changed to non-wetland, followed by cropland (6.97%), marsh (4.04%) and river (3.35%). In addition, the maps of change patterns showed that urban wetland loss is mainly distributed west and southeast of the study area due to real estate development, and the greatest loss of urban wetlands occurred from 2007 to 2013. The results demonstrate the advantages of using multi-temporal high spatial resolution satellite imagery to provide an accurate, economical means to map and analyse changes in land use/cover over time and the ability to use the results as inputs to urban wetland management and policy decisions.
A method for automatic matching of multi-timepoint findings for enhanced clinical workflow
NASA Astrophysics Data System (ADS)
Raghupathi, Laks; Dinesh, MS; Devarakota, Pandu R.; Valadez, Gerardo Hermosillo; Wolf, Matthias
2013-03-01
Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Often, lesion growth assessment during follow-up is used to distinguish between benign and malignant lesions. Thus correspondences need to be found for lesions localized at each time point. Manually matching the radiological findings can be time-consuming as well as tedious due to possible differences in orientation and position between scans. Also, the complicated nature of the disease makes physicians rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to change in organ volume and to subpar or no registration, and that can be done with very little computation. Traditional matching methods rely mostly on accurate image registration and on applying the resulting deformation map to the findings' coordinates. This has disadvantages when accurate registration is time-consuming or may not be possible due to vast organ volume differences between scans. Our novel matching method applies supervised learning by taking advantage of the underlying CAD features that are already present and treating matching as a classification problem. In addition, the matching can be done extremely fast and at reasonable accuracy even when image registration fails for some reason. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy above 90% with negligible false positives on a variety of registration scenarios.
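The idea of matching findings without registration can be illustrated by a greedy feature-space matcher; this is a simplified stand-in for the paper's trained classifier, with hypothetical feature vectors:

```python
def match_findings(baseline, followup):
    """Greedy one-to-one matching of findings across two time points by
    feature-space distance (a toy stand-in for a learned pairwise classifier)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Score every baseline/follow-up pair, cheapest first.
    pairs = sorted((dist(fa, fb), i, j)
                   for i, fa in enumerate(baseline)
                   for j, fb in enumerate(followup))
    used_b, used_f, matches = set(), set(), {}
    for _, i, j in pairs:
        if i not in used_b and j not in used_f:
            matches[i] = j
            used_b.add(i); used_f.add(j)
    return matches

# Toy feature vectors (x, y, diameter, a CAD descriptor) -- assumed values:
t0 = [(10.0, 20.0, 5.0, 1.2), (40.0, 42.0, 7.0, 2.5)]
t1 = [(41.0, 43.0, 7.5, 2.6), (11.0, 19.0, 5.5, 1.3)]
```

`match_findings(t0, t1)` pairs each baseline lesion with its closest follow-up counterpart even though their list order is permuted, which is the essence of registration-free matching.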
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
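The denoising step rests on shrinking wavelet detail coefficients toward zero; a sketch of the classical soft-thresholding rule follows (the paper's "improved" threshold variant is not specified in the abstract, so only the standard rule is shown):

```python
def soft_threshold(coeffs, t):
    """Classical soft thresholding: coefficients below t (in magnitude) are
    zeroed, larger ones are shrunk toward zero by t."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]
```

Applied to the detail coefficients of a noisy ECG's wavelet decomposition, small (mostly noise) coefficients vanish while large (signal) coefficients survive slightly attenuated.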
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-01-01
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multi-resolution algorithm is proposed that extends the expectation-maximization (EM) algorithm using scaled image versions obtained with Gaussian filtering and wavelet analysis. It is found to be less sensitive to noise and to yield more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 sets of CT images of the human brain and compared with other works. The segmentation results show that the proposed method achieves more promising results, and the results have been validated by doctors.
1980-12-05
classification procedures that are common in speech processing. The anesthesia level classification by EEG time series population screening problem example is in...formance. The use of the KL number type metric in NN rule classification, in a delete-one-subject's-EEG-at-a-time KL-NN and KL-kNN classification of the...17 individual labeled EEG sample population using KL-NN and KL-kNN rules. The results obtained are shown in Table 1. The entries in the table indicate
Mapping raised bogs with an iterative one-class classification approach
NASA Astrophysics Data System (ADS)
Mack, Benjamin; Roscher, Ribana; Stenzel, Stefanie; Feilhauer, Hannes; Schmidtlein, Sebastian; Waske, Björn
2016-10-01
Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as core classifier. In an iterative pre-classification step a large part of the pixels not belonging to the class of interest is classified. The remaining data are classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state-of-the-art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall, the proposed iterative OCC constitutes a feasible and useful modification of a regular one-class classifier.
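The iterative pre-classification idea (discard clearly negative pixels in rounds, then hand the remainder to a final classifier) can be sketched with a toy distance threshold standing in for the biased SVM; all data values are assumptions:

```python
def iterative_preclassify(pixels, positives, rounds=3, start=4.0):
    """Iteratively discard pixels that are clearly not the class of interest,
    tightening a distance-to-class threshold each round. A toy 1-D stand-in
    for the paper's biased-SVM pre-classification step."""
    mu = sum(positives) / len(positives)
    sd = (sum((p - mu) ** 2 for p in positives) / len(positives)) ** 0.5
    remaining = list(pixels)
    for r in range(rounds):
        cut = start * sd / (r + 1)          # shrinking rejection threshold
        remaining = [x for x in remaining if abs(x - mu) <= cut]
    return remaining
```

With a handful of positive training values, each round strips off pixels lying ever closer to the class envelope, so the expensive final classifier only sees the ambiguous remainder.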
2014-01-01
Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu
2014-06-05
Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients.
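The final combination step described above (majority voting over the class labels of the three pairwise classifiers) can be sketched directly; the tie-breaking convention here is an assumption, since with three pairwise labels two of them usually agree:

```python
from collections import Counter

def majority_vote(labels):
    """Combine class labels from the three pairwise classifiers
    (NORM/LBBB, NORM/RBBB, LBBB/RBBB). Ties fall back to the first
    label seen, an assumed convention not stated in the abstract."""
    return Counter(labels).most_common(1)[0][0]
```

For example, if the NORM-vs-LBBB and LBBB-vs-RBBB classifiers both output "LBBB", the beat is labeled LBBB regardless of the third classifier's vote.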
An online hybrid BCI system based on SSVEP and EMG
NASA Astrophysics Data System (ADS)
Lin, Ke; Cinetto, Andrea; Wang, Yijun; Chen, Xiaogang; Gao, Shangkai; Gao, Xiaorong
2016-04-01
Objective. A hybrid brain-computer interface (BCI) is a device combined with at least one other communication system that takes advantage of both parts to build a link between humans and machines. To increase the number of targets and the information transfer rate (ITR), electromyogram (EMG) and steady-state visual evoked potential (SSVEP) were combined to implement a hybrid BCI. A multi-choice selection method based on EMG was developed to enhance the system performance. Approach. A 60-target hybrid BCI speller was built in this study. A single trial was divided into two stages: a stimulation stage and an output selection stage. In the stimulation stage, SSVEP and EMG were used together. Every stimulus flickered at its given frequency to elicit SSVEP. All of the stimuli were divided equally into four sections sharing the same frequency set; within each section, the frequency of each stimulus was different. SSVEPs were used to discriminate targets in the same section. Different sections were classified using EMG signals from the forearm. Subjects were asked to make different numbers of fists according to the target section. Canonical correlation analysis (CCA) and mean filtering were used to classify SSVEP and EMG separately. In the output selection stage, the top two optimal choices were given. The first choice, with the highest classification probability, was the default output of the system. Subjects were required to make a fist to select the second choice only if the second choice was correct. Main results. The online results obtained from ten subjects showed that the mean classification accuracy and ITR were 81.0% and 83.6 bits min-1, respectively, using only the first-choice selection. The ITR of the hybrid system was significantly higher than the ITR of either of the two single modalities (EMG: 30.7 bits min-1, SSVEP: 60.2 bits min-1). After the addition of the second-choice selection and the correction task, the classification accuracy and ITR were enhanced to 85.8% and 90.9 bits min-1. Significance. These results suggest that the hybrid system proposed here is suitable for practical use.
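The CCA step for SSVEP frequency identification can be sketched in its standard formulation: correlate the multi-channel EEG segment with sine/cosine reference signals at each candidate stimulus frequency and pick the best. The data below are synthetic; this is the textbook method, not the authors' exact pipeline:

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_frequency(eeg, freqs, fs, harmonics=2):
    """Pick the stimulus frequency whose sine/cosine reference set is most
    correlated with the EEG segment (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, harmonics + 1) for fn in (np.sin, np.cos)])
        scores.append(cca_max_corr(eeg, refs))
    return freqs[int(np.argmax(scores))]

# Synthetic 2-channel EEG flickering at 10 Hz plus noise (assumed data):
rng = np.random.default_rng(0)
fs = 250
t = np.arange(int(fs * 2.0)) / fs
eeg = np.column_stack(
    [np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
     for _ in range(2)])
detected = ssvep_frequency(eeg, [8, 10, 12], fs)
```

The QR/SVD route computes canonical correlations without forming covariance inverses, which keeps the sketch numerically stable even for short segments.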
Aguiar Santos, Susana; Schlebusch, Thomas; Leonhardt, Steffen
2013-01-01
An accurate current source is one of the keys in the hardware of Electrical Impedance Tomography systems. Limitations appear mainly at higher frequencies and for non-simple resistive loads. In this paper, we simulate an improved Howland current source with a Cole-Cole load. Simulations comparing two different op-amps (THS4021 and OPA843) were performed from 1 kHz to 1 MHz. Results show that the THS4021 performed better than the OPA843. The current source with the THS4021 reaches an output impedance of 20 MΩ at 1 kHz and above 320 kΩ at 1 MHz, and it provides a constant and stable output current up to 4 mA over the complete frequency range for a Cole-Cole (resistive and capacitive) load.
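The Cole-Cole load used in such simulations follows the standard single-dispersion tissue model Z(f) = R∞ + (R0 − R∞) / (1 + (j·2πf·τ)^α); a sketch with illustrative parameter values (not taken from the paper):

```python
import cmath

def cole_cole(f, r0, rinf, tau, alpha):
    """Single-dispersion Cole-Cole tissue impedance:
    Z(f) = R_inf + (R_0 - R_inf) / (1 + (j*2*pi*f*tau)**alpha)."""
    jw = 1j * 2 * cmath.pi * f
    return rinf + (r0 - rinf) / (1 + (jw * tau) ** alpha)

# Illustrative parameters (assumed): R0 = 1 kOhm, Rinf = 100 Ohm,
# tau = 10 us, alpha = 0.8.
z_lo = cole_cole(1e3, r0=1000.0, rinf=100.0, tau=1e-5, alpha=0.8)
z_hi = cole_cole(1e6, r0=1000.0, rinf=100.0, tau=1e-5, alpha=0.8)
```

The magnitude falls from near R0 at 1 kHz toward Rinf at 1 MHz, which is exactly why a current source's output impedance must stay high across the whole band to keep the injected current constant.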
Border Lakes land-cover classification
Marvin Bauer; Brian Loeffelholz; Doug Shinneman
2009-01-01
This document contains metadata and description of land-cover classification of approximately 5.1 million acres of land bordering Minnesota, U.S.A. and Ontario, Canada. The classification focused on the separation and identification of specific forest-cover types. Some separation of the nonforest classes also was performed. The classification was derived from multi-...
Wu, Mingquan; Wang, Li
2018-01-01
Areas and spatial distribution information of paddy rice are important for managing food security, water use, and climate change. However, there are many difficulties in mapping paddy rice, especially mapping multi-season paddy rice in rainy regions, including differences in phenology, the influence of weather, and farmland fragmentation. To resolve these problems, a novel multi-season paddy rice mapping approach based on Sentinel-1A and Landsat-8 data is proposed. First, Sentinel-1A data were enhanced based on the fact that the backscattering coefficient of paddy rice varies according to its growth stage. Second, cropland information was enhanced based on the fact that the NDVI of cropland in winter is lower than that in the growing season. Then, paddy rice and cropland areas were extracted using a K-Means unsupervised classifier on the enhanced images. Third, to further improve the paddy rice classification accuracy, cropland information was used to optimize the distribution of paddy rice, based on the fact that paddy rice must be planted in cropland. Classification accuracy was validated against ground data from 25 field survey quadrats measuring 600 m × 600 m. The results show that the method effectively extracted multi-season paddy rice planting areas, with an adjusted early rice area of 1630.84 km2, an adjusted middle rice area of 556.21 km2, and an adjusted late rice area of 3138.37 km2. The overall accuracy was 98.10%, with a kappa coefficient of 0.94. PMID:29324647
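The K-Means step can be sketched with a tiny 1-D Lloyd's-algorithm implementation over backscatter values; the seeding scheme and data are illustrative assumptions, not the paper's configuration:

```python
def kmeans_1d(values, k, iters=20):
    """Lloyd's algorithm on 1-D values (a toy version of the K-Means step
    used to separate paddy rice from other cover in the enhanced imagery)."""
    # Spread initial centers across the sorted value range (assumed seeding).
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Toy backscatter values forming two obvious groups (assumed data):
backscatter = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
centers = kmeans_1d(backscatter, k=2)
```

On real imagery each pixel contributes a multi-date feature vector rather than a scalar, but the assign-then-recompute loop is unchanged.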
Improved head direction command classification using an optimised Bayesian neural network.
Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James
2006-01-01
Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of these individuals have limited or non-existent control from the neck downward, alternative hands-free input modalities have become very important for them to access assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it can provide a comfortable, reliable and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, Bayesian neural network classifiers can detect the head commands of wheelchair users accurately, irrespective of their levels of injury.
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which reduces the number of required wavelengths to a minimum and increases the selectivity of the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
Robust pattern decoding in shape-coded structured light
NASA Astrophysics Data System (ADS)
Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai
2017-09-01
Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e. the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for accurate classification of pattern elements. Beforehand, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also exhibits strong robustness to surface color and complex textures.
NASA Astrophysics Data System (ADS)
Roychowdhury, K.
2016-06-01
Landcover is the most easily detectable indicator of human interventions on land. Urban and peri-urban areas present a complex combination of landcover, which makes classification challenging. This paper assesses different methods of classifying landcover using dual-polarimetric Sentinel-1 data collected during the monsoon (July) and winter (December) months of 2015. Four broad landcover classes of Kolkata and its surrounding regions were identified: built-up areas, water bodies and wetlands, vegetation, and open spaces. Polarimetric analyses were conducted on Single Look Complex (SLC) data of the region, while ground range detected (GRD) data were used for spectral and spatial classification. Unsupervised classification by means of K-Means clustering used backscatter values and was able to identify homogeneous landcovers over the study area. The results produced an overall accuracy of less than 50% for both seasons. Higher classification accuracy (around 70%) was achieved by adding texture variables as inputs along with the backscatter values. However, the accuracy of classification increased significantly with polarimetric analyses. The overall accuracy was around 80% in Wishart H-A-Alpha unsupervised classification. The method was useful in identifying urban areas due to their double-bounce scattering and vegetated areas, which have more random scattering. The Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI) obtained from Landsat 8 data over the study area were used to verify the vegetation and urban classes. The study compares the accuracies of different methods of classifying landcover using medium-resolution SAR data in a complex urban area and suggests that polarimetric analyses produce the most accurate results for urban and suburban areas.
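The NDVI and NDBI used for verification are standard normalized band ratios; a sketch with assumed reflectance values (not taken from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: vegetation reflects strongly
    in NIR and absorbs red, so NDVI is high over dense vegetation."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: built-up surfaces reflect more
    in SWIR than NIR, so NDBI is positive over urban areas."""
    return (swir - nir) / (swir + nir)

# Illustrative Landsat 8 reflectance values (assumed):
veg = ndvi(nir=0.45, red=0.08)   # dense vegetation -> strongly positive
urb = ndbi(swir=0.30, nir=0.22)  # built-up surface -> positive
```

Thresholding these indices per pixel gives quick vegetation and built-up masks against which the SAR-derived classes can be checked.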
Höller, Yvonne; Bergmann, Jürgen; Thomschewski, Aljoscha; Kronbichler, Martin; Höller, Peter; Crone, Julia S.; Schmid, Elisabeth V.; Butz, Kevin; Nardone, Raffaele; Trinka, Eugen
2013-01-01
Current research aims at identifying voluntary brain activation in patients who are behaviorally diagnosed as unconscious but are able to perform commands by modulating their brain activity patterns. This involves machine learning techniques and feature extraction methods such as those applied in brain-computer interfaces. In this study, we try to answer the question of whether features and classification methods that show advantages in healthy participants are also accurate when applied to data of patients with disorders of consciousness. A sample of healthy participants (N = 22), patients in a minimally conscious state (MCS; N = 5), and patients with unresponsive wakefulness syndrome (UWS; N = 9) was examined with a motor imagery task which involved imagery of moving both hands and an instruction to hold both hands firm. We extracted a set of 20 features from the electroencephalogram and used linear discriminant analysis, k-nearest neighbor classification, and support vector machines (SVM) as classification methods. In healthy participants, the best classification accuracies were seen with coherences (mean = .79; range = .53-.94) and power spectra (mean = .69; range = .40-.85). The coherence patterns in healthy participants did not match the expectation of a centrally modulated µ-rhythm; instead, coherence involved mainly frontal regions. In healthy participants, the best classification tool was SVM. Five patients had at least one feature-classifier outcome with p < 0.05 (none of which were coherence or power spectra), though none remained significant after false-discovery-rate correction for multiple comparisons. The present work suggests the use of coherences in patients with disorders of consciousness because they show high reliability among healthy subjects and patient groups. However, feature extraction and classification is a challenging task in unresponsive patients because there is no ground truth to validate the results. PMID:24282545
The effect of recording and analysis bandwidth on acoustic identification of delphinid species.
Oswald, Julie N; Rankin, Shannon; Barlow, Jay
2004-11-01
Because many cetacean species produce characteristic calls that propagate well under water, acoustic techniques can be used to detect and identify them. The ability to identify cetaceans to species using acoustic methods varies and may be affected by recording and analysis bandwidth. To examine the effect of bandwidth on species identification, whistles were recorded from four delphinid species (Delphinus delphis, Stenella attenuata, S. coeruleoalba, and S. longirostris) in the eastern tropical Pacific ocean. Four spectrograms, each with a different upper frequency limit (20, 24, 30, and 40 kHz), were created for each whistle (n = 484). Eight variables (beginning, ending, minimum, and maximum frequency; duration; number of inflection points; number of steps; and presence/absence of harmonics) were measured from the fundamental frequency of each whistle. The whistle repertoires of all four species contained fundamental frequencies extending above 20 kHz. Overall correct classification using discriminant function analysis ranged from 30% for the 20-kHz upper frequency limit data to 37% for the 40-kHz upper frequency limit data. For the four species included in this study, an upper bandwidth limit of at least 24 kHz is required for an accurate representation of fundamental whistle contours.
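The discriminant function analysis can be approximated, for illustration only, by a nearest-centroid classifier over the measured whistle variables; the feature values below are toy numbers, not the study's measurements:

```python
def train_centroids(samples):
    """Mean feature vector per species: a simple stand-in for the paper's
    discriminant function analysis."""
    cents = {}
    for sp, feats in samples.items():
        n = len(feats)
        cents[sp] = [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]
    return cents

def classify(cents, feat):
    """Assign a whistle to the species with the nearest centroid."""
    return min(cents,
               key=lambda sp: sum((a - b) ** 2 for a, b in zip(cents[sp], feat)))

# Toy whistle features: (min kHz, max kHz, duration s, inflection points).
data = {"D. delphis":      [(6.0, 18.0, 0.6, 2), (7.0, 19.0, 0.7, 3)],
        "S. longirostris": [(9.0, 24.0, 0.4, 1), (10.0, 26.0, 0.5, 1)]}
c = train_centroids(data)
```

Note that maximum frequency is one of the variables: if the recording bandwidth clips at 20 kHz, the measured maxima of the two toy species collapse toward each other, which is exactly the degradation the study quantifies.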
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Kabiri, Keivan
2012-07-01
This paper describes an assessment of coral reef mapping using multi-sensor satellite images, namely Landsat ETM, SPOT and IKONOS images, for Tioman Island, Malaysia. The study area is known as one of the best islands in South East Asia for its unique collection of diversified coral reefs and hosts thousands of tourists every year. For coral reef identification, classification and analysis, Landsat ETM, SPOT and IKONOS images were collected, processed and classified using hierarchical classification schemes. First, a decision tree classification method was implemented to separate three main land cover classes, i.e. water, rural and vegetation, and then the maximum likelihood supervised classification method was used to classify these main classes. The accuracy of the classification result was evaluated with a separate test sample set, selected based on the fieldwork survey and visual interpretation of the IKONOS image. The types of ancillary data used are: (a) DGPS ground control points; (b) water quality parameters measured by a Hydrolab DS4a; (c) sea-bed substrate spectra measured by a Unispec; and (d) landcover observation photos along the Tioman Island coastal area. The overall accuracy of the final classification result was 92.25%, with a kappa coefficient of 0.8940. Key words: Coral reef, Multi-spectral Segmentation, Pixel-Based Classification, Decision Tree, Tioman Island
Wang, Kun-Ching
2015-01-01
The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis gives clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification gains for real-life emotional recognition in speech. PMID:25594590
Hodyna, Diana; Kovalishyn, Vasyl; Rogalsky, Sergiy; Blagodatnyi, Volodymyr; Petko, Kirill; Metelytsia, Larisa
2016-09-01
Predictive QSAR models for inhibitors of B. subtilis and Ps. aeruginosa among imidazolium-based ionic liquids were developed using literature data. The regression QSAR models were created through Artificial Neural Network and k-nearest neighbor procedures. The classification QSAR models were constructed using the WEKA-RF (random forest) method. The predictive ability of the models was tested by fivefold cross-validation, giving q(2) = 0.77-0.92 for regression models and accuracy of 83-88% for classification models. Twenty synthesized samples of 1,3-dialkylimidazolium ionic liquids with predicted levels of antimicrobial activity were evaluated. Among the asymmetric 1,3-dialkylimidazolium ionic liquids, only compounds containing at least one radical with an alkyl chain length of 12 carbon atoms showed high antibacterial activity. However, the activity of symmetric 1,3-dialkylimidazolium salts was found to have the opposite relationship with the length of the aliphatic radical, being maximal for compounds based on the 1,3-dioctylimidazolium cation. The experimental results suggest that classification QSAR models are more accurate for predicting the activity of new imidazolium-based ILs as potential antibacterials. © 2016 John Wiley & Sons A/S.
Within-brain classification for brain tumor segmentation.
Havaei, Mohammad; Larochelle, Hugo; Poulin, Philippe; Jodoin, Pierre-Marc
2016-05-01
In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem as a machine learning problem. This gives it an advantage over typical machine learning methods for this task, which generalize across brains and must therefore deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within-brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on minimal user interaction. We investigate how adding spatial feature coordinates (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN and random forests. This would only be possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. As a result of these experiments, we obtain an interactive method whose results, reported on the MICCAI-BRATS 2013 dataset, are the second most accurate compared to published methods, while using significantly less memory and processing power than most state-of-the-art methods.
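The effect of appending spatial coordinates to intensity features can be illustrated with a plain k-NN classifier on toy voxel data; everything below (the synthetic data, the feature scaling, the threshold) is a hypothetical sketch, not the paper's pipeline:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-NN: majority vote among the k nearest training samples."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[rows]).argmax() for rows in nearest])

# Toy "voxels": intensity alone is uninformative here, but the classes are
# spatially coherent, so appending (i, j, k) coordinates rescues k-NN.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 3))          # spatial (i, j, k) positions
labels = (coords.sum(axis=1) > 15).astype(int)      # spatially coherent classes
intensity = rng.normal(100.0, 5.0, size=(200, 1))   # same distribution for both classes
X = np.hstack([intensity / 100.0, coords])          # scaled intensity + coordinates
pred = knn_predict(X[:150], labels[:150], X[150:], k=5)
accuracy = (pred == labels[150:]).mean()
```

With intensity alone the classes are indistinguishable; with coordinates included, k-NN recovers the spatial decision boundary, which mirrors the within-brain setting where user-labeled voxels anchor the spatial prior.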
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao
2014-01-01
A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, but a water vapor channel is absent in VIIRS data. To overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. Analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground station LST data indicates a mean retrieval accuracy of −0.395 K and a standard deviation of 1.490 K in regions with vegetation and water cover. In addition, the retrieval results of the test data were compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products; 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% are within the range of ±1 to ±2 K. In conclusion, when the advantages of multiple sensors are fully exploited, more accurate land surface temperature retrievals can be achieved. PMID:25397919
Highly efficient classification and identification of human pathogenic bacteria by MALDI-TOF MS.
Hsieh, Sen-Yung; Tseng, Chiao-Li; Lee, Yun-Shien; Kuo, An-Jing; Sun, Chien-Feng; Lin, Yen-Hsiu; Chen, Jen-Kun
2008-02-01
Accurate and rapid identification of pathogenic microorganisms is of critical importance in disease treatment and public health. Conventional work flows are time-consuming, and procedures are multifaceted. MS can be an alternative but is limited by low efficiency for amino acid sequencing as well as low reproducibility for spectrum fingerprinting. We systematically analyzed the feasibility of applying MS for rapid and accurate bacterial identification. Directly applying bacterial colonies without further protein extraction to MALDI-TOF MS analysis revealed rich peak contents and high reproducibility. The MS spectra derived from 57 isolates comprising six human pathogenic bacterial species were analyzed using both unsupervised hierarchical clustering and supervised model construction via the Genetic Algorithm. Hierarchical clustering analysis categorized the spectra into six groups precisely corresponding to the six bacterial species. Precise classification was also maintained in an independently prepared set of bacteria even when the numbers of m/z values were reduced to six. In parallel, classification models were constructed via Genetic Algorithm analysis. A model containing 18 m/z values accurately classified independently prepared bacteria and identified those species originally not used for model construction. Moreover bacteria fewer than 10(4) cells and different species in bacterial mixtures were identified using the classification model approach. In conclusion, the application of MALDI-TOF MS in combination with a suitable model construction provides a highly accurate method for bacterial classification and identification. The approach can identify bacteria with low abundance even in mixed flora, suggesting that a rapid and accurate bacterial identification using MS techniques even before culture can be attained in the near future.
Object oriented classification of high resolution data for inventory of horticultural crops
NASA Astrophysics Data System (ADS)
Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.
2014-11-01
High resolution satellite images are associated with large variance and thus, per pixel classifiers often result in poor accuracy especially in delineation of horticultural crops. In this context, object oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump functions, resulting in a pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating a thematic layer in vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantation crops, viz. mango, banana, citrus, coffee and coconut, grown under different agro-climatic conditions. In general, a classification accuracy of about 75-80 per cent was achieved for these crops using object based classification alone, and the same was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with the object oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately, while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signature, wider spacing and poor stands of plantations. The results indicated the potential use of the object oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops.
The present methodology is applicable at local levels and future development is focused on up-scaling the methodology for generation of fruit and plantation crop maps at regional and national level which is important for creation of database for overall horticultural crop development.
A framework for global terrain classification using 250-m DEMs to predict geohazards
NASA Astrophysics Data System (ADS)
Iwahashi, J.; Matsuoka, M.; Yong, A.
2016-12-01
Geomorphology is key for identifying factors that control geohazards induced by landslides, liquefaction, and ground shaking. To systematically identify landforms that affect these hazards, Iwahashi and Pike (2007; IP07) introduced an automated terrain classification scheme using 1-km-scale Shuttle Radar Topography Mission (SRTM) digital elevation models (DEMs). The IP07 classes describe 16 categories of terrain types and were used as a proxy for predicting ground motion amplification (Yong et al., 2012; Seyhan et al., 2014; Stewart et al., 2014; Yong, 2016). These classes, however, were not sufficiently resolved because coarse-scaled SRTM DEMs were the basis for the categories (Yong, 2016). Thus, we develop a new framework consisting of more detailed polygonal global terrain classes to improve estimations of soil-type and material stiffness. We first prepare high resolution 250-m DEMs derived from the 2010 Global Multi-resolution Terrain Elevation Data (GMTED2010). As in IP07, we calculate three geometric signatures (slope, local convexity and surface texture) from the DEMs. We create additional polygons by using the same signatures and multi-resolution segmentation techniques on the GMTED2010. We consider two types of surface texture thresholds in different window sizes (3x3 and 13x13 pixels), in addition to slope and local convexity, to classify pixels within the DEM. Finally, we apply the k-means clustering and thresholding methods to the 250-m DEM and produce more detailed polygonal terrain classes. We compare the new terrain classification maps of Japan and California with geologic, aerial photography, and landslide distribution maps, and visually find good correspondence of key features. To predict ground motion amplification, we apply the Yong (2016) method for estimating VS30. The systematic classification of geomorphology has the potential to provide a better understanding of the susceptibility to geohazards, which is especially vital in populated areas.
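The clustering step described above can be sketched with a minimal Lloyd's k-means applied to the three geometric signatures (slope, local convexity, surface texture); the synthetic feature values below are illustrative, not GMTED2010-derived:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels, C

# Synthetic per-pixel geometric signatures: (slope, local convexity, surface texture)
rng = np.random.default_rng(1)
plains    = rng.normal([2.0, 0.4, 0.2], 0.1, size=(100, 3))
mountains = rng.normal([30.0, 0.6, 0.8], 0.1, size=(100, 3))
X = np.vstack([plains, mountains])
labels, centroids = kmeans(X, k=2)
```

In practice the signatures would be standardized first, since slope dominates the raw scale; the sketch keeps the raw values only to make the two terrain groups obvious.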
Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images
NASA Astrophysics Data System (ADS)
Hu, Zhihong; Medioni, Gerard G.; Hernandez, Matthias; Sadda, SriniVas R.
2014-03-01
Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of the advanced or late-stage of the AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would appear to be of important value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA including uni-focal and multi-focal patches in fundus autofluorescence (FAF) images. The image features include region-wise intensity (mean and variance) measures, gray level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. A voting binary iterative hole filling filter is then applied to fill in the small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by certified graders. Two-fold cross-validation is applied for the evaluation of the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions are 0.84 +/- 0.06 for one test and 0.83 +/- 0.07 for the other test and the area correlations between them are 0.99 (p < 0.05) and 0.94 (p < 0.05) respectively.
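The three co-occurrence measures named above (angular second moment, entropy, inverse difference moment) are computed from a normalized gray level co-occurrence matrix; a small numpy sketch on a toy 4-level image (the image and offset are illustrative):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for pixel offset (dy, dx), normalized."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_measures(P):
    """Angular second moment, entropy, inverse difference moment of a GLCM."""
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()
    ent = -(P[P > 0] * np.log2(P[P > 0])).sum()
    idm = (P / (1.0 + (i - j) ** 2)).sum()
    return asm, ent, idm

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
asm, ent, idm = haralick_measures(glcm(img, levels=4))
```

ASM is high for homogeneous textures, entropy high for disordered ones, and IDM high when most co-occurrences fall near the matrix diagonal (similar neighboring gray levels).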
Posture Detection Based on Smart Cushion for Wheelchair Users
Ma, Congcong; Li, Wenfeng; Gravina, Raffaele; Fortino, Giancarlo
2017-01-01
The postures of wheelchair users can reveal their sitting habit, mood, and even predict health risks such as pressure ulcers or lower back pain. Mining the hidden information of the postures can reveal their wellness and general health conditions. In this paper, a cushion-based posture recognition system is used to process pressure sensor signals for the detection of user’s posture in the wheelchair. The proposed posture detection method is composed of three main steps: data level classification for posture detection, backward selection of sensor configuration, and recognition results compared with previous literature. Five supervised classification techniques—Decision Tree (J48), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Naive Bayes, and k-Nearest Neighbor (k-NN)—are compared in terms of classification accuracy, precision, recall, and F-measure. Results indicate that the J48 classifier provides the highest accuracy compared to other techniques. The backward selection method was used to determine the best sensor deployment configuration of the wheelchair. Several kinds of pressure sensor deployments are compared and our new method of deployment is shown to better detect postures of the wheelchair users. Performance analysis also took into account the Body Mass Index (BMI), useful for evaluating the robustness of the method across individual physical differences. Results show that our proposed sensor deployment is effective, achieving 99.47% posture recognition accuracy. Our proposed method is very competitive for posture recognition and robust in comparison with other former research. Accurate posture detection represents a fundamental basic block to develop several applications, including fatigue estimation and activity level assessment. PMID:28353684
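The per-class precision, recall, and F-measure used to compare the five classifiers reduce to counts of true/false positives and negatives; a minimal sketch on hypothetical posture labels (not the cushion data):

```python
import numpy as np

def precision_recall_f(y_true, y_pred, positive=1):
    """Per-class precision, recall and F-measure from label vectors."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical labels: 1 = "leaning forward", 0 = any other posture.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
p, r, f = precision_recall_f(y_true, y_pred)
```

For the multi-class posture problem these would be computed once per posture class and then averaged (macro-averaging), which is the usual convention for reporting a single precision/recall figure.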
A rapid and robust gradient measurement technique using dynamic single-point imaging.
Jang, Hyungseok; McMillan, Alan B
2017-09-01
We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Rivas-Ubach, A.; Liu, Y.; Bianchi, T. S.; Tolic, N.; Jansson, C.; Paša-Tolić, L.
2017-12-01
The role of nutrients in organisms, especially primary producers, has been a topic of special interest in ecosystem research for understanding ecosystem structure and function. The majority of macro-elements in organisms, such as C, H, O, N and P, do not act as single elements but are components of organic compounds (lipids, peptides, carbohydrates, etc.), which are more directly related to the physiology of organisms and thus to ecosystem function. However, accurately deciphering the overall content of the main compound classes (lipids, proteins, carbohydrates, …) in organisms is still a major challenge. van Krevelen (vK) diagrams have been widely used to estimate the main compound categories present in environmental samples based on O:C vs H:C molecular ratios, but a stoichiometric classification based exclusively on O:C and H:C ratios is unreliable. Different compound classes show substantial overlap in O:C and H:C ratios, and other heteroatoms, such as N and P, should be considered to robustly distinguish the different classes. We propose a new compound classification for biological/environmental samples based on the C:H:O:N:P stoichiometric ratios of thousands of molecular formulas of characterized compounds from six main categories: lipids, peptides, amino-sugars, carbohydrates, nucleotides and phytochemical compounds (oxy-aromatic compounds). This new multidimensional stoichiometric compound constraints classification (MSCC) can be applied to data obtained with high resolution mass spectrometry (HRMS), allowing an accurate overview of the relative abundances of the main compound categories present in organismal samples. The MSCC has been optimized for plants, but it could also be applied to different organisms and serve as a strong starting point for further investigation of other complex environmental matrices (soils, aerosols, etc.).
The proposed MSCC advances environmental research, especially eco-metabolomics, ecophysiology and ecological stoichiometry studies, providing a new tool to understand the ecosystem structure and function at the molecular level.
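A van Krevelen-style classification by element ratios can be sketched as below; the regions and thresholds here are loose illustrative assumptions, not the MSCC boundaries derived in the study:

```python
import re

def ratios(formula):
    """O:C, H:C and N:C ratios from a simple molecular formula string.
    Assumes each element appears once and that the formula contains carbon."""
    counts = {el: int(n or 1) for el, n in re.findall(r'([A-Z][a-z]?)(\d*)', formula)}
    c = counts.get('C', 0)
    return counts.get('O', 0) / c, counts.get('H', 0) / c, counts.get('N', 0) / c

def vk_class(formula):
    """Illustrative van Krevelen-style regions -- NOT the paper's MSCC bounds."""
    oc, hc, nc = ratios(formula)
    if nc > 0 and hc >= 1.5:
        return 'peptide-like'
    if oc >= 0.8 and hc >= 1.5:
        return 'carbohydrate-like'
    if oc <= 0.3 and hc >= 1.5:
        return 'lipid-like'
    return 'other'

glucose = vk_class('C6H12O6')     # O:C = 1.0, H:C = 2.0
palmitic = vk_class('C16H32O2')   # O:C = 0.125, H:C = 2.0
```

The paper's point is visible even in this toy version: without the N:C dimension, a peptide and a lipid with similar O:C and H:C ratios would land in the same region.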
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing-detail and performance problems of colorization based on sparse representation, we propose a conceptual framework for colorizing gray-scale images, and within this framework a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, and colorizing gray fusion images and infrared images.
The Classification of Romanian High-Schools
ERIC Educational Resources Information Center
Ivan, Ion; Milodin, Daniel; Naie, Lucian
2006-01-01
The article tries to tackle the issue of high-schools classification from one city, district or from Romania. The classification criteria are presented. The National Database of Education is also presented and the application of criteria is illustrated. An algorithm for high-school multi-rang classification is proposed in order to build classes of…
Korczowski, L; Congedo, M; Jutten, C
2015-08-01
The classification of electroencephalographic (EEG) data recorded from multiple users simultaneously is an important challenge in the field of Brain-Computer Interface (BCI). In this paper we compare different approaches for the classification of single-trial Event-Related Potentials (ERP) on two subjects playing a collaborative BCI game. The minimum distance to mean (MDM) classifier in a Riemannian framework is extended to use the diversity of the inter-subject spatio-temporal statistics (MDM-hyper) or to merge multiple classifiers (MDM-multi). We show that both these classifiers significantly outperform the mean performance of the two users and analogous classifiers based on step-wise linear discriminant analysis. More importantly, the MDM-multi outperforms the performance of the best player within the pair.
Mapping air quality zones for coastal urban centers.
Freeman, Brian; Gharabaghi, Bahram; Thé, Jesse; Munshed, Mohammad; Faisal, Shah; Abdullah, Meshal; Al Aseed, Athari
2017-05-01
This study presents a new method that incorporates modern air dispersion models, allowing local terrain and land-sea breeze effects to be considered along with political and natural boundaries for more accurate mapping of air quality zones (AQZs) for coastal urban centers. The method uses local coastal wind patterns and key urban air pollution sources in each zone to more accurately calculate air pollutant concentration statistics. The new approach distributes virtual air pollution sources within each small grid cell of an area of interest and analyzes a puff dispersion model for a full year's worth of 1-hr prognostic weather data. The difference in wind patterns between coastal and inland areas creates significantly different skewness (S) and kurtosis (K) statistics for the annually averaged pollutant concentrations at ground-level receptor points in each grid cell. Plotting the S-K data highlights groupings of sources predominantly impacted by coastal winds versus inland winds. The application of the new method is demonstrated through a case study for the nation of Kuwait by developing new AQZs to support local air management programs. The zone boundaries established by the S-K method were validated in three ways: the MM5 and WRF prognostic meteorological data used in the air dispersion modeling were compared; a support vector machine classifier was trained to compare its results with the graphical classification method; and the final zones were compared with data collected from Earth observation satellites to confirm locations of high-exposure-risk areas. The resulting AQZs are more accurate and support efficient management strategies for air quality compliance targets affected by local coastal microclimates. A novel method to determine air quality zones in coastal urban areas is introduced using skewness (S) and kurtosis (K) statistics calculated from grid concentration results of air dispersion models.
The method identifies land-sea breeze effects that can be used to manage local air quality in areas of similar microclimates.
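The skewness and kurtosis statistics that drive the S-K grouping can be computed directly from a receptor's hourly concentration series; a sketch with synthetic coastal-like (skewed) and inland-like (near-symmetric) data, purely for illustration:

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and excess kurtosis of a concentration series."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    skew = ((x - m) ** 3).mean() / s2 ** 1.5
    kurt = ((x - m) ** 4).mean() / s2 ** 2 - 3.0
    return skew, kurt

# One year of hourly concentrations (8760 values) per receptor, synthetic:
rng = np.random.default_rng(0)
coastal = rng.lognormal(0.0, 1.0, 8760)   # heavy-tailed, as under recirculating sea breeze
inland  = rng.normal(5.0, 1.0, 8760)      # near-symmetric
s_c, k_c = skew_kurt(coastal)
s_i, k_i = skew_kurt(inland)
```

Plotting (S, K) pairs for all grid cells then separates the two wind regimes as distinct point clouds, which is the graphical classification the SVM was trained to reproduce.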
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had a mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
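The percent volume difference reported above (and the Dice overlap commonly paired with it) can be sketched for binary segmentation masks; the toy masks below are illustrative, not PKD data:

```python
import numpy as np

def volume_metrics(seg, ref):
    """Percent volume difference and Dice coefficient for binary 3D masks."""
    v_seg, v_ref = seg.sum(), ref.sum()
    pct_vol_diff = 100.0 * (v_seg - v_ref) / v_ref
    dice = 2.0 * np.logical_and(seg, ref).sum() / (v_seg + v_ref)
    return pct_vol_diff, dice

# Toy reference kidney mask and a slightly over-segmented prediction
ref = np.zeros((10, 10, 10), dtype=bool); ref[2:8, 2:8, 2:8] = True   # 216 voxels
seg = np.zeros((10, 10, 10), dtype=bool); seg[2:8, 2:8, 2:9] = True   # 252 voxels
pvd, dice = volume_metrics(seg, ref)
```

Percent volume difference measures the quantity of interest (TKV) directly, while Dice additionally penalizes spatial misplacement, so the two together give a fuller picture of segmentation quality.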
Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm
NASA Astrophysics Data System (ADS)
Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an
2018-03-01
Identification of P- and S-phase arrivals is the primary task in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed. This algorithm consists of P- and S-phase arrival pickers (P-picker and S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined by the waveform peak. Then a preliminary P-pick is identified using the AIC algorithm. Finally, the P-phase arrival window is narrowed based on the above P-pick, so that the P-phase arrival can be identified accurately by applying the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of the amplitude biquadratic time series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear enough to identify. In steps 3 and 4, the AIC extreme points are extracted, and the relationship between the local minima and the S-phase arrival is investigated. In step 5, the S-phase arrival is picked based on the maximum probability criterion. To evaluate the proposed algorithm, a P- and S-pick classification criterion is also established based on a source location numerical simulation. The field data tests show a considerable improvement of the multi-step AIC algorithm in comparison with manual picks and the original AIC algorithm. Furthermore, the technique performs well regardless of SNR: even in the poor-quality signal group in which the SNRs are below 5, the effective picking rates (corresponding location error <15 m) of P- and S-phase arrivals are still up to 80.9% and 76.4% respectively.
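The core AIC picking step applied within each window can be sketched as follows: the arrival is taken as the global minimum of an AIC curve built from the signal variance before and after each candidate point. This is the standard variance-based AIC picker, not the paper's full multi-step pipeline:

```python
import numpy as np

def aic_pick(x):
    """Variance-based AIC picker: arrival = global minimum of the AIC curve.
    AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:]))"""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):   # leave room for both variance estimates
        v1, v2 = x[:k].var(), x[k:].var()
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: low-amplitude noise, then a higher-amplitude phase at sample 300
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0, 0.1, 300), rng.normal(0, 2.0, 200)])
pick = aic_pick(trace)
```

The multi-step idea in the abstract is then to run this picker on a coarse window, narrow the window around the preliminary pick, and run it again for a refined arrival time.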
NASA Astrophysics Data System (ADS)
Lestari, A. W.; Rustam, Z.
2017-07-01
In the last decade, breast cancer has become the focus of world attention, as this disease is one of the leading causes of death for women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify high-dimensional breast cancer data using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis, Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective is to obtain the best accuracy in classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection based on the Laplacian Score. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides - for the first time - the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme that has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States - Representing a Decade of Land Cover Change Information. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING. American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(0): 345-354, (2015).
Hybrid Radar Emitter Recognition Based on Rough k-Means Classifier and Relevance Vector Machine
Yang, Zhutian; Wu, Zhilu; Yin, Zhendong; Quan, Taifan; Sun, Hongjian
2013-01-01
Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this paper, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, namely the primary signal recognition and the advanced signal recognition. In the former step, a novel rough k-means classifier, which comprises three regions, i.e., certain area, rough area and uncertain area, is proposed to cluster the samples of radar emitter signals. In the latter step, the samples within the rough boundary are used to train the relevance vector machine (RVM). Then RVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and presents lower computational complexity than traditional approaches. PMID:23344380
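The three-region split (certain, rough, uncertain areas) can be illustrated with a simple distance-ratio rule on cluster assignments; both the ratio criterion and the thresholds below are assumptions for illustration, not the paper's exact rough k-means formulation:

```python
import numpy as np

def rough_regions(X, centroids, t_certain=2.0, t_rough=1.2):
    """Assign each sample to a region by the ratio of its distances to the
    two nearest centroids: large ratio = clearly one cluster (certain),
    moderate = near a boundary (rough), near 1 = ambiguous (uncertain)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d.sort(axis=1)
    ratio = d[:, 1] / d[:, 0]
    return np.where(ratio >= t_certain, 'certain',
           np.where(ratio >= t_rough, 'rough', 'uncertain'))

centroids = np.array([[0.0, 0.0], [10.0, 0.0]])
X = np.array([[1.0, 0.0],    # close to the first centroid
              [4.0, 0.0],    # between the two centroids
              [5.0, 0.0]])   # exactly equidistant
regions = rough_regions(X, centroids)
```

In the abstract's scheme, only the ambiguous samples would be passed on to the RVM for advanced recognition, which is what keeps the overall computational cost low.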
Using Bayesian neural networks to classify forest scenes
NASA Astrophysics Data System (ADS)
Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni
1998-10-01
We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects, and diverse lighting conditions under operation. This makes it difficult to determine which image features are optimal for the classification. A natural way to proceed is to extract many different types of potentially suitable features and to evaluate their usefulness in later processing stages. One approach to cope with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on model parameters, combines it with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) the evidence framework of MacKay, which uses a Gaussian approximation to the posterior weight distribution and maximizes the evidence with respect to the hyperparameters; and (2) a Markov Chain Monte Carlo (MCMC) method due to Neal, in which the posterior distribution of the network parameters is numerically integrated using MCMC sampling. As baseline classifiers for comparison we use (3) an MLP early-stop committee, (4) K-nearest-neighbor, and (5) Classification And Regression Tree.
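The "integrate over the posterior" step can be illustrated on a much smaller model than an MLP. The sketch below runs random-walk Metropolis over the weights of a Bayesian logistic regression (a stand-in for the network; the Gaussian prior, step size, and sample counts are all assumptions) and averages the sigmoid over posterior samples instead of using one point estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(w, X, y, alpha=1.0):
    # Gaussian prior with precision alpha + Bernoulli log-likelihood
    z = X @ w
    ll = np.sum(y * z - np.logaddexp(0, z))
    return ll - 0.5 * alpha * w @ w

def mcmc_predict(X, y, Xnew, n_samp=2000, step=0.1):
    """Random-walk Metropolis over the weights; predictions average the
    sigmoid over post-burn-in samples (posterior predictive mean)."""
    w = np.zeros(X.shape[1])
    lp = log_post(w, X, y)
    preds = np.zeros(len(Xnew))
    kept = 0
    for i in range(n_samp):
        w_prop = w + step * rng.standard_normal(w.shape)
        lp_prop = log_post(w_prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept
            w, lp = w_prop, lp_prop
        if i >= n_samp // 2:                      # discard burn-in
            preds += 1 / (1 + np.exp(-(Xnew @ w)))
            kept += 1
    return preds / kept
```

Averaging over the posterior, rather than maximizing, is what allows large models without fear of overfitting.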
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated by using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations.
A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
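The least-squares formulation above can be sketched on a toy two-material, triple-spectrum setup (all spectra and attenuation curves below are illustrative, not the paper's calibration): the polychromatic projection is modeled as the log of a spectrally weighted exponential of the material line integrals, and the line integrals are recovered by minimizing the projection mismatch.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy two-material, triple-energy setup (all numbers illustrative):
# mu holds energy-resolved attenuation curves, S the normalized spectra.
E = np.linspace(20, 120, 50)                               # keV grid
mu = np.array([0.02 * (80 / E), 0.05 * (60 / E) ** 3])     # water-like, iodine-like
S = np.array([np.exp(-0.5 * ((E - c) / 15) ** 2) for c in (50, 70, 90)])
S /= S.sum(axis=1, keepdims=True)

def forward(A):
    # polychromatic projection for line integrals A = (A_mat1, A_mat2)
    atten = np.exp(-(A[0] * mu[0] + A[1] * mu[1]))
    return -np.log(S @ atten)

A_true = np.array([3.0, 0.8])
p_meas = forward(A_true)

# minimize the least-squares mismatch between measured and modeled projections
sol = least_squares(lambda A: forward(A) - p_meas, x0=[1.0, 0.1])
```

With three spectra and two materials the system is overdetermined, which is the premise behind TECT's improved multicontrast separation.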
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goal is: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
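The best-performing model above, a decision stump, is simple enough to sketch completely: it exhaustively picks the single (feature, threshold) split with the fewest training errors, which for SNPs coded 0/1/2 means thresholding one genotype (a generic sketch, not the exact Weka implementation the study likely used):

```python
import numpy as np

def fit_stump(X, y):
    """Pick the single (feature, threshold, polarity) split that
    minimizes training errors for binary labels y in {0, 1}."""
    best = (None, None, None, np.inf)   # (feature, threshold, flipped, errors)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            for flip in (pred, 1 - pred):
                err = np.sum(flip != y)
                if err < best[3]:
                    best = (j, t, flip is not pred, err)
    return best[:3]

def predict_stump(stump, X):
    j, t, flipped = stump
    pred = (X[:, j] > t).astype(int)
    return 1 - pred if flipped else pred
```

The selected feature index plays the role of the informative SNP (rs878960 in the study).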
Domain Adaptation for Alzheimer’s Disease Diagnostics
Wachinger, Christian; Reuter, Martin
2016-01-01
With the increasing prevalence of Alzheimer’s disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer’s disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy. PMID:27262241
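The instance-weighting idea can be sketched as follows, under stated assumptions: source samples are weighted by a Gaussian kernel to their nearest target training sample (the kernel choice and `sigma` are illustrative, not the paper's exact weight estimator), and a single classifier is fit on the weighted union:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_domain_adapt(Xs, ys, Xt_train, yt_train, sigma=1.0):
    """Supervised domain adaptation by instance weighting (sketch):
    source samples closer to the target training data get larger
    sample weights; target training samples keep weight 1."""
    d2 = ((Xs[:, None, :] - Xt_train[None, :, :]) ** 2).sum(-1).min(1)
    w_src = np.exp(-d2 / (2 * sigma ** 2))
    X = np.vstack([Xs, Xt_train])
    y = np.concatenate([ys, yt_train])
    w = np.concatenate([w_src, np.ones(len(yt_train))])
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
```

This down-weights source samples far from the target distribution instead of naively taking the union of both training sets.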
Kocevar, Gabriel; Stamile, Claudio; Hannoun, Salem; Cotton, François; Vukusic, Sandra; Durand-Dubief, Françoise; Sappey-Marinier, Dominique
2016-01-01
Purpose: In this work, we introduce a method to classify Multiple Sclerosis (MS) patients into four clinical profiles using structural connectivity information. For the first time, we try to solve this question in a fully automated way using a computer-based method. The main goal is to show how the combination of graph-derived metrics with machine learning techniques constitutes a powerful tool for better characterization and classification of MS clinical profiles. Materials and Methods: Sixty-four MS patients [12 Clinically Isolated Syndrome (CIS), 24 Relapsing Remitting (RR), 24 Secondary Progressive (SP), and 17 Primary Progressive (PP)] along with 26 healthy controls (HC) underwent MR examination. T1 and diffusion tensor imaging (DTI) were used to obtain structural connectivity matrices for each subject. Global graph metrics, such as density and modularity, were estimated and compared between subject groups. These metrics were further used to classify patients using a tuned Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. Results: When comparing MS patients to HC subjects, greater assortativity, transitivity, and characteristic path length as well as lower global efficiency were found. Using all graph metrics, the best F-Measures (91.8, 91.8, 75.6, and 70.6%) were obtained for the binary (HC-CIS, CIS-RR, RR-PP) and multi-class (CIS-RR-SP) classification tasks, respectively. When using only one graph metric, the best F-Measures (83.6, 88.9, and 70.7%) were achieved for modularity on the binary classification tasks. Conclusion: Based on a simple DTI acquisition associated with structural brain connectivity analysis, this automatic method allowed an accurate classification of different MS patients' clinical profiles.
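Two of the global graph metrics named above can be computed directly from a connectivity matrix; a minimal sketch (binary shortest paths via Floyd-Warshall for brevity; the study's metrics were computed on weighted matrices with dedicated toolboxes):

```python
import numpy as np

def graph_features(A):
    """Density and global efficiency of an undirected connectivity
    matrix A (nonzero entries treated as edges)."""
    n = len(A)
    B = (A > 0).astype(int)
    density = B[np.triu_indices(n, 1)].mean()
    # Floyd-Warshall on hop counts
    D = np.where(B == 1, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    with np.errstate(divide='ignore'):
        inv = 1.0 / D
    np.fill_diagonal(inv, 0.0)
    efficiency = inv.sum() / (n * (n - 1))
    return np.array([density, efficiency])
```

The resulting per-subject feature vectors would then be fed to an RBF-kernel SVM (e.g. scikit-learn's `SVC(kernel='rbf')`) for the binary and multi-class tasks.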
The software application and classification algorithms for welds radiograms analysis
NASA Astrophysics Data System (ADS)
Sikora, R.; Chady, T.; Baniukiewicz, P.; Grzywacz, B.; Lopato, P.; Misztal, L.; Napierała, L.; Piekarczyk, B.; Pietrusewicz, T.; Psuj, G.
2013-01-01
The paper presents a software implementation of an Intelligent System for Radiogram Analysis (ISAR). The system is intended to support radiologists in weld quality inspection. The image processing part of the software, with a graphical user interface, and the weld classification part are described together with selected classification results. Classification was based on several algorithms: an artificial neural network, k-means clustering, a simplified k-means, and rough set theory.
Surface water classification and monitoring using polarimetric synthetic aperture radar
NASA Astrophysics Data System (ADS)
Irwin, Katherine Elizabeth
Surface water classification using synthetic aperture radar (SAR) is an established practice for monitoring flood hazards due to the high temporal and spatial resolution it provides. Surface water change is a dynamic process that varies both spatially and temporally, and can occur on various scales, resulting in significant impacts on affected areas. Small-scale flooding caused by beaver dam failure is an example of surface water change that can impact nearby infrastructure and ecosystems. Assessing these hazards is essential to transportation and infrastructure maintenance. With current satellite missions operating in multiple polarizations, spatio-temporal resolutions, and frequencies, a comprehensive comparison between SAR products for surface water monitoring is necessary. In this thesis, surface water extent models derived from high-resolution single-polarization TerraSAR-X (TSX) data, medium-resolution dual-polarization TSX data, and low-resolution quad-polarization RADARSAT-2 (RS-2) data are compared. There is a trade-off between acquiring SAR data with high spatial resolution and high information content. Multi-polarization data provides additional phase and intensity information, which makes it possible to better classify areas of flooded vegetation and wetlands. These locations are often where fluctuations in surface water occur and are essential for understanding dynamic underlying processes. However, multi-polarized data is often acquired at a low resolution, which cannot image these zones effectively. High spatial resolution, single-polarization TSX data provides the best model of open water. However, these single-polarization observations have limited information content and are affected by shadow and layover errors. This often hinders the classification of other land cover types. The dual-polarization TSX data allows for the classification of flooded vegetation, but classification is less accurate compared to the quad-polarization RS-2 data.
The RS-2 data allows for the discrimination of open water, marshes/fields and forested areas. However, the RS-2 data is less applicable to small scale surface water monitoring (e.g. beaver dam failure), due to its low spatial resolution. By understanding the strengths and weaknesses of available SAR technology, an appropriate product can be chosen for a specific target application involving surface water change. This research benefits the eventual development of a space-based monitoring strategy over longer periods.
Machine-learned Identification of RR Lyrae Stars from Sparse, Multi-band Data: The PS1 Sample
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Hernitschek, Nina; Mitrović, Sandra; Ivezić, Željko; Rix, Hans-Walter; Cohen, Judith G.; Bernard, Edouard J.; Grebel, Eva K.; Martin, Nicolas F.; Schlafly, Edward F.; Burgett, William S.; Draper, Peter W.; Flewelling, Heather; Kaiser, Nick; Kudritzki, Rolf P.; Magnier, Eugene A.; Metcalfe, Nigel; Tonry, John L.; Waters, Christopher
2017-05-01
RR Lyrae stars may be the best practical tracers of Galactic halo (sub-)structure and kinematics. The PanSTARRS1 (PS1) 3π survey offers multi-band, multi-epoch, precise photometry across much of the sky, but a robust identification of RR Lyrae stars in this data set poses a challenge, given PS1's sparse, asynchronous multi-band light curves (≲12 epochs in each of five bands, taken over a 4.5 year period). We present a novel template fitting technique that uses well-defined and physically motivated multi-band light curves of RR Lyrae stars, and demonstrate that we get accurate period estimates, precise to 2 s in >80% of cases. We augment these light-curve fits with other features from photometric time-series and provide them to progressively more detailed machine-learned classification models. From these models, we are able to select the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. The PS1 sample of ~45,000 RRab stars is pure (90%) and complete (80% at 80 kpc) at high galactic latitudes. It also provides distances that are precise to 3%, measured with newly derived period-luminosity relations for optical/near-infrared PS1 bands. With the addition of proper motions from Gaia and radial velocity measurements from multi-object spectroscopic surveys, we expect the PS1 sample of RR Lyrae stars to become the premier source for studying the structure, kinematics, and the gravitational potential of the Galactic halo. The techniques presented in this study should translate well to other sparse, multi-band data sets, such as those produced by the Dark Energy Survey and the upcoming Large Synoptic Survey Telescope Galactic plane sub-survey.
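The core of period recovery from sparse, asynchronous sampling is phase folding: at the true trial period, measurements line up into a coherent phased curve. A generic single-band phase-dispersion sketch (the paper's method fits physically motivated multi-band templates, which this does not attempt):

```python
import numpy as np

def phase_dispersion(t, m, period, nbins=8):
    """Within-bin variance of the folded light curve; small when the
    trial period phases the measurements into a coherent curve."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    var = 0.0
    for b in range(nbins):
        sel = m[bins == b]
        if len(sel) > 1:
            var += np.sum((sel - sel.mean()) ** 2)
    return var

def best_period(t, m, trial_periods):
    scores = [phase_dispersion(t, m, p) for p in trial_periods]
    return trial_periods[int(np.argmin(scores))]
```

A template fit replaces the binned means with a fixed light-curve shape, which is what makes ≲12 epochs per band workable.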
Jun, Min-Ho; Kim, Soochan; Ku, Boncho; Cho, JungHee; Kim, Kahye; Yoo, Ho-Ryong; Kim, Jaeuk U
2018-01-12
We investigated segmental phase angles (PAs) in the four limbs using a multi-frequency bioimpedance analysis (MF-BIA) technique for noninvasively diagnosing diabetes mellitus. We conducted a meal tolerance test (MTT) for 45 diabetic and 45 control subjects stratified by age, sex and body mass index (BMI). HbA1c and the waist-to-hip-circumference ratio (WHR) were measured before meal intake, and we measured the glucose levels and MF-BIA PAs 5 times for 2 hours after meal intake. We employed a t-test to examine the statistical significance and the area under the curve (AUC) of the receiver operating characteristics (ROC) to test the classification accuracy using segmental PAs at 5, 50, and 250 kHz. Segmental PAs were independent of the HbA1c or glucose levels, or their changes caused by the MTT. However, the segmental PAs were good indicators for noninvasively screening diabetes. In particular, leg PAs in females and arm PAs in males showed the best classification accuracy (AUC = 0.827 for males, AUC = 0.845 for females). Lastly, we introduced the PA at maximum reactance (PAmax), which is independent of measurement frequencies and can be obtained from any MF-BIA device using a Cole-Cole model, thus showing potential as a useful biomarker for diabetes.
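The two quantities at the heart of this study follow directly from their definitions: the phase angle is the arctangent of reactance over resistance, and PAmax evaluates it at the frequency where the measured reactance peaks (a sketch reconstructed from those definitions, not the authors' code):

```python
import numpy as np

def phase_angle(R, Xc):
    """Bioimpedance phase angle in degrees from resistance R and
    reactance Xc at a given frequency."""
    return np.degrees(np.arctan2(Xc, R))

def pa_max(R, Xc):
    """PA at maximum reactance: the phase angle evaluated at the
    frequency index where the reactance spectrum peaks, making the
    summary independent of the device's frequency grid."""
    i = int(np.argmax(Xc))
    return phase_angle(R[i], Xc[i])
```

Here `R` and `Xc` are arrays over the device's measurement frequencies, as produced by any MF-BIA sweep.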
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments, and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine, and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy has been markedly improved. PMID:27893761
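The feature-reduction step builds on locality preserving projections; the paper's SpLPP adds a sparsity penalty, but the underlying LPP it improves on can be sketched as a heat-kernel affinity over k-nearest neighbours followed by a generalized eigenproblem (parameters `k` and `t` are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """Plain locality preserving projections: build a heat-kernel
    affinity W over k-NN, then solve X^T L X a = lam X^T D X a and
    keep the directions with the smallest eigenvalues."""
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]       # exclude self
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)
    W = np.maximum(W, W.T)                         # symmetrize
    D = np.diag(W.sum(1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])    # regularize for stability
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]                  # projection directions
```

Projecting with `X @ lpp(X)` yields the low-dimensional features that the stationary-activity classifier would consume.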
76 FR 9541 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-18
....S. Census Bureau. Title: 2012 Economic Census General Classification Report. OMB Control Number... Business Register is that establishments are assigned an accurate economic classification, based on the North American Industry Classification System (NAICS). The primary purpose of the ``2012 Economic Census...
Soh, Harold; Demiris, Yiannis
2014-01-01
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers) and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.
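One of the baselines named above, sequential k-means, is itself a compact example of incremental learning: each arriving sample nudges its nearest centroid by a per-centroid learning rate 1/n, so clusters form without storing the stream (a generic sketch; initialization details are an assumption):

```python
import numpy as np

def sequential_kmeans(stream, k, rng=np.random.default_rng(0)):
    """Online k-means: centroids are initialized near the first sample
    and each subsequent sample moves its nearest centroid toward it
    with step 1/count, i.e. a running mean of assigned samples."""
    centroids, counts = None, None
    for x in stream:
        if centroids is None:
            centroids = np.tile(x, (k, 1)).astype(float)
            centroids += 0.01 * rng.standard_normal(centroids.shape)
            counts = np.zeros(k)
        i = np.argmin(((centroids - x) ** 2).sum(1))
        counts[i] += 1
        centroids[i] += (x - centroids[i]) / counts[i]
    return centroids
```

The OIESGP learners in the paper play the same incremental role but additionally output probability distributions over object classes.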
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of fixed series capacitors (FSC) as input to train the ANN. Such an approach has never been used in any earlier fault analysis algorithm in the last few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all 3 phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. Feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40 % FSC at the Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r(2)), and the consistency of SNP gametic phases was correlated using the signed square root of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R(2). Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined. 
High concordance rate (0.968-0.995) and allelic R(2) (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
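Both headline statistics of this study follow from short definitions: LD is the squared correlation of allele counts at two loci, and the concordance rate is the fraction of imputed genotypes matching the masked truth (a sketch of the definitions, not FImpute or Beagle internals):

```python
import numpy as np

def ld_r2(a, b):
    """Squared correlation r^2 of allele counts (0/1/2) at two loci,
    the LD statistic compared across the 50K and 777K panels."""
    r = np.corrcoef(a, b)[0, 1]
    return r ** 2

def concordance_rate(imputed, true):
    """Proportion of imputed genotypes that match the masked truth."""
    return float(np.mean(imputed == true))
```

The signed square root of `ld_r2` is what the study correlates across breeds to judge whether SNP gametic phase is consistent enough for a multi-breed reference population.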
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.
Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, x) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional NNs, autoassociative morphological memories (AMMs) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns has been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation.
We show that detected endmembers can be exploited by AMM based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and simulated camera optical distortions. In particular, we examine two critical cases: (1) classification of multiple closely spaced signatures that are difficult to separate using distance measures, and (2) classification of materials in simulated hyperspectral images of spaceborne satellites. In each case, test data are derived from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to classical NN based techniques.
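The perfect-recall property claimed for AMMs can be demonstrated with the standard min-plus memory: storage takes a lattice minimum of pattern difference matrices, and recall is a max-plus matrix-vector product (a textbook AMM sketch, not the dendritic variant developed in the paper):

```python
import numpy as np

def amm_store(patterns):
    """Min-plus autoassociative morphological memory:
    W[i, j] = min over stored patterns x of (x_i - x_j)."""
    X = np.asarray(patterns, float)          # shape (n_patterns, dim)
    diff = X[:, :, None] - X[:, None, :]     # per-pattern outer difference
    return diff.min(axis=0)

def amm_recall(W, x):
    """Max-plus recall: y_i = max_j (W[i, j] + x_j). Stored noiseless
    real-valued patterns are recalled exactly."""
    return (W + np.asarray(x, float)[None, :]).max(axis=1)
```

The asymmetry the abstract mentions shows up here: this W is robust to erosive noise but not dilative noise (the dual max-memory has the opposite behaviour).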
A narrow-band k-distribution model with single mixture gas assumption for radiative flows
NASA Astrophysics Data System (ADS)
Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon
2018-06-01
In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.
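The key trick behind any k-distribution model can be shown in a few lines: within a narrow band, the raggedly varying absorption spectrum is reordered into a smooth cumulative distribution g(k), which is cheap to integrate (a generic sketch of the reordering step, not the paper's mixture-parameter fitting):

```python
import numpy as np

def k_distribution(kappa, weights=None):
    """Reorder spectral absorption coefficients within a narrow band
    into the monotone k(g) curve: sort kappa and accumulate the
    (uniform by default) spectral weights into g in (0, 1]."""
    kappa = np.asarray(kappa, float)
    if weights is None:
        weights = np.full(kappa.shape, 1.0 / kappa.size)
    order = np.argsort(kappa)
    k_sorted = kappa[order]
    g = np.cumsum(weights[order])
    return k_sorted, g
```

Band-averaged transmissivity then becomes a low-order quadrature over g instead of a line-by-line sum, which is the source of the "much improved numerical efficiency" noted above.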
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of multi-source image resolution in the visible, multispectral, and hyperspectral domains, high-resolution remote sensing imagery has been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction, and automatic recognition are hotspots and difficulties in the research of modern information technology. This paper presents an object-oriented remote sensing image scene classification method. The method consists of the generation of typical vehicle object classes, nonparametric density estimation theory, mean shift segmentation theory, a multi-scale corner detection algorithm, and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system was designed and implemented to meet these requirements.
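The mean shift segmentation component rests on a simple mode-seeking iteration: every point repeatedly moves to the mean of its neighbours within a bandwidth, converging to density modes that double as segment labels (a flat-kernel sketch; the bandwidth is illustrative):

```python
import numpy as np

def mean_shift_modes(X, bandwidth=1.0, n_iter=50):
    """Flat-kernel mean shift: each point is iteratively replaced by
    the mean of the original samples within `bandwidth` of its current
    position, so points drift to local density modes."""
    pts = X.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(pts):
            nb = X[np.linalg.norm(X - p, axis=1) < bandwidth]
            pts[i] = nb.mean(axis=0)
    return pts
```

In image segmentation the "points" are pixels in a joint spatial-range feature space, and pixels sharing a mode form one segment.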
Selih, Vid S; Sala, Martin; Drgan, Viktor
2014-06-15
Inductively coupled plasma mass spectrometry and optical emission spectrometry were used to determine the multi-element composition of 272 bottled Slovenian wines. To achieve geographical classification of the wines by their elemental composition, principal component analysis (PCA) and counter-propagation artificial neural networks (CPANN) were used. Of the 49 elements measured, 19 were used to build the final classification models. CPANN was used for the final predictions because of its superior results. The best model gave 82% correct predictions for an external set of white wine samples. Taking into account the small size of the Slovenian wine-growing regions, we consider the classification results very good. For the red wines, which mostly came from one region, even sub-region classification was possible with great precision. From the level maps of the CPANN model, some of the most important elements for classification were identified. Copyright © 2013 Elsevier Ltd. All rights reserved.
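As a rough illustration of the dimensionality-reduction step described above, a PCA projection of an element-concentration matrix can be sketched in a few lines of numpy; the data here are random stand-ins, not the wine measurements, and the CPANN stage is not reproduced:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the leading principal components (via SVD)."""
    Xc = X - X.mean(axis=0)          # centre each element's concentrations
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # sample scores in PC space

# toy stand-in for element concentrations (rows: wines, cols: elements)
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
scores = pca_scores(X)
print(scores.shape)  # (8, 2)
```

In a study like the one above, such scores would then be inspected for regional grouping before a supervised model is fitted.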
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion, and further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially the temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
NASA Astrophysics Data System (ADS)
Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.
2015-02-01
Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg2, by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0
NASA Astrophysics Data System (ADS)
Xiao Yong, Zhao; Xin, Ji Yong; Shuang Ying, Zuo
2018-03-01
In order to classify the surrounding rock of tunnels effectively, a multi-factor surrounding rock classification method based on ground penetrating radar (GPR) and probability theory is proposed. Geological radar was used to identify the geology of the surrounding rock ahead of the tunnel face and to evaluate the rock quality at the face. Based on prior survey data, the rock uniaxial compressive strength, integrity index, fissure development, and groundwater condition were selected as classification factors. Probability theory combines these factors into a multi-factor classification method, and the surrounding rock is assigned to the class with the greatest probability. Applying this method to the surrounding rock of the Ma'anshan tunnel yielded rock types essentially the same as the actual surrounding rock, showing that this is a simple, efficient, and practical rock classification method that can be used in tunnel construction.
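The probabilistic combination step might look like the following sketch, assuming each factor yields a probability distribution over rock classes and that the factors are combined as if independent; all numbers are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical per-factor class probabilities for rock classes I..V
# (rows: the four factors; cols: the five classes) - illustrative only
factor_probs = np.array([
    [0.10, 0.20, 0.50, 0.15, 0.05],  # uniaxial compressive strength
    [0.05, 0.25, 0.45, 0.20, 0.05],  # integrity index
    [0.10, 0.30, 0.40, 0.15, 0.05],  # fissure development
    [0.20, 0.30, 0.30, 0.15, 0.05],  # groundwater condition
])

# Naive combination: multiply factor probabilities, renormalize,
# then assign the class with the greatest combined probability.
joint = factor_probs.prod(axis=0)
joint /= joint.sum()
predicted_class = int(np.argmax(joint)) + 1  # classes numbered from I (= 1)
print(predicted_class)
```

The "greatest probability" rule in the abstract corresponds to the final argmax here.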
Tensor-driven extraction of developmental features from varying paediatric EEG datasets.
Kinney-Lang, Eli; Spyrou, Loukianos; Ebied, Ahmed; Chin, Richard Fm; Escudero, Javier
2018-05-21
Constant changes in developing children's brains can pose a challenge for EEG-dependent technologies. Advancing signal processing methods to identify developmental differences in paediatric populations could help improve the function and usability of such technologies. Taking advantage of the multi-dimensional structure of EEG data through tensor analysis may offer a framework for extracting relevant developmental features of paediatric datasets. A proof of concept is demonstrated through identifying latent developmental features in resting-state EEG. Approach. Three paediatric datasets (n = 50, 17, 44) were analyzed using a two-step constrained parallel factor (PARAFAC) tensor decomposition. Subject age was used as a proxy measure of development. Classification used support vector machines (SVM) to test whether the PARAFAC-identified features could predict subject age. The results were cross-validated within each dataset. Classification analysis was complemented by visualization of the high-dimensional feature structures using t-distributed stochastic neighbour embedding (t-SNE) maps. Main results. Development-related features were successfully identified for the developmental conditions of each dataset. SVM classification showed the identified features could predict subject age at a level significantly above chance for both healthy and impaired populations. t-SNE maps revealed that suitable tensor factorization was key to extracting the developmental features. Significance. The described methods are a promising tool for identifying latent developmental features occurring throughout childhood EEG. © 2018 IOP Publishing Ltd.
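The core PARAFAC step can be sketched with a plain alternating-least-squares implementation in numpy; this is an unconstrained textbook version, not the paper's two-step constrained decomposition, and the toy tensor stands in for real EEG data (e.g. channels x frequencies x subjects):

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao product."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order convention)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def parafac_als(X, rank, n_iter=200, seed=0):
    """Rank-`rank` CP/PARAFAC decomposition of a 3-way tensor by ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(s, rank)) for s in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# toy exactly-rank-2 tensor; ALS should recover it almost perfectly
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 2)) for s in (6, 5, 4))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac_als(X, rank=2, n_iter=500)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

The recovered factor matrices play the role of the "developmental features" that are subsequently fed to a classifier.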
Borchani, Hanen; Bielza, Concha; Martínez-Martín, Pablo; Larrañaga, Pedro
2012-12-01
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used, and their classification has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) combined with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features through the combination of EMAP and SAE, which are then fed to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston data" and the multispectral "Washington DC data" show that this scheme achieves better feature learning performance than primitive features, traditional classifiers, and an ordinary autoencoder, and has strong potential to achieve higher classification accuracy in a short running time.
NASA Astrophysics Data System (ADS)
Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta
2010-02-01
In this paper we propose a system for the classification of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module, and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. A radial basis function network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions in place of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and consistently with RBF transfer functions. With the changed weight-update mechanism and the feature-based preprocessing module, the proposed system achieves good recognition performance.
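The two transfer functions being compared can be written out directly; `radbas` here is the standard Gaussian radial basis transfer function, which is an assumption about the exact form the authors used:

```python
import numpy as np

def logsig(n):
    """Logarithmic sigmoid transfer function: monotone, global response."""
    return 1.0 / (1.0 + np.exp(-n))

def radbas(n):
    """Gaussian radial basis transfer function: peaked at n = 0, local response."""
    return np.exp(-n ** 2)
```

The local response of `radbas` lets hidden units act as detectors for clusters of character features, whereas `logsig` responds monotonically over its whole input range; that locality is one plausible reason for the RBF network's advantage reported above.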
Homogeneity study of fixed-point continuous marine environmental and meteorological data: a review
NASA Astrophysics Data System (ADS)
Yang, Jinkun; Yang, Yang; Miao, Qingsheng; Dong, Mingmei; Wan, Fangfang
2018-02-01
The principle of inhomogeneity and the classification of homogeneity test methods are briefly described, and several common homogeneity test methods and their relative merits are discussed in detail. Applications of the different homogeneity methods to ground meteorological data and marine environmental data are then reviewed, along with the current status and progress of the field. At present, homogeneity research on radiosonde and ground meteorological data is mature both domestically and abroad, and research and application to marine environmental data deserve equal attention. Carrying out a variety of test and correction methods, combined with the use of a multi-method test system, will make the results more reasonable and scientific, and can provide accurate first-hand information for coastal climate change research.
A drone detection with aircraft classification based on a camera array
NASA Astrophysics Data System (ADS)
Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong
2018-03-01
In recent years, the rapid rise in popularity of drones has brought a range of security issues to sensitive areas such as airports and military sites. Realizing fine-grained classification, with fast and accurate detection of different models of drone, is an important way to address these problems. The main challenges of fine-grained classification are that: (1) there are many types of drones, and the models are complex and diverse; (2) recognition must be fast and accurate, yet existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately recognize fine-grained drone models from the HD camera imagery.
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established on a rigorous proof of universal learning for the reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications, covering binary classification, multi-class problems, and regression, show that RKELM performs at a competitive level of generalization performance relative to SVM/LS-SVM at only a fraction of the computational effort. Copyright © 2015 Elsevier Ltd. All rights reserved.
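A minimal numpy sketch of the RKELM idea as described in the abstract (randomly chosen support samples, closed-form output weights); the ridge regularization term and the RBF kernel width are illustrative choices, not the paper's settings:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkelm_train(X, T, n_support=20, reg=1e-3, seed=0):
    """Randomly pick support (mapping) samples, solve output weights in closed form."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_support, replace=False)
    S = X[idx]            # support vectors chosen at random, not iteratively
    K = rbf_kernel(X, S)  # N x n_support kernel feature mapping
    beta = np.linalg.solve(K.T @ K + reg * np.eye(n_support), K.T @ T)
    return S, beta

def rkelm_predict(Xnew, S, beta):
    return rbf_kernel(Xnew, S) @ beta

# toy two-class problem with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
T = np.array([[1, 0]] * 40 + [[0, 1]] * 40)
S, beta = rkelm_train(X, T)
acc = (rkelm_predict(X, S, beta).argmax(1) == T.argmax(1)).mean()
```

The single linear solve replaces the iterative support-vector search of SVM, which is where the claimed training-cost savings come from.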
A multi-channel clogging-resistant lab-on-a-chip cell counter and analyzer
NASA Astrophysics Data System (ADS)
Dai, Jie; Chiu, Yu-Jui; Lian, Ian; Wu, Tsung-Feng; Yang, Kecheng; Lo, Yu-Hwa
2016-02-01
Early signs of diseases can be revealed from cell detection in biofluids, such as detection of white blood cells (WBCs) in the peritoneal fluid for peritonitis. A lab-on-a-chip microfluidic device offers an attractive platform for such applications because of its small size, low cost, and ease of use provided the device can meet the performance requirements which many existing LoC devices fail to satisfy. We report an integrated microfluidic device capable of accurately counting low concentration of white blood cells in peritoneal fluid at 150 μl min-1 to offer an accurate (<3% error) and fast (~10 min/run) WBC count. Utilizing the self-regulating hydrodynamic properties and a unique architecture in the design, the device can achieve higher flow rate (500-1000 μl min-1), continuous running for over 5 h without clogging, as well as excellent signal quality for unambiguous WBC count and WBC classification for certain diseases. These properties make the device a promising candidate for point-of-care applications.
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave, and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, SVM with the kernel function recommended by our method achieved the highest classification performance.
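The recommendation step can be caricatured with a multi-label k-nearest-neighbour lookup over a toy meta-knowledge base; the meta-features and kernel labels below are invented for illustration and do not reflect the paper's 132-data-set study:

```python
import numpy as np

# Hypothetical meta-knowledge base: per data set, a vector of data
# characteristics and the set of kernels found applicable for it.
meta_features = np.array([
    [100,  5,  0.20],   # (#samples, #features, class imbalance) - illustrative
    [5000, 20, 0.50],
    [120,  4,  0.25],
    [4800, 18, 0.45],
])
applicable = [{'rbf', 'poly'}, {'linear'}, {'rbf'}, {'linear', 'rbf'}]

def recommend_kernels(x, k=2):
    """Multi-label kNN recommendation: union of the kernel labels of the
    k nearest data sets in (z-scored) meta-feature space."""
    mu, sd = meta_features.mean(0), meta_features.std(0)
    d = np.linalg.norm((meta_features - x) / sd, axis=1)
    nearest = np.argsort(d)[:k]
    return set().union(*(applicable[i] for i in nearest))

rec = recommend_kernels(np.array([110, 5, 0.22]))
print(rec)
```

A small data set similar to the two small entries in the base is recommended their kernels rather than the single globally best one, which is the multi-label point of the method.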
Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell
2016-01-01
We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface defect inspection method and to automate a robust image region-of-interest (ROI) delineation strategy on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system extends to surface quality inspection of strip, billet, slab, and similar products. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through an RFC region-competition classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for continuous-casting slab surface defect inspection, allowing automatic AI algorithms and powerful ROI delineation strategies to be applied in the MV inspection field.
Chiarelli, Antonio Maria; Croce, Pierpaolo; Merla, Arcangelo; Zappasodi, Filippo
2018-06-01
Brain-computer interface (BCI) refers to procedures that link the central nervous system to a device. BCI was historically performed using electroencephalography (EEG). In recent years, encouraging results were obtained by combining EEG with other neuroimaging technologies, such as functional near infrared spectroscopy (fNIRS). A crucial step of BCI is brain state classification from recorded signal features. Deep artificial neural networks (DNNs) recently reached unprecedented complex classification outcomes. These performances were achieved through increased computational power, efficient learning algorithms, valuable activation functions, and restricted or back-fed neuron connections. Expecting improved overall BCI performance, we investigated the capabilities of combining EEG and fNIRS recordings with state-of-the-art deep learning procedures. We performed a guided left and right hand motor imagery task on 15 subjects with a fixed classification response time of 1 s and an overall experiment length of 10 min. Left versus right classification accuracy of a DNN in the multi-modal recording modality was estimated and compared to standalone EEG and fNIRS and other classifiers. At the group level we obtained a significant increase in performance when considering multi-modal recordings and the DNN classifier, with a synergistic effect. BCI performance can be significantly improved by employing multi-modal recordings that provide electrical and hemodynamic brain activity information, in combination with advanced non-linear deep learning classification procedures.
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) …estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant …the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
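A simple inverse-distance weighting, which is one of many possible weight choices and not the report's risk-based estimator, can be sketched as:

```python
import numpy as np

def weighted_knn_proba(X, y, query, k=3, eps=1e-9):
    """Class-probability estimate from the inverse-distance-weighted
    k nearest neighbours of `query`."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbours
    w = 1.0 / (d[idx] + eps)         # closer neighbours get larger weight
    classes = np.unique(y)
    p = np.array([w[y[idx] == c].sum() for c in classes])
    return classes, p / p.sum()      # normalized class probabilities

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
classes, p = weighted_knn_proba(X, y, np.array([0.05, 0.0]))
```

The report's point is that such weights can instead be chosen to minimize expected risk rather than fixed a priori as here.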
The LSST Data Mining Research Agenda
NASA Astrophysics Data System (ADS)
Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.
2008-12-01
We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
ASERA: A spectrum eye recognition assistant for quasar spectra
NASA Astrophysics Data System (ADS)
Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng
2013-11-01
Spectral type recognition is an important and fundamental step of large sky survey projects in data reduction for further scientific research, such as parameter measurement and statistical work. Manually inspecting the low-quality spectra produced by massive spectroscopic surveys, where the automatic pipeline may not provide confident classifications, turns out to be a huge job. In order to improve the efficiency and effectiveness of spectral classification, we developed a semi-automated toolkit named ASERA, A Spectrum Eye Recognition Assistant. The main purpose of ASERA is to help the user with quasar spectral recognition and redshift measurement. It can also be used to recognize various types of spectra of stars, galaxies, and AGNs (Active Galactic Nuclei). It is an interactive piece of software allowing the user to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. It is an efficient and user-friendly toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The toolkit is available in two modes: a Java standalone application and a Java applet. ASERA offers functions such as wavelength and flux scale setting, zoom in and out, redshift estimation, and spectral line identification, which help the user improve spectral classification accuracy, especially for low-quality spectra, and reduce the labor of visual inspection. The function and performance of this tool are demonstrated through the recognition of several quasar spectra and a late-type stellar spectrum from the LAMOST pilot survey. Its future expansion capabilities are also discussed.
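The redshift-measurement step rests on a one-line relation: once a known rest-frame line is identified in the observed spectrum, z follows from the wavelength ratio. A minimal sketch (Mg II is used purely as an example line):

```python
# Redshift from a matched spectral line: z = lambda_obs / lambda_rest - 1.
def redshift(lambda_obs, lambda_rest):
    return lambda_obs / lambda_rest - 1.0

# e.g. the Mg II feature (rest wavelength ~2798 Angstrom) observed at
# twice its rest wavelength implies z = 1
z = redshift(5596.0, 2798.0)
print(round(z, 3))  # 1.0
```

Tools like the one above let the user shift a template interactively, which amounts to scanning this ratio until the template lines overlay the observed ones.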
Yoon, Jong H.; Tamir, Diana; Minzenberg, Michael J.; Ragland, J. Daniel; Ursu, Stefan; Carter, Cameron S.
2009-01-01
Background Multivariate pattern analysis is an alternative method of analyzing fMRI data that is capable of decoding distributed neural representations. We applied this method to test the hypothesis of impaired distributed representations in schizophrenia, and compared its results with traditional GLM-based univariate analysis. Methods 19 schizophrenia patients and 15 control subjects viewed two runs of stimuli: exemplars of faces, scenes, objects, and scrambled images. To verify engagement with the stimuli, subjects completed a 1-back matching task. A multi-voxel pattern classifier was trained to identify category-specific activity patterns on one run of fMRI data; classification testing was conducted on the remaining run. Correlation of voxel-wise activity across runs evaluated variance over time in activity patterns. Results Patients performed the task less accurately. This group difference was reflected in the pattern analysis results, with diminished classification accuracy in patients compared to controls (59% and 72%, respectively). In contrast, there was no group difference in GLM-based univariate measures. In both groups, classification accuracy was significantly correlated with behavioral measures, and both groups showed a highly significant correlation between inter-run correlations and classification accuracy. Conclusions Distributed representations of visual objects are impaired in schizophrenia. This impairment is correlated with diminished task performance, suggesting that decreased integrity of cortical activity patterns is reflected in impaired behavior. Comparisons with univariate results suggest greater sensitivity of pattern analysis in detecting group differences in neural activity and a reduced likelihood of non-specific factors driving these results. PMID:18822407
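The train-on-one-run, test-on-the-other design can be sketched with a nearest-centroid correlation classifier on synthetic patterns; this is a deliberate simplification, not the study's actual multi-voxel classifier:

```python
import numpy as np

def train_centroids(patterns_run1, labels):
    """Mean multi-voxel pattern per stimulus category (training run)."""
    cats = np.unique(labels)
    return cats, np.array([patterns_run1[labels == c].mean(0) for c in cats])

def classify(patterns_run2, cats, centroids):
    """Assign each held-out pattern to the most correlated category centroid."""
    preds = []
    for pat in patterns_run2:
        r = [np.corrcoef(pat, c)[0, 1] for c in centroids]
        preds.append(cats[int(np.argmax(r))])
    return np.array(preds)

# synthetic data: 2 categories, 50 "voxels", 10 trials per category per run
rng = np.random.default_rng(2)
sig = rng.normal(size=(2, 50))                    # category-specific patterns
labels = np.repeat([0, 1], 10)
run1 = sig[labels] + 0.3 * rng.normal(size=(20, 50))
run2 = sig[labels] + 0.3 * rng.normal(size=(20, 50))
cats, cents = train_centroids(run1, labels)
acc = (classify(run2, cats, cents) == labels).mean()
```

Lower pattern stability across runs (the inter-run correlation the study measured) would directly erode this cross-run accuracy, mirroring the reported patient deficit.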
Genome-Wide Comparative Gene Family Classification
Frech, Christian; Chen, Nansheng
2010-01-01
Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
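The Markov clustering idea behind TRIBE-MCL alternates expansion and inflation on a column-stochastic similarity matrix; a minimal sketch with toy similarities, without the parameter tuning whose sensitivity the paper discusses:

```python
import numpy as np

def mcl(A, inflation=2.0, n_iter=50):
    """Minimal Markov clustering: alternate expansion (random-walk squaring)
    and inflation (elementwise power that sharpens strong similarities),
    then read clusters off the resulting attractors."""
    M = A / A.sum(axis=0, keepdims=True)   # column-stochastic similarity
    for _ in range(n_iter):
        M = M @ M                          # expansion
        M = M ** inflation                 # inflation (this is the parameter
        M = M / M.sum(axis=0, keepdims=True)  # the paper's tuning targets)
    clusters = {}
    for j in range(M.shape[1]):
        attractor = int(np.argmax(M[:, j]))  # node j flows to this attractor
        clusters.setdefault(attractor, set()).add(j)
    return list(clusters.values())

# toy gene-similarity matrix: two families (genes 0-2 and 3-5),
# strong within-family similarity, weak across
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
clusters = mcl(A)
```

In the comparative strategy above, the inflation parameter would be chosen so that curated reference families are reproduced before the same setting is applied to a sister genome.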
Turbulence effects on volatilization rates of liquids and solutes
Lee, J.-F.; Chao, H.-P.; Chiou, C.T.; Manes, M.
2004-01-01
Volatilization rates of neat liquids (benzene, toluene, fluorobenzene, bromobenzene, ethylbenzene, m-xylene, o-xylene, o-dichlorobenzene, and 1-methylnaphthalene) and of solutes (phenol, m-cresol, benzene, toluene, ethylbenzene, o-xylene, and ethylene dibromide) from dilute water solutions have been measured in the laboratory over a wide range of air speeds and water-stirring rates. The overall transfer coefficients (KL) for individual solutes are independent of whether they are in single- or multi-solute solutions. The gas-film transfer coefficients (kG) for solutes in the two-film model, which have hitherto been estimated by extrapolation from reference coefficients, can now be determined directly from the volatilization rates of neat liquids through a new algorithm. The associated liquid-film transfer coefficients (kL) can then be obtained from measured KL and kG values and solute Henry's law constants (H). This approach provides a novel means for checking the precision of any kL and kG estimation methods for ultimate prediction of KL. The improved kG estimation enables accurate KL predictions for low-volatility (i.e., low-H) solutes, where KL and kGH are essentially equal. In addition, the prediction of KL values for high-volatility (i.e., high-H) solutes, where KL ≈ kL, is also improved by using appropriate reference kL values.
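The two limiting regimes described in this abstract follow from the standard two-film series-resistance relation, 1/KL = 1/kL + 1/(kG·H). A short numerical sketch (the coefficient values below are hypothetical, chosen only to illustrate the two limits, not measured data from the study):

```python
# Two-film model: the overall liquid-phase transfer coefficient K_L combines
# the liquid-film (k_L) and gas-film (k_G) resistances in series, with the
# gas-side resistance scaled by the dimensionless Henry's law constant H.
def overall_KL(kL, kG, H):
    """1/K_L = 1/k_L + 1/(k_G * H); all coefficients in consistent units."""
    return 1.0 / (1.0 / kL + 1.0 / (kG * H))

# Hypothetical illustrative values (cm/s):
kL, kG = 2.0e-3, 1.0e-2
high_H = overall_KL(kL, kG, H=5.0)    # high volatility: K_L approaches k_L
low_H = overall_KL(kL, kG, H=1e-3)    # low volatility:  K_L approaches k_G * H
```

The two outputs show why kG accuracy matters most for low-H solutes: there the gas-film term dominates the total resistance.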
Yang, Xiaoyan; Chen, Longgao; Li, Yingkui; Xi, Wenjia; Chen, Longqian
2015-07-01
Land use/land cover (LULC) inventory provides an important dataset in regional planning and environmental assessment. To efficiently obtain the LULC inventory, we compared the LULC classifications based on single satellite imagery with a rule-based classification based on multi-seasonal imagery in Lianyungang City, a coastal city in China, using CBERS-02 (the 2nd China-Brazil Environmental Resource Satellites) images. The overall accuracies of the classification based on single imagery are 78.9, 82.8, and 82.0% in winter, early summer, and autumn, respectively. The rule-based classification improves the accuracy to 87.9% (kappa 0.85), suggesting that combining multi-seasonal images can considerably improve the classification accuracy over any single image-based classification. This method could also be used to analyze seasonal changes of LULC types, especially for those associated with tidal changes in coastal areas. The distribution and inventory of LULC types with an overall accuracy of 87.9% and a spatial resolution of 19.5 m can assist regional planning and environmental assessment efficiently in Lianyungang City. This rule-based classification provides a guidance to improve accuracy for coastal areas with distinct LULC temporal spectral features.
Slabbinck, Bram; Waegeman, Willem; Dawyndt, Peter; De Vos, Paul; De Baets, Bernard
2010-01-30
Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data. Herein, 16S rRNA gene sequence data is used for phylogenetic tree inference and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails.
Secondly, the hierarchical classification structure makes it easy to evaluate and visualize the resolution of FAME data for the discrimination of bacterial species. In summary, phylogenetic learning allows us to situate and evaluate FAME-based bacterial species classification in a more informative context.
2010-01-01
Background Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. Results In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data. Herein, 16S rRNA gene sequence data is used for phylogenetic tree inference and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. Conclusions FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails.
Secondly, the hierarchical classification structure makes it easy to evaluate and visualize the resolution of FAME data for the discrimination of bacterial species. In summary, phylogenetic learning allows us to situate and evaluate FAME-based bacterial species classification in a more informative context. PMID:20113515
Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z.; Chen, Ronald C.
2016-01-01
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a nonlocal external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531
Wetland Assessment Using Unmanned Aerial Vehicle (uav) Photogrammetry
NASA Astrophysics Data System (ADS)
Boon, M. A.; Greenfield, R.; Tesfamichael, S.
2016-06-01
The use of Unmanned Aerial Vehicle (UAV) photogrammetry is a valuable tool to enhance our understanding of wetlands. Accurate planning derived from this technological advancement allows for more effective management and conservation of wetland areas. This paper presents results of a study that aimed at investigating the use of UAV photogrammetry as a tool to enhance the assessment of wetland ecosystems. The UAV images were collected during a single flight within 2½ hours over a 100 ha area at the Kameelzynkraal farm, Gauteng Province, South Africa. An AKS Y-6 MKII multi-rotor UAV and a digital camera on a motion compensated gimbal mount were utilised for the survey. Twenty ground control points (GCPs) were surveyed using a Trimble GPS to achieve geometrical precision and georeferencing accuracy. Structure-from-Motion (SfM) computer vision techniques were used to derive ultra-high resolution point clouds, orthophotos and 3D models from the multi-view photos. The geometric accuracy of the data, based on the 20 GCPs, was 0.018 m for the overall root mean squared error (RMSE), 0.0025 m for the vertical RMSE, and 0.18 pixel for the overall root mean square reprojection error. The UAV products were then edited and subsequently analysed, interpreted and key attributes extracted using a selection of tools/software applications to enhance the wetland assessment. The results exceeded our expectations and provided a valuable and accurate enhancement to the wetland delineation, classification and health assessment, which even with detailed field studies would have been difficult to achieve.
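The RMSE figures reported for the GCP check are a straightforward statistic of the survey residuals; a minimal sketch (the residual values below are hypothetical, not the study's data):

```python
import math

def rmse(residuals):
    """Root mean squared error of check-point residuals (same units as input)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical vertical residuals (metres) at a handful of surveyed GCPs:
dz = [0.002, -0.003, 0.001, -0.002, 0.004]
vertical_rmse = rmse(dz)
```

The same function applied to horizontal offsets and to reprojection errors (in pixels) yields the other two accuracy figures quoted in the abstract.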
Evaluation of topographical and seasonal feature using GPM IMERG and TRMM 3B42 over Far-East Asia
NASA Astrophysics Data System (ADS)
Kim, Kiyoung; Park, Jongmin; Baik, Jongjin; Choi, Minha
2017-05-01
The acquisition of accurate precipitation data is essential for analyzing various hydrological phenomena and climate change. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing global precipitation characteristics. The main objective in this study is to assess precipitation products from GPM, especially the Integrated Multi-satellitE Retrievals (GPM-3IMERGHH) and the Tropical Rainfall Measurement Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), using gauge-based precipitation data from Far-East Asia during the pre-monsoon and monsoon seasons. Evaluation was performed by focusing on three different factors: geographical aspects, seasonal factors, and spatial distributions. In both mountainous and coastal regions, the GPM-3IMERGHH product showed better performance than the TRMM 3B42 V7, although both rainfall products showed uncertainties caused by orographic convection and the land-ocean classification algorithm. GPM-3IMERGHH performed about 8% better than TRMM 3B42 V7 during the pre-monsoon and monsoon seasons due to its improved onboard sensors and its strengthened ability to capture convective rainfall, respectively. In depicting the spatial distribution of precipitation, GPM-3IMERGHH was more accurate than TRMM 3B42 V7 because of its enhanced spatial and temporal resolutions of 10 km and 30 min, respectively. Based on these results, GPM-3IMERGHH would be helpful not only for understanding the characteristics of precipitation with high spatial and temporal resolution, but also for estimating near-real-time runoff patterns.
Statistical Inference on Optimal Points to Evaluate Multi-State Classification Systems
2014-09-18
A multi-state classification system partitions an event set, E = (ε1, ε2, ..., εk), into k distinct elements of a label set, L = (l1, l2, ..., lk); these partitions may be referred to as classes. Each element is described by a set of features, F = (f1, f2, ..., fm), which are then used to assign the different elements from E to the respective labels in L.
NASA Technical Reports Server (NTRS)
van den Bergh, Jarrett; Schutz, Joey; Li, Alan; Chirayath, Ved
2017-01-01
NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows for users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale Neural Networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications and contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and generate additional labels crucial for identifying decision boundary locations in coral reef assessment.
NASA Astrophysics Data System (ADS)
van den Bergh, J.; Schutz, J.; Chirayath, V.; Li, A.
2017-12-01
NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows for users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale Neural Networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications and contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and generate additional labels crucial for identifying decision boundary locations in coral reef assessment.
J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration
2017-10-01
We present a morphological classification of J-PLUS EDR sources into compact (i.e. stars) and extended (i.e. galaxies). The classification is based on the Bayesian modelling of the concentration distribution, including observational errors and magnitude + sky position priors. We provide the star/galaxy probability of each source computed from the gri images. The comparison with the SDSS number counts supports our classification up to r ≈ 21. The 31.7 deg² analysed comprise 150k stars and 101k galaxies.
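The Bayesian modelling described here reduces, in its simplest form, to a two-class posterior from concentration likelihoods and a prior. A toy sketch with Gaussian likelihoods (the population means, widths, and prior below are hypothetical, not the J-PLUS fit):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def star_probability(concentration, prior_star):
    """Posterior P(star | concentration) via Bayes' rule.

    The class parameters below are hypothetical: compact sources (stars) are
    assumed to cluster tightly around a low concentration, extended sources
    (galaxies) to spread around a higher one.
    """
    like_star = gauss(concentration, mu=2.2, sigma=0.1)   # compact population
    like_gal = gauss(concentration, mu=3.5, sigma=0.6)    # extended population
    num = prior_star * like_star
    return num / (num + (1.0 - prior_star) * like_gal)
```

In the real pipeline the prior would additionally depend on magnitude and sky position, as the abstract states.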
Classification of Instructional Programs: 2000 Edition.
ERIC Educational Resources Information Center
Morgan, Robert L.; Hunt, E. Stephen
This third revision of the Classification of Instructional Programs (CIP) updates and modifies education program classifications, providing a taxonomic scheme that supports the accurate tracking, assessment, and reporting of field of study and program completions activity. This edition has also been adopted as the standard field of study taxonomy…
NASA Astrophysics Data System (ADS)
Makowski, Christopher
The coastal (terrestrial) and benthic environments along the southeast Florida continental shelf show a unique biophysical succession of marine features from a highly urbanized, developed coastal region in the north (i.e. northern Miami-Dade County) to a protective marine sanctuary in the southeast (i.e. Florida Keys National Marine Sanctuary). However, the establishment of a standard bio-geomorphological classification scheme for this area of coastal and benthic environments is lacking. The purpose of this study was to test the hypothesis and answer the research question of whether new parameters integrating geomorphological components with dominant biological covers could be developed and applied across multiple remote sensing platforms for an innovative way to identify, interpret, and classify diverse coastal and benthic environments along the southeast Florida continental shelf. An ordered, manageable hierarchical classification scheme was developed to incorporate the categories of Physiographic Realm, Morphodynamic Zone, Geoform, Landform, Dominant Surface Sediment, and Dominant Biological Cover. Six different remote sensing platforms (i.e. five multi-spectral satellite image sensors and one high-resolution aerial orthoimagery) were acquired, delineated according to the new classification scheme, and compared to determine optimal formats for classifying the study area. Cognitive digital classification at a nominal scale of 1:6000 proved to be more accurate than autoclassification programs and was therefore used to differentiate coastal marine environments based on spectral reflectance characteristics, such as color, tone, saturation, pattern, and texture of the seafloor topology. In addition, attribute tables were created in conjunction with the interpretations to quantify and compare the spatial relationships between classificatory units. IKONOS-2 satellite imagery was determined to be the optimal platform for applying the hierarchical classification scheme.
However, each remote sensing platform had beneficial properties depending on research goals, logistical restrictions, and financial support. This study concluded that a new hierarchical comprehensive classification scheme for identifying coastal marine environments along the southeast Florida continental shelf could be achieved by integrating geomorphological features with biological coverages. This newly developed scheme, which can be applied across multiple remote sensing platforms with GIS software, establishes an innovative classification protocol to be used in future research studies.
Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.
Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann
2017-04-01
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. 
Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification and second in segmentation among 25 and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
Oi, Shizuo
2011-10-01
Hydrocephalus is a complex pathophysiology with disturbed cerebrospinal fluid (CSF) circulation. Numerous classification schemes have been published, focusing on various criteria such as associated anomalies/underlying lesions, CSF circulation/intracranial pressure patterns, clinical features, and other categories. However, no definitive classification exists that comprehensively covers all of these aspects. The new classification of hydrocephalus, "Multi-categorical Hydrocephalus Classification" (Mc HC), was devised and developed to cover the entire range of aspects of hydrocephalus with all considerable classification items and categories. The ten "Mc HC" categories are I: onset (age, phase), II: cause, III: underlying lesion, IV: symptomatology, V: pathophysiology 1-CSF circulation, VI: pathophysiology 2-ICP dynamics, VII: chronology, VIII: post-shunt, IX: post-endoscopic third ventriculostomy, and X: others. From a 100-year search of publications related to the classification of hydrocephalus, 14 representative publications were reviewed and divided into the 10 categories. The Baumkuchen classification graph made from the round o'clock classification demonstrated the historical tendency of deviation toward the categories in pathophysiology, either CSF or ICP dynamics. In the preliminary clinical application, it was concluded that "Mc HC" is extremely effective in expressing the individual state with various categories in the past and present condition or among the compatible cases of hydrocephalus, along with the possible chronological change in the future.
Geiger-mode avalanche photodiode focal plane arrays for three-dimensional imaging LADAR
NASA Astrophysics Data System (ADS)
Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph
2010-09-01
We report on the development of focal plane arrays (FPAs) employing two-dimensional arrays of InGaAsP-based Geiger-mode avalanche photodiodes (GmAPDs). These FPAs incorporate InP/InGaAs(P) Geiger-mode avalanche photodiodes (GmAPDs) to create pixels that detect single photons at shortwave infrared wavelengths with high efficiency and low dark count rates. GmAPD arrays are hybridized to CMOS read-out integrated circuits (ROICs) that enable independent laser radar (LADAR) time-of-flight measurements for each pixel, providing three-dimensional image data at frame rates approaching 200 kHz. Microlens arrays are used to maintain high fill factor of greater than 70%. We present full-array performance maps for two different types of sensors optimized for operation at 1.06 μm and 1.55 μm, respectively. For the 1.06 μm FPAs, overall photon detection efficiency of >40% is achieved at <20 kHz dark count rates with modest cooling to ~250 K using integrated thermoelectric coolers. We also describe the first evaluation of these FPAs when multi-photon pulses are incident on single pixels. The effective detection efficiency for multi-photon pulses shows excellent agreement with predictions based on Poisson statistics. We also characterize the crosstalk as a function of pulse mean photon number. Relative to the intrinsic crosstalk contribution from hot carrier luminescence that occurs during avalanche current flows resulting from single incident photons, we find a modest rise in crosstalk for multi-photon incident pulses that can be accurately explained by direct optical scattering.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable and in some cases better than that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
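The core idea (select the training objects most similar to the query, then fit a local model with distance-based weights) can be sketched compactly. This is a simplified stand-in: a distance-weighted local class vote replaces the local PLS-DA model that the paper actually fits, and the toy data are invented for illustration:

```python
import math

def lw_classify(query, X, y, k=5):
    """Locally weighted classification sketch.

    Keep only the k training samples most similar to the query and let
    closer samples weigh more (a weighted vote standing in for the local
    PLS-DA model described in the abstract).
    """
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(zip(X, y), key=lambda p: dist(query, p[0]))[:k]
    votes = {}
    for xi, yi in nearest:
        w = 1.0 / (1e-9 + dist(query, xi))   # non-uniform, distance-based weight
        votes[yi] = votes.get(yi, 0.0) + w
    return max(votes, key=votes.get)

# Toy two-class data: class "A" around (0, 0), class "B" around (3, 3)
X = [(0, 0), (0.2, 0.1), (-0.1, 0.3), (3, 3), (3.1, 2.9), (2.8, 3.2)]
y = ["A", "A", "A", "B", "B", "B"]
```

Because each prediction uses only local calibration samples, the global decision boundary can be non-linear even though every local model is linear, which is what makes the approach effective on the non-linear data sets described above.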
A New Qualitative Typology to Classify Treading Water Movement Patterns
Schnitzler, Christophe; Button, Chris; Croft, James L.
2015-01-01
This study proposes a new qualitative typology that can be used to classify learners treading water into different skill-based categories. To establish the typology, 38 participants were videotaped while treading water and their movement patterns were qualitatively analyzed by two experienced biomechanists. 13 sport science students were then asked to classify eight of the original participants after watching a brief tutorial video about how to use the typology. To examine intra-rater consistency, each participant was presented in a random order three times. Generalizability (G) and Decision (D) studies were performed to estimate the variance due to rater, occasion, video, and the interactions between them, and to determine the reliability of the raters' answers. A typology of five general classes of coordination was defined amongst the original 38 participants. The G-study showed an accurate and reliable assessment of different pattern type, with a percentage of correct classification of 80.1%, an overall Fleiss' Kappa coefficient K = 0.6, and an overall generalizability φ coefficient of 0.99. This study showed that the new typology proposed to characterize the behaviour of individuals treading water was both accurate and highly reliable. Movement pattern classification using the typology might help practitioners distinguish between different skill-based behaviours and potentially guide instruction of key aquatic survival skills. Key points: Treading water behavioral adaptation can be classified along two dimensions: the type of force created (drag vs. lift) and the frequency of the force impulses. Based on these concepts, 9 behavioral types can be identified, providing the basis for a typology. Provided with macroscopic descriptors (movements of the limb relative to the water, and synchronous vs. asynchronous movements), analysts can characterize behavioral type accurately and reliably. PMID:26336339
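The Fleiss' kappa agreement statistic reported above (K = 0.6) is computed from a subjects × categories table of rating counts; a minimal sketch:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for `ratings`: one row per subject, one column per
    category, each cell the number of raters assigning that category."""
    n_sub = len(ratings)
    n_rat = sum(ratings[0])                       # raters per subject (constant)
    # marginal proportion of assignments falling in each category
    p_j = [sum(col) / (n_sub * n_rat) for col in zip(*ratings)]
    # per-subject observed agreement among rater pairs
    P_i = [(sum(c * c for c in row) - n_rat) / (n_rat * (n_rat - 1))
           for row in ratings]
    P_bar = sum(P_i) / n_sub                      # mean observed agreement
    P_e = sum(p * p for p in p_j)                 # chance agreement
    return (P_bar - P_e) / (1.0 - P_e)
```

Kappa is 1 for perfect agreement and 0 when agreement is at chance level, which is why it complements the raw percentage of correct classification also quoted in the abstract.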
An Improvement To The k-Nearest Neighbor Classifier For ECG Database
NASA Astrophysics Data System (ADS)
Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul
2018-03-01
The k nearest neighbor (kNN) is a non-parametric classifier and has been widely used for pattern classification. However, in practice, the performance of kNN often tends to fail due to the lack of information on how the samples are distributed among them. Moreover, kNN is no longer optimal when the training samples are limited. Another problem observed in kNN is regarding the weighting issues in assigning the class label before classification. Thus, to solve these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. Here, a Mahalanobis distance is applied to avoid the imbalance of the samples' distribution. Then, a surrounding rule is employed to obtain the nearest centroid neighbor based on the distributions of training samples and its distance to the query point. Consequently, the fuzzy membership function is employed to assign the query point to the class label which is most frequently represented among the nearest centroid neighbors. Experimental studies on electrocardiogram (ECG) signals are conducted in this study. The classification performances are evaluated in two experimental steps, i.e., different values of k and different sizes of feature dimensions. Subsequently, a comparative study of the kNN, kNCN, FkNN and MFkNCN classifiers is conducted to evaluate the performances of the proposed classifier. The results show that the performance of MFkNCN consistently exceeds the kNN, kNCN and FkNN with the best classification rates of 96.5%.
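The Mahalanobis distance that underpins MFkNCN rescales each direction by the class scatter, so unevenly spread classes do not dominate. A minimal 2-D sketch with a plain nearest-centroid assignment standing in for the full surrounding/fuzzy rules of the paper (class names, centroids, and covariances below are invented for illustration):

```python
def mahalanobis2d(x, centroid, cov):
    """Mahalanobis distance in 2-D; cov = ((a, b), (b, c)) is the class
    covariance, so directions with larger scatter count for less."""
    dx, dy = x[0] - centroid[0], x[1] - centroid[1]
    (a, b), (_, c) = cov
    det = a * c - b * b
    # quadratic form d^T inv(cov) d, with the 2x2 inverse written out
    return ((c * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det) ** 0.5

def nearest_centroid(x, centroids, covs):
    """Assign x to the class whose centroid is closest in Mahalanobis terms."""
    d = {c: mahalanobis2d(x, mu, covs[c]) for c, mu in centroids.items()}
    return min(d, key=d.get)

# Hypothetical two-class ECG feature space:
centroids = {"normal": (0.0, 0.0), "arrhythmia": (4.0, 0.0)}
covs = {"normal": ((1.0, 0.0), (0.0, 1.0)),
        "arrhythmia": ((1.0, 0.0), (0.0, 1.0))}
```

With an identity covariance this reduces to the Euclidean case; with anisotropic covariances the same point can change class, which is the imbalance correction the abstract refers to.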
Sun, Jing Chuan; Zhang, Bin; Shi, Jiangang; Sun, Kai Qiang; Huan, Le; Sun, Xiao Fei; Liu, Ning; Zheng, Bing; Wang, Hai Bo
2018-04-27
To analyze the correlation between the K-line-based classification of patients with ossification of the posterior longitudinal ligament (OPLL) and their outcome after anterior controllable antedisplacement and fusion (ACAF) surgery. A series of 24 patients with multisegmental OPLL were enrolled. All patients underwent ACAF surgery. First, the patients were classified into 2 groups according to their K-line classification. Then, we separated the patients into subgroups according to their OPLL thickness. The Japanese Orthopaedic Association scores before and 6 months after surgery were studied, and the recovery rate (RR) was calculated. The preoperative and postoperative radiologic parameters were also investigated. Clinical and radiographic assessments showed no significant correlation between the K-line-based classification of patients with OPLL and their outcome after ACAF surgery (P > 0.05). When the OPLL was ≤6 mm thick, the K-line-based classification groups had a similar change of occupation ratio and RR (P > 0.05). However, when the OPLL was >6 mm thick, the mean RR was 61.8% ± 14.0% in the K-line (+) group and 78.3% ± 9.7% in the K-line (-) group (P < 0.05), and the mean change of occupation ratio was 16.0% ± 4.2% in the K-line (+) group and 28.0% ± 7.1% in the K-line (-) group (P < 0.05). This study shows that the K-line can predict the clinical outcome of ACAF surgery for multisegmental OPLL in a different way from posterior decompression surgery. When the OPLL was thin, the outcome was satisfactory and showed no correlation with the K-line-based classification. When the OPLL was >6 mm thick, the K-line (-) group patients had a better outcome than the K-line (+) group patients. Copyright © 2018 Elsevier Inc. All rights reserved.
A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.
Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V
2016-07-01
In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes inadequate for new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that exploits the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using a k-nearest neighbor (K-NN) classification approach has been introduced. The pivotal objective of this work is to represent real training data sets with a reduced number of elements or instances. These reduced data sets ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset that consists of 90 million pairs has been used. The proposed model reduces the imbalance data sets from large-scale data sets without loss of accuracy. The obtained results show that the MapReduce-based K-NN classifier provided accurate results for big DNA data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
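The map/reduce data-reduction idea described above — each mapper condenses its partition to a few class representatives, and the reducer merges them into a small training set for the final classifier — can be illustrated without a Hadoop cluster. A stdlib-only Python sketch with 1-D features, using the per-class mean as the representative (an illustrative simplification of the paper’s clustering-based reduction):

```python
from collections import defaultdict

def map_phase(partition):
    """Mapper: condense one partition of (value, label) pairs
    to a single (mean, count) representative per class label."""
    sums = defaultdict(lambda: [0.0, 0])   # label -> [running sum, count]
    for value, label in partition:
        sums[label][0] += value
        sums[label][1] += 1
    return [(label, (s / n, n)) for label, (s, n) in sums.items()]

def reduce_phase(mapped):
    """Reducer: merge per-partition representatives into one
    count-weighted mean per class across all partitions."""
    agg = defaultdict(lambda: [0.0, 0])
    for label, (mean, n) in mapped:
        agg[label][0] += mean * n
        agg[label][1] += n
    return {label: s / n for label, (s, n) in agg.items()}
```

The reduced dictionary of class representatives (rather than the full dataset) is what a downstream K-NN classifier would then search, which is the source of the claimed speed and storage gains.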
USDA-ARS?s Scientific Manuscript database
A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...
Multiclass cancer diagnosis using tumor gene expression signatures
Ramaswamy, S.; Tamayo, P.; Rifkin, R.; ...
2001-12-11
The optimal treatment of patients with cancer depends on establishing accurate diagnoses by using a complex combination of clinical and histopathological data. In some instances, this task is difficult or impossible because of atypical clinical presentation or histopathology. To determine whether the diagnosis of multiple common adult malignancies could be achieved purely by molecular classification, we subjected 218 tumor samples, spanning 14 common tumor types, and 90 normal tissue samples to oligonucleotide microarray gene expression analysis. The expression levels of 16,063 genes and expressed sequence tags were used to evaluate the accuracy of a multiclass classifier based on a support vector machine algorithm. Overall classification accuracy was 78%, far exceeding the accuracy of random classification (9%). Poorly differentiated cancers resulted in low-confidence predictions and could not be accurately classified according to their tissue of origin, indicating that they are molecularly distinct entities with dramatically different gene expression patterns compared with their well differentiated counterparts. Taken together, these results demonstrate the feasibility of accurate, multiclass molecular cancer classification and suggest a strategy for future clinical implementation of molecular cancer diagnostics.
Patange Subba Rao, Sheethal Prasad; Lewis, James; Haddad, Ziad; Paringe, Vishal; Mohanty, Khitish
2014-10-01
The aim of the study was to evaluate inter-observer reliability and intra-observer reproducibility between the three-column classification and Schatzker classification systems using 2D and 3D CT models. Fifty-two consecutive patients with tibial plateau fractures were evaluated by five orthopaedic surgeons. All patients were classified into the Schatzker and three-column classification systems using x-rays and 2D and 3D CT images. The inter-observer reliability was evaluated in the first round and the intra-observer reliability was determined during the second round 2 weeks later. The average intra-observer reproducibility for the three-column classification was from substantial to excellent in all sub-classifications, as compared with the Schatzker classification. The inter-observer kappa values increased from substantial to excellent in the three-column classification and to moderate in the Schatzker classification. The average values for the three-column classification for all the categories are as follows: (I-III) k2D = 0.718, 95% CI 0.554-0.864, p < 0.0001 and average k3D = 0.874, 95% CI 0.754-0.890, p < 0.0001. For the Schatzker classification system, the average values for all six categories are as follows: (I-VI) k2D = 0.536, 95% CI 0.365-0.685, p < 0.0001 and average k3D = 0.552, 95% CI 0.405-0.700, p < 0.0001. The values are statistically significant. Statistically significant inter-observer values in both rounds were noted with the three-column classification, indicating excellent agreement. The intra-observer reproducibility for the three-column classification improved as compared with the Schatzker classification. The three-column classification seems to be an effective way to characterise and classify fractures of the tibial plateau.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed by using a back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
Multi-view L2-SVM and its multi-view core vector machine.
Huang, Chengquan; Chung, Fu-lai; Wang, Shitong
2016-03-01
In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has the flexibility, like μ-SVC, that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Physical Human Activity Recognition Using Wearable Sensors.
Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine
2015-12-11
This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.
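The winning supervised baseline above, plain k-NN with majority voting, fits in a few lines of stdlib Python. A minimal sketch (the feature vectors and activity labels are illustrative; the study classifies feature vectors extracted from the three inertial sensor streams):

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Plain k-NN with majority vote.
    train: list of (feature_vector, label) pairs; x: query vector."""
    # sort training points by Euclidean distance to the query, keep k closest
    neighbors = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    # return the most common label among the k nearest neighbors
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

For example, with four training points labeled 'sit' and 'walk', a query near the 'sit' cluster is assigned 'sit' by its three nearest neighbors.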
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Effects of stress typicality during speeded grammatical classification.
Arciuli, Joanne; Cupples, Linda
2003-01-01
The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were also classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.
Neural correlates of own- and other-race face perception: spatial and temporal response differences.
Natu, Vaidehi; Raboy, David; O'Toole, Alice J
2011-02-01
Humans show an "other-race effect" for face recognition, with more accurate recognition of own- versus other-race faces. We compared the neural representations of own- and other-race faces using functional magnetic resonance imaging (fMRI) data in combination with a multi-voxel pattern classifier. Neural activity was recorded while Asians and Caucasians viewed Asian and Caucasian faces. A pattern classifier, applied to voxels across a broad range of ventral temporal areas, discriminated the brain activity maps elicited in response to Asian versus Caucasian faces in the brains of both Asians and Caucasians. Classification was most accurate in the first few time points of the block and required the use of own-race faces in the localizer scan to select voxels for classifier input. Next, we examined differences in the time-course of neural responses to own- and other-race faces and found evidence for a temporal "other-race effect." Own-race faces elicited a larger neural response initially that attenuated rapidly. The response to other-race faces was weaker at first, but increased over time, ultimately surpassing the magnitude of the own-race response in the fusiform "face" area (FFA). A similar temporal response pattern held across a broad range of ventral temporal areas. The pattern-classification results indicate the early availability of categorical information about own- versus other-race face status in the spatial pattern of neural activity. The slower, more sustained, brain response to other-race faces may indicate the need to recruit additional neural resources to process other-race faces for identification. Copyright © 2010 Elsevier Inc. All rights reserved.
Determination of optimum "multi-channel surface wave method" field parameters.
DOT National Transportation Integrated Search
2012-12-01
Multi-channel surface wave methods (especially the multi-channel analyses of surface wave method; MASW) are routinely used to : determine the shear-wave velocity of the subsurface to depths of 100 feet for site classification purposes. Users are awar...
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we propose a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas. The single-cancer-type analysis suggests that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
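The core low-rank idea — find a dominant subspace and work there instead of in the full-dimensional data — can be illustrated with a rank-1 approximation computed by power iteration. A stdlib-only Python sketch (LRAcluster itself fits a low-rank regularized probabilistic model across multiple omics data types, which this toy example does not attempt):

```python
def rank1_approx(A, iters=100):
    """Rank-1 approximation of matrix A (list of row lists) via power
    iteration: returns (sigma, u, v) with A ~= sigma * u * v^T, where
    u and v are unit vectors. Assumes A has a nonzero dominant singular value."""
    m, n = len(A), len(A[0])
    v = [1.0] * n                          # arbitrary nonzero starting vector
    for _ in range(iters):
        # u <- A v, normalised
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        # v <- A^T u; its norm converges to the dominant singular value
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = sum(x * x for x in v) ** 0.5
        v = [x / sigma for x in v]
    return sigma, u, v
```

Clustering samples by their coordinates along the leading singular vectors, rather than their raw features, is the dimension-reduction step that makes clustering of high-dimensional data tractable.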
The Area-Time Complexity of Sorting.
1984-12-01
suggests a classification of keys into short (k < log n), long (k > 2 log n), and of medium length. Optimal or near-optimal designs of VLSI sorters are ...
A multi-sensor scenario for coastal surveillance
NASA Astrophysics Data System (ADS)
van den Broek, A. C.; van den Broek, S. P.; van den Heuvel, J. C.; Schwering, P. B. W.; van Heijningen, A. W. P.
2007-10-01
Maritime borders and coastal zones are susceptible to threats such as drug trafficking, piracy, and the undermining of economic activities. At TNO Defence, Security and Safety, various studies aim at improving situational awareness in a coastal zone. In this study we focus on multi-sensor surveillance of the coastal environment. We present a study on improving classification results for small sea-surface targets using an advanced sensor suite and a scenario in which a small boat is approaching the coast. A next-generation sensor suite mounted on a tower has been defined, consisting of a maritime surveillance and tracking radar system capable of producing range profiles and ISAR imagery of ships, an advanced infrared camera and a laser range profiler. For this suite we have developed a multi-sensor classification procedure, which is used to evaluate the capabilities for recognizing and identifying non-cooperative ships in coastal waters. We have found that the different sensors give complementary information. Each sensor has its own specific distance range in which it contributes most. A multi-sensor approach reduces the number of misclassifications, and reliable classification results are obtained earlier compared to a single-sensor approach.
LONGITUDINAL COHORT METHODS STUDIES
Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Exposure classification for occupational studies is relatively easy compared to predicting residential childhood exposures. Recent NHEXAS (Maryland) study articl...
Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat
2017-09-01
Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
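The greedy core of the feature search used above can be sketched generically. This is plain sequential forward selection — the floating variant (SFFS) named in the abstract additionally tries conditional backward removals after each addition — with the feature names and scoring function left as caller-supplied assumptions:

```python
def forward_select(features, score, k):
    """Greedy sequential forward selection: repeatedly add the feature
    that most improves score(subset), until k features are selected.
    score: callable taking a list of features, returning a number
    (e.g. cross-validated classifier accuracy on that feature subset)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In practice `score` would wrap a classifier evaluation (the paper pairs the search with a kNN classifier and ranks the surviving features with a t-test); with an illustrative additive score the greedy order is simply by individual feature value.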
Shadow detection and removal in RGB VHR images for land use unsupervised classification
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2016-09-01
Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows actually limit benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to the CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so called "anisotropic Procrustes" and the "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps, have been compared to each other and with a supervised clustering procedure.
Tissue discrimination in magnetic resonance imaging of the rotator cuff
NASA Astrophysics Data System (ADS)
Meschino, G. J.; Comas, D. S.; González, M. A.; Capiel, C.; Ballarin, V. L.
2016-04-01
Evaluation and diagnosis of diseases of the muscles within the rotator cuff can be done using different modalities, with Magnetic Resonance being the most widely used. Criteria exist to evaluate the degree of fat infiltration and muscle atrophy, but these have low accuracy and show great inter- and intra-observer variability. In this paper, an analysis of the texture features of the rotator cuff muscles is performed to classify them and other tissues. A general supervised classification approach was used, combining forward search as the feature selection method with kNN as the classification rule. Sections of Magnetic Resonance Images of the tissues of interest were selected by specialist doctors and were considered the Gold Standard. Accuracies obtained were 93% for T1-weighted images and 92% for T2-weighted images. As immediate future work, the combination of both image sequences will be considered, with the expectation of improving the results, as well as the use of other Magnetic Resonance sequences. This work represents an initial point for the classification and quantification of the degree of fat infiltration and muscle atrophy. From this initial point, it is expected to build an accurate and objective system which will benefit future research and patients’ health.
NASA Astrophysics Data System (ADS)
Pham, Dzung L.; Han, Xiao; Rettmann, Maryam E.; Xu, Chenyang; Tosun, Duygu; Resnick, Susan; Prince, Jerry L.
2002-05-01
In previous work, the authors presented a multi-stage procedure for the semi-automatic reconstruction of the cerebral cortex from magnetic resonance images. This method suffered from several disadvantages. First, the tissue classification algorithm used can be sensitive to noise within the image. Second, manual interaction was required for masking out undesired regions of the brain image, such as the ventricles and putamen. Third, iterated median filters were used to perform a topology correction on the initial cortical surface, resulting in an overly smoothed initial surface. Finally, the deformable surface used to converge to the cortex had difficulty capturing narrow gyri. In this work, all four disadvantages of the procedure have been addressed. A more robust tissue classification algorithm is employed and the manual masking step is replaced by an automatic method involving level set deformable models. Instead of iterated median filters, an algorithm developed specifically for topology correction is used. The last disadvantage is addressed using an algorithm that artificially separates adjacent sulcal banks. The new procedure is more automated but also more accurate than the previous one. Its utility is demonstrated by performing a preliminary study on data from the Baltimore Longitudinal Study of Aging.
Accelerating the Original Profile Kernel.
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical application to large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of the speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred.
In this study, we will compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to Bayesian techniques based on classical neural networks. [1] Winter, M.E. "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference on Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] Keshava, N. "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter. "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR," in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key. "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding," in Proceedings of the 2007 AMOS Conference, Maui, HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz. "Autonomous single-pass endmember approximation using lattice auto-associative memories," Neurocomputing (Elsevier), accepted (June 2008).
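The two evaluation metrics named above can be computed directly from predicted versus true labels. A minimal sketch, assuming the standard definitions (fraction of target samples correctly classified for Pd; fraction of non-target samples flagged as target for Rfa), which may differ in detail from those used in the study:

```python
# Sketch of the two evaluation metrics: probability of correct
# classification (Pd) and rate of false detections (Rfa). The exact
# definitions here are generic assumptions, not taken from the study.

def pd_rfa(true_labels, pred_labels, target):
    """Pd: fraction of 'target' samples correctly classified.
    Rfa: fraction of non-'target' samples falsely classified as 'target'."""
    pairs = list(zip(true_labels, pred_labels))
    tp = sum(1 for t, p in pairs if t == target and p == target)
    n_target = sum(1 for t in true_labels if t == target)
    fa = sum(1 for t, p in pairs if t != target and p == target)
    n_other = len(true_labels) - n_target
    return tp / n_target, fa / n_other

true = ["A", "A", "A", "B", "B", "C"]
pred = ["A", "A", "B", "A", "B", "C"]
pd, rfa = pd_rfa(true, pred, "A")   # Pd = 2/3, Rfa = 1/3
```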
Kim, Hahnsung; Park, Suhyung; Kim, Eung Yeop; Park, Jaeseok
2018-09-01
To develop a novel, retrospective multi-phase non-contrast-enhanced MRA (ROMANCE MRA) in a single acquisition for robust angiogram separation even in the presence of cardiac arrhythmia. In the proposed ROMANCE MRA, data were continuously acquired over all cardiac phases using retrospective, multi-phase flow-sensitive single-slab 3D fast spin echo (FSE) with variable refocusing flip angles, while an external pulse oximeter was synchronized with pulse repetitions in FSE to record real-time information on cardiac cycles. Data were then sorted into k-bin space using the real-time cardiac information. Angiograms were reconstructed directly from k-bin space by solving a constrained optimization problem with both subtraction-induced sparsity and low-rank priors. Peripheral MRA was performed in normal volunteers with/without caffeine consumption and in a volunteer with cardiac arrhythmia using conventional fresh blood imaging (FBI) and the proposed ROMANCE MRA for comparison. The proposed ROMANCE MRA shows superior performance in accurately delineating both major and small vessel branches with robust background suppression compared with conventional FBI. Even in the presence of irregular heartbeats, the proposed method exhibits clearer depiction of angiograms than conventional methods within a clinically reasonable imaging time. We successfully demonstrated the feasibility of the proposed ROMANCE MRA in generating robust angiograms with background suppression. © 2018 International Society for Magnetic Resonance in Medicine.
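The two priors in the constrained reconstruction correspond to well-known proximal operators: soft-thresholding for sparsity and singular-value thresholding for low rank. A schematic sketch of those two operators only, assuming a simple alternating scheme on a phases-by-pixels matrix (not the actual ROMANCE reconstruction code):

```python
# Schematic of the two proximal operators behind the stated priors:
# soft-thresholding (sparsity of the subtracted angiogram) and
# singular-value thresholding (low rank across cardiac phases).
# Data and thresholds are illustrative stand-ins.
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (promotes sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def svd_threshold(X, tau):
    """Singular-value soft-thresholding (promotes low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

phases = np.random.default_rng(0).normal(size=(8, 16))  # phases x pixels
sparse_part = soft_threshold(phases, 1.0)
lowrank_part = svd_threshold(phases, 2.0)
```

In a full reconstruction these steps would alternate with a data-consistency projection onto the acquired k-bin samples.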
2016-07-01
Keywords: reconstruction, video synchronization, multi-view tracking, action recognition, reasoning with uncertainty. Surviving section titles from the report include "Human action recognition across multi-views" and "Multi-view multi-object tracking with 3D cues".
Theoretical modeling of laser-induced plasmas using the ATOMIC code
NASA Astrophysics Data System (ADS)
Colgan, James; Johns, Heather; Kilcrease, David; Judge, Elizabeth; Barefield, James, II; Clegg, Samuel; Hartig, Kyle
2014-10-01
We report on efforts to model the emission spectra generated from laser-induced breakdown spectroscopy (LIBS). LIBS is a popular and powerful method of quickly and accurately characterizing unknown samples in a remote manner. In particular, LIBS is utilized by the ChemCam instrument on the Mars Science Laboratory. We model the LIBS plasma using the Los Alamos suite of atomic physics codes. Since LIBS plasmas generally have temperatures between 3000 K and 12,000 K, the emission spectra typically result from the neutral and singly ionized stages of the target atoms. We use the Los Alamos atomic structure and collision codes to generate sets of atomic data and use the plasma kinetics code ATOMIC to perform LTE or non-LTE calculations that generate level populations and an emission spectrum for the element of interest. In this presentation we compare the emission spectrum from ATOMIC with an Fe LIBS laboratory-generated plasma as well as spectra from the ChemCam instrument. We also discuss various physics aspects of the modeling of LIBS plasmas that are necessary for accurate characterization of the plasma, such as multi-element target composition effects, radiation transport effects, and accurate line shape treatments. The Los Alamos National Laboratory is operated by Los Alamos National Security, LLC for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.
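In the LTE limit mentioned above, relative level populations follow a Boltzmann distribution, n_i ∝ g_i exp(-E_i / k_B T). A toy sketch with hypothetical energy levels at a typical LIBS temperature (the real ATOMIC code solves full collisional-radiative kinetics; this only illustrates the LTE limit):

```python
# LTE level populations from the Boltzmann distribution. The
# three-level "atom" below is hypothetical, chosen only to illustrate
# the formula n_i ∝ g_i * exp(-E_i / (k_B * T)).
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_populations(levels, T):
    """levels: list of (degeneracy g, energy E in eV). Returns fractional populations."""
    weights = [g * math.exp(-E / (K_B * T)) for g, E in levels]
    Z = sum(weights)  # partition function over the included levels
    return [w / Z for w in weights]

# Hypothetical three-level atom at a typical LIBS temperature of 8000 K
pops = boltzmann_populations([(1, 0.0), (3, 1.0), (5, 2.5)], 8000.0)
```

Line intensities then follow from these populations together with transition probabilities and line-shape functions.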
Bayesian multi-task learning for decoding multi-subject neuroimaging data.
Marquand, Andre F; Brammer, Michael; Williams, Steven C R; Doyle, Orla M
2014-05-15
Decoding models based on pattern recognition (PR) are becoming increasingly important tools for neuroimaging data analysis. In contrast to alternative (mass-univariate) encoding approaches that use hierarchical models to capture inter-subject variability, inter-subject differences are not typically handled efficiently in PR. In this work, we propose to overcome this problem by recasting the decoding problem in a multi-task learning (MTL) framework. In MTL, a single PR model is used to learn different but related "tasks" simultaneously. The primary advantage of MTL is that it makes more efficient use of the data available and leads to more accurate models by making use of the relationships between tasks. In this work, we construct MTL models where each subject is modelled by a separate task. We use a flexible covariance structure to model the relationships between tasks and induce coupling between them using Gaussian process priors. We present an MTL method for classification problems and demonstrate a novel mapping method suitable for PR models. We apply these MTL approaches to classifying many different contrasts in a publicly available fMRI dataset and show that the proposed MTL methods produce higher decoding accuracy and more consistent discriminative activity patterns than currently used techniques. Our results demonstrate that MTL provides a promising method for multi-subject decoding studies by focusing on the commonalities between a group of subjects rather than the idiosyncratic properties of different subjects. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
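One standard way to couple tasks through a Gaussian process prior is to build the joint covariance as the Kronecker product of a task covariance (how similar subjects are) and a data kernel. A minimal numpy sketch of that construction, assuming this generic form; the paper's actual covariance parameterisation may differ:

```python
# Sketch of multi-task coupling via a joint covariance: Kronecker
# product of a task (subject) covariance and a data kernel. Matrix
# values are illustrative stand-ins.
import numpy as np

def multitask_kernel(K_task, K_data):
    """Joint covariance over all (task, sample) pairs."""
    return np.kron(K_task, K_data)

K_task = np.array([[1.0, 0.5],        # 2 subjects, off-diagonal 0.5
                   [0.5, 1.0]])       # encodes between-subject coupling
X = np.random.default_rng(1).normal(size=(4, 3))  # 4 samples, 3 features
K_data = X @ X.T                                  # linear kernel
K = multitask_kernel(K_task, K_data)              # shape (8, 8)
```

Setting the off-diagonal of `K_task` to zero recovers independent per-subject models; increasing it shares more statistical strength across subjects.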
Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter
2010-12-01
Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive and under-investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray based data characteristics on the predictive performance for various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
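A simulation study of this kind estimates test error for simulated expression data as data characteristics vary. A toy version for one factor, training-set size, using a simple nearest-centroid rule as a stand-in for the classifiers named above (class separation and gene count are arbitrary illustration choices):

```python
# Toy simulation in the spirit of the study: generalization error as a
# function of training-set size for simulated two-class expression
# data, using a nearest-centroid stand-in classifier.
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_train, n_test=200, n_genes=20, shift=1.0):
    """Two balanced Gaussian classes differing by a mean shift per gene."""
    def draw(n):
        y = np.repeat([0, 1], n // 2)
        X = rng.normal(size=(y.size, n_genes)) + shift * y[:, None]
        return X, y
    Xtr, ytr = draw(n_train)
    Xte, yte = draw(n_test)
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float(np.mean(pred != yte))  # estimated generalization error

errors = {n: simulate(n) for n in (10, 40, 160)}
```

Sweeping the other factors (fold change, variability, correlation) in the same loop would populate a small error database analogous to the one described.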
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-07-08
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
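One simple way to fold size information into a denomination classifier, as described above in spirit, is to mask out denominations whose known physical size is incompatible with the measured note before taking the argmax over class scores. A sketch of that idea; the CNN itself is omitted, and the scores, size table, and tolerance below are hypothetical:

```python
# Sketch of combining classifier scores with banknote size
# information: size-incompatible denominations are masked out before
# the final decision. All values below are made-up stand-ins.
import numpy as np

def classify_with_size(scores, sizes_mm, measured_mm, tol_mm=3.0):
    """scores: per-denomination confidences; mask size-incompatible classes."""
    sizes = np.asarray(sizes_mm, dtype=float)
    compatible = np.abs(sizes - measured_mm) <= tol_mm
    masked = np.where(compatible, scores, 0.0)
    return int(np.argmax(masked))

scores = np.array([0.5, 0.3, 0.2])   # hypothetical CNN confidences
lengths = [150.0, 136.0, 130.0]      # hypothetical note lengths (mm)
best = classify_with_size(scores, lengths, measured_mm=135.0)  # -> 1
```

Note that the class with the highest raw score is rejected here because its length is incompatible with the measured note.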
Multi-region analysis of longitudinal FDG-PET for the classification of Alzheimer’s disease
Gray, Katherine R.; Wolz, Robin; Heckemann, Rolf A.; Aljabar, Paul; Hammers, Alexander; Rueckert, Daniel
2012-01-01
Imaging biomarkers for Alzheimer’s disease are desirable for improved diagnosis and monitoring, as well as drug discovery. Automated image-based classification of individual patients could provide valuable diagnostic support for clinicians, when considered alongside cognitive assessment scores. We investigate the value of combining cross-sectional and longitudinal multi-region FDG-PET information for classification, using clinical and imaging data from the Alzheimer’s Disease Neuroimaging Initiative. Whole-brain segmentations into 83 anatomically defined regions were automatically generated for baseline and 12-month FDG-PET images. Regional signal intensities were extracted at each timepoint, as well as changes in signal intensity over the follow-up period. Features were provided to a support vector machine classifier. By combining 12-month signal intensities and changes over 12 months, we achieve significantly increased classification performance compared with using any of the three feature sets independently. Based on this combined feature set, we report classification accuracies of 88% between patients with Alzheimer’s disease and elderly healthy controls, and 65% between patients with stable mild cognitive impairment and those who subsequently progressed to Alzheimer’s disease. We demonstrate that information extracted from serial FDG-PET through regional analysis can be used to achieve state-of-the-art classification of diagnostic groups in a realistic multi-centre setting. This finding may be usefully applied in the diagnosis of Alzheimer’s disease, predicting disease course in individuals with mild cognitive impairment, and in the selection of participants for clinical trials. PMID:22236449
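The combined feature set described above amounts to concatenating, per subject, the regional intensities at each timepoint with their longitudinal change. A minimal sketch of that feature construction, using the study's 83-region count but synthetic values; the SVM step is omitted:

```python
# Sketch of the combined feature set: baseline and 12-month regional
# signal intensities plus their change, concatenated per subject.
# Values are synthetic stand-ins.
import numpy as np

def combined_features(baseline, month12):
    """baseline, month12: arrays of shape (n_subjects, n_regions)."""
    change = month12 - baseline              # longitudinal change
    return np.hstack([baseline, month12, change])

rng = np.random.default_rng(0)
baseline = rng.normal(loc=1.0, size=(5, 83))   # 5 subjects, 83 regions
month12 = baseline + rng.normal(scale=0.05, size=(5, 83))
X = combined_features(baseline, month12)       # shape (5, 249)
```

Each row of `X` would then be one training example for the support vector machine classifier.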
Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.
2011-01-01
Pattern classification of brain imaging data can enable the automatic detection of differences in cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information related to the regions of the brain that are most relevant to detect these differences by means of feature selection procedures, which are also well-suited to deal with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95% estimated with a leave-two-out cross-validation procedure was achieved by doing multi-source data classification. In addition, it is shown that the classification accuracy rate obtained by using multi-source data surpasses that reached by using single-source data, hence showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948
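The recursive elimination loop over anatomical regions can be sketched simply: score every remaining region, drop the least relevant one, and repeat. The relevance score below (between-class mean difference) is a deliberately simple stand-in for the composite-kernel relevance estimates used in the study:

```python
# Schematic of recursive feature elimination over anatomical regions.
# The relevance score is a simplified stand-in, and the region names
# and data are synthetic.
import numpy as np

def region_rfe(region_data, labels, n_keep):
    """region_data: dict of region name -> (n_samples, n_voxels) array."""
    remaining = dict(region_data)
    labels = np.asarray(labels)
    while len(remaining) > n_keep:
        scores = {
            name: abs(X[labels == 0].mean() - X[labels == 1].mean())
            for name, X in remaining.items()
        }
        worst = min(scores, key=scores.get)   # least informative region
        del remaining[worst]
    return sorted(remaining)

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 10)
regions = {
    "temporal": rng.normal(size=(20, 50)) + 2.0 * y[:, None],  # informative
    "occipital": rng.normal(size=(20, 50)),                    # pure noise
    "frontal": rng.normal(size=(20, 50)) + 1.0 * y[:, None],   # informative
}
kept = region_rfe(regions, y, n_keep=2)
```

With the synthetic data above, the noise-only region is eliminated and the two informative regions survive.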
NASA Astrophysics Data System (ADS)
Teye, Ernest; Huang, Xingyi; Dai, Huang; Chen, Quansheng
2013-10-01
A quick, accurate and reliable technique for discrimination of cocoa beans according to geographical origin is essential for quality control and traceability management. This study presents the application of near-infrared spectroscopy and multivariate classification for the differentiation of Ghana cocoa beans. A total of 194 cocoa bean samples from seven cocoa growing regions were used. Principal component analysis (PCA) was used to extract relevant information from the spectral data, and this gave visible cluster trends. The performances of four multivariate classification methods were compared: linear discriminant analysis (LDA), k-nearest neighbors (KNN), back-propagation artificial neural network (BPANN) and support vector machine (SVM). The performances of the models were optimized by cross validation. The results revealed that the SVM model was superior to the other methods, with a discrimination rate of 100% in both the training and prediction sets after preprocessing with mean centering (MC). BPANN had a discrimination rate of 99.23% for the training set and 96.88% for the prediction set, while the LDA model had 96.15% and 90.63% for the training and prediction sets, respectively. The KNN model had 75.01% for the training set and 72.31% for the prediction set. The non-linear classification methods used were superior to the linear ones. Generally, the results revealed that NIR spectroscopy coupled with an SVM model could be used successfully to discriminate cocoa beans according to their geographical origins for effective quality assurance.
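The first two steps of this chemometrics pipeline, mean centering (the "MC" preprocessing) followed by projection onto principal components, can be sketched with an SVD. The spectra below are synthetic stand-ins and the classifier step is omitted:

```python
# Sketch of mean centering (MC) plus PCA score projection, the
# preprocessing used to reveal cluster trends. Spectra are synthetic.
import numpy as np

def pca_scores(spectra, n_components=2):
    """Rows are spectra; returns PCA score coordinates after mean centering."""
    centered = spectra - spectra.mean(axis=0)     # mean centering (MC)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T         # project onto top PCs

rng = np.random.default_rng(7)
# Two synthetic "origins" with offset baselines over 100 wavelengths
origin_a = rng.normal(size=(10, 100))
origin_b = rng.normal(size=(10, 100)) + 1.5
scores = pca_scores(np.vstack([origin_a, origin_b]))  # shape (20, 2)
```

The score coordinates would then be plotted for visual cluster inspection or fed to LDA/KNN/BPANN/SVM for classification.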
WND-CHARM: Multi-purpose image classification using compound image transforms
Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.
2008-01-01
We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
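Feature weighting of the kind credited above is often done by scoring each feature with a Fisher-style discriminability ratio and using the scores as weights in the classifier. A generic illustration of that idea, assuming this simple scoring form rather than the exact WND-CHARM weighting scheme:

```python
# Generic Fisher-style feature scoring: variance of the class means
# over pooled within-class variance, per feature. Data are synthetic;
# this is not the exact WND-CHARM algorithm.
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(5)
y = np.repeat([0, 1], 15)
X = rng.normal(size=(30, 4))
X[:, 0] += 3.0 * y            # feature 0 is highly discriminative
w = fisher_scores(X, y)
best = int(np.argmax(w))      # -> 0
```

In a weighted nearest-neighbour rule, distances would then be computed with each feature dimension scaled by its score.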
2002-01-01
their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested
Wang, Xinglong; Rak, Rafal; Restificar, Angelo; Nobata, Chikashi; Rupp, C J; Batista-Navarro, Riza Theresa B; Nawaz, Raheel; Ananiadou, Sophia
2011-10-03
The selection of relevant articles for curation, and the linking of those articles to the experimental techniques that confirm their findings, became primary subjects of the recent BioCreative III contest. The contest's Protein-Protein Interaction (PPI) task consisted of two sub-tasks: the Article Classification Task (ACT) and the Interaction Method Task (IMT). ACT aimed to automatically select relevant documents for PPI curation, whereas the goal of IMT was to recognise the methods used in experiments for identifying the interactions in full-text articles. We proposed and compared several classification-based methods for both tasks, employing rich contextual features as well as features extracted from external knowledge sources. For IMT, a new method that classifies pair-wise relations between every text phrase and candidate interaction method obtained promising results with an F1 score of 64.49%, as tested on the task's development dataset. We also explored ways to combine this new approach and more conventional, multi-label document classification methods. For ACT, our classifiers exploited automatically detected named entities and other linguistic information. The evaluation results on the BioCreative III PPI test datasets showed that our systems were very competitive: one of our IMT methods yielded the best performance among all participants, as measured by F1 score, Matthews Correlation Coefficient and AUC iP/R; whereas for ACT, our best classifier was ranked second as measured by AUC iP/R, and was also competitive according to other metrics. Our novel approach that converts the multi-class, multi-label classification problem to a binary classification problem showed much promise in IMT. Nevertheless, on the test dataset the best performance was achieved by taking the union of the output of this method and that of a multi-class, multi-label document classifier, which indicates that the two types of systems complement each other in terms of recall.
For ACT, our system exploited a rich set of features and also obtained encouraging results. We examined the features with respect to their contributions to the classification results, and concluded that contextual words surrounding named entities, as well as the MeSH headings associated with the documents were among the main contributors to the performance.
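The reformulation described for IMT, turning a multi-class, multi-label decision into binary decisions over (text phrase, candidate method) pairs, can be sketched as instance generation. Phrase and method names below are made up for illustration:

```python
# Sketch of converting a multi-class, multi-label problem into binary
# classification over (phrase, candidate method) pairs. The phrases,
# methods, and gold annotations are hypothetical examples.

def make_pairwise_instances(phrases, candidate_methods, gold_pairs):
    """Yield ((phrase, method), label) binary training instances."""
    gold = set(gold_pairs)
    return [((p, m), (p, m) in gold)
            for p in phrases for m in candidate_methods]

phrases = ["co-immunoprecipitated with", "two-hybrid screen"]
methods = ["coimmunoprecipitation", "two hybrid", "x-ray crystallography"]
gold = [("co-immunoprecipitated with", "coimmunoprecipitation"),
        ("two-hybrid screen", "two hybrid")]
instances = make_pairwise_instances(phrases, methods, gold)  # 6 instances
```

Any binary classifier can then be trained on features of each pair, and document-level method labels recovered by aggregating positive pair predictions.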
Types of Seizures Affecting Individuals with TSC
New Terms for Seizure Classifications: The International League Against Epilepsy has approved a new system for classifying seizures, which will make the diagnosis and classification of seizures easier and more accurate.