Younghak Shin; Balasingham, Ilangko
2017-07-01
Colonoscopy is the standard method for polyp screening, performed by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We compare two approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. For the hand-crafted approach, combined shape and color features are extracted and a support vector machine (SVM) is used for classification. For the CNN approach, a deep learning architecture with three convolution and pooling layers is used. The proposed framework is evaluated on three public polyp databases. The experimental results show that the CNN-based deep learning framework outperforms the hand-crafted feature based method, achieving over 90% classification accuracy, sensitivity, specificity, and precision.
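As a rough illustration of the hand-crafted branch of the comparison above, a toy sketch of shape-and-color feature extraction feeding an SVM is given below. The feature set, patch generation, and dimensions are invented stand-ins, not the paper's actual pipeline.

```python
# Hypothetical sketch of a hand-crafted-feature polyp classifier:
# shape and color features are concatenated and fed to an SVM.
# All features and data here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(image_patch):
    """Toy stand-in for combined shape + color feature extraction."""
    color_hist = np.histogram(image_patch, bins=8, range=(0, 1))[0] / image_patch.size
    shape_stats = np.array([image_patch.mean(), image_patch.std()])
    return np.concatenate([color_hist, shape_stats])

# Synthetic "polyp" vs "non-polyp" patches drawn from two intensity distributions.
patches = [rng.beta(2, 5, (32, 32)) for _ in range(100)] + \
          [rng.beta(5, 2, (32, 32)) for _ in range(100)]
labels = np.array([0] * 100 + [1] * 100)

X = np.array([extract_features(p) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In a real pipeline the CNN branch would replace `extract_features` and the SVM with learned convolutional filters and a trained classifier head.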
The Blurred Line between Form and Process: A Comparison of Stream Channel Classification Frameworks
Kasprak, Alan; Hough-Snee, Nate
2016-01-01
Stream classification provides a means to understand the diversity and distribution of channels and floodplains that occur across a landscape while identifying links between geomorphic form and process. Accordingly, stream classification is frequently employed as a watershed planning, management, and restoration tool. At the same time, there has been intense debate and criticism of particular frameworks, on the grounds that these frameworks classify stream reaches based largely on their physical form, rather than direct measurements of their component hydrogeomorphic processes. Despite this debate, and the ongoing use of stream classifications in watershed management, direct comparisons of channel classification frameworks are rare. Here we implement four stream classification frameworks and explore the degree to which each makes inferences about hydrogeomorphic process from channel form within the Middle Fork John Day Basin, a watershed of high conservation interest within the Columbia River Basin, U.S.A. We compare the results of the River Styles Framework, Natural Channel Classification, Rosgen Classification System, and a channel form-based statistical classification at 33 field-monitored sites. We found that the four frameworks consistently classified reach types into similar groups based on each reach or segment's dominant hydrogeomorphic elements. Where classified channel types diverged, differences could be attributed to (a) the spatial scale of input data used, (b) the requisite metrics and their order in completing a framework's decision tree, and/or (c) whether the framework attempts to classify current or historic channel form. Divergence in framework agreement was also observed at reaches where channel planform was decoupled from valley setting. Overall, the relative agreement between frameworks indicates that criticism of individual classifications for their use of form in grouping stream channels may be overstated.
These form-based criticisms may also ignore the geomorphic tenet that channel form reflects formative hydrogeomorphic processes across a given landscape.
CLASSIFICATION FRAMEWORK FOR COASTAL ECOSYSTEM RESPONSES TO AQUATIC STRESSORS
Many classification schemes have been developed to group ecosystems based on similar characteristics. To date, however, no single scheme has addressed coastal ecosystem responses to multiple stressors. We developed a classification framework for coastal ecosystems to improve the ...
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
Reduction from cost-sensitive ordinal ranking to weighted binary classification.
Lin, Hsuan-Tien; Li, Ling
2012-05-01
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
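The three-step reduction described above can be sketched in a few lines, assuming the absolute mislabeling cost (which yields unit example weights); the threshold-augmentation encoding below is one standard way to build the extended examples, not necessarily the paper's exact construction.

```python
# Sketch of the ordinal-ranking-to-binary reduction: each ordinal example
# becomes K-1 weighted binary examples, one per rank threshold, and the
# ranker is reconstructed by counting positive threshold answers.
import numpy as np

def extend_examples(X, y, num_ranks):
    """Step 1: turn each ordinal example into num_ranks-1 weighted binary
    examples. The k-th copy asks: "is the rank greater than k?"."""
    Xe, ye, we = [], [], []
    for x, rank in zip(X, y):
        for k in range(1, num_ranks):
            Xe.append(np.append(x, k))        # augment the input with the threshold id
            ye.append(1 if rank > k else -1)  # binary label for this threshold
            we.append(1.0)                    # absolute cost -> unit weights
    return np.array(Xe), np.array(ye), np.array(we)

# Step 2 (not shown): train any binary classifier on (Xe, ye) with weights we.

def rank_from_binary(clf_predict, x, num_ranks):
    """Step 3: reconstruct a rank by counting positive threshold answers."""
    votes = sum(clf_predict(np.append(x, k)) > 0 for k in range(1, num_ranks))
    return 1 + votes
```

The weights `we` would be non-uniform for a general cost matrix; that is where the cost-sensitivity of the framework enters.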
Towards an International Classification for Patient Safety: the conceptual framework.
Sherman, Heather; Castro, Gerard; Fletcher, Martin; Hatlie, Martin; Hibbert, Peter; Jakob, Robert; Koss, Richard; Lewalle, Pierre; Loeb, Jerod; Perneger, Thomas; Runciman, William; Thomson, Richard; Van Der Schaaf, Tjerk; Virtanen, Martti
2009-02-01
Global advances in patient safety have been hampered by the lack of a uniform classification of patient safety concepts. This is a significant barrier to developing strategies to reduce risk, performing evidence-based research and evaluating existing healthcare policies relevant to patient safety. Since 2005, the World Health Organization's World Alliance for Patient Safety has undertaken the Project to Develop an International Classification for Patient Safety (ICPS) to devise a classification which transforms patient safety information collected from disparate systems into a common format to facilitate aggregation, analysis and learning across disciplines, borders and time. A drafting group, comprised of experts from the fields of patient safety, classification theory, health informatics, consumer/patient advocacy, law and medicine, identified and defined key patient safety concepts and developed an internationally agreed conceptual framework for the ICPS based upon existing patient safety classifications. The conceptual framework was iteratively improved through technical expert meetings and a two-stage web-based modified Delphi survey of over 250 international experts. This work culminated in a conceptual framework consisting of ten high level classes: incident type, patient outcomes, patient characteristics, incident characteristics, contributing factors/hazards, organizational outcomes, detection, mitigating factors, ameliorating actions and actions taken to reduce risk. While the framework for the ICPS is in place, several challenges remain. Concepts need to be defined, guidance for using the classification needs to be provided, and further real-world testing needs to occur to progressively refine the ICPS to ensure it is fit for purpose.
Catchment Classification: Connecting Climate, Structure and Function
NASA Astrophysics Data System (ADS)
Sawicz, K. A.; Wagener, T.; Sivapalan, M.; Troch, P. A.; Carrillo, G. A.
2010-12-01
Hydrology does not yet possess a generally accepted catchment classification framework. Such a classification framework needs to: [1] give names to things, i.e. the main classification step, [2] permit transfer of information, i.e. regionalization of information, [3] permit development of generalizations, i.e. to develop new theory, and [4] provide a first order environmental change impact assessment, i.e., the hydrologic implications of climate, land use and land cover change. One strategy is to create a catchment classification framework based on the notion of catchment functions (partitioning, storage, and release). Results of an empirical study presented here connect climate and structure to catchment function (in the form of select hydrologic signatures), based on an analysis of over 300 US catchments. Initial results indicate a wide assortment of signature relationships with properties of climate, geology, and vegetation. The uncertainty in the different regionalized signatures varies widely, and therefore there is variability in the robustness of classifying ungauged basins. This research provides insight into the controls on the hydrologic behavior of a catchment, and enables a classification framework applicable to gauged and ungauged basins across the study domain. This study sheds light on what we can expect to achieve in mapping climate, structure and function in a top-down manner. Results of this study complement work done using a bottom-up physically-based modeling framework to generalize this approach (Carrillo et al., this session).
NASA Technical Reports Server (NTRS)
Jung, Jinha; Pasolli, Edoardo; Prasad, Saurabh; Tilton, James C.; Crawford, Melba M.
2014-01-01
Acquiring current, accurate land-use information is critical for monitoring and understanding the impact of anthropogenic activities on natural environments. Remote sensing technologies are of increasing importance because of their capability to acquire information for large areas in a timely manner, enabling decision makers to be more effective in complex environments. Although optical imagery has proven successful for land cover classification, active sensors, such as light detection and ranging (LiDAR), have distinct capabilities that can be exploited to improve classification results. However, LiDAR data have not been fully exploited for land cover classification. Moreover, spatial-spectral classification has recently gained significant attention, since classification accuracy can be improved by extracting additional information from neighboring pixels. Although spatial information has been widely used with spectral data, less attention has been given to LiDAR data. In this work, a new framework for land cover classification using discrete return LiDAR data is proposed. Pseudo-waveforms are generated from the LiDAR data and processed by hierarchical segmentation. Spatial features are extracted in a region-based way using a new unsupervised strategy for multiple pruning of the segmentation hierarchy. The proposed framework is validated experimentally on a real dataset acquired in an urban area. The proposed framework yields better classification results than approaches using basic LiDAR products such as the digital surface model and intensity image. Moreover, the proposed region-based feature extraction strategy results in improved classification accuracies compared with a more traditional window-based approach.
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. We address the image-based ethnicity identification problem in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates LDA analysis of the input face images at different scales, is proposed to further improve classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, equally balanced between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
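A minimal sketch of the multiscale-LDA ensemble with a product-rule combination might look as follows; the average-pooling "multiscale analysis", synthetic image data, and scale choices are illustrative assumptions rather than the authors' setup.

```python
# Sketch: one LDA model per image scale, combined with the product rule
# (per-scale class posteriors are multiplied, then argmax). Data is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def downscale(img, factor):
    """Crude multiscale analysis: average-pool the image by `factor`."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Synthetic two-class "face" data at full resolution.
imgs = np.concatenate([rng.normal(0.3, 0.1, (60, 16, 16)),
                       rng.normal(0.6, 0.1, (60, 16, 16))])
y = np.array([0] * 60 + [1] * 60)

# One LDA per scale.
scales = (1, 2, 4)
models = []
for s in scales:
    Xs = np.array([downscale(im, s).ravel() for im in imgs])
    models.append((s, LinearDiscriminantAnalysis().fit(Xs, y)))

def predict_product_rule(img):
    """Product rule: multiply the per-scale posteriors, pick the larger."""
    probs = np.ones(2)
    for s, lda in models:
        probs *= lda.predict_proba(downscale(img, s).ravel()[None, :])[0]
    return probs.argmax()

preds = np.array([predict_product_rule(im) for im in imgs])
train_accuracy = (preds == y).mean()
```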
Unsupervised active learning based on hierarchical graph-theoretic clustering.
Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve
2009-10-01
Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.
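A loose sketch of the general idea, clustering first and querying a human oracle only for one representative per cluster, is given below. Plain spectral clustering stands in for the paper's hierarchical dominant-set/spectral combination, and the data and query rule are invented.

```python
# Sketch of unsupervised active learning: cluster unlabeled data, ask the
# oracle to label one representative per cluster, propagate that label to
# the whole cluster. This dramatically reduces manual labeling workload.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Three well-separated groups of unlabeled samples.
X = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in ([0, 0], [4, 0], [2, 4])])
true = np.repeat([0, 1, 2], 40)   # ground truth, hidden from the learner

clusters = SpectralClustering(n_clusters=3, random_state=0,
                              affinity="rbf").fit_predict(X)

# "Active" step: query the oracle only for the sample nearest each cluster mean.
predicted = np.empty(len(X), dtype=int)
for c in range(3):
    members = np.where(clusters == c)[0]
    center = X[members].mean(axis=0)
    rep = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
    predicted[members] = true[rep]   # the oracle labels one representative

workload = 3 / len(X)               # fraction of samples manually labeled
accuracy = (predicted == true).mean()
```

The hierarchical aspect of the paper's framework would replace the single flat clustering with dominant-set clusters refined by spectral clustering.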
Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
NASA Astrophysics Data System (ADS)
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does little to remove irrelevant features from the resulting tree. This leads to a major problem in decision tree classification, over-fitting, resulting from noisy data and irrelevant features; in turn, over-fitting causes misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is an important issue in classification models; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant, non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction, producing non-correlated features, and the Decision Tree C4.5 algorithm for classification. In experiments on the UCI cervical cancer data set (858 instances, 36 attributes), we evaluated the performance of our framework in terms of accuracy, specificity, and precision. Experimental results show that the proposed framework enhances classification accuracy, achieving a 90.70% accuracy rate.
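The PCA-then-tree pipeline described above can be sketched as follows; scikit-learn's entropy-criterion decision tree stands in for C4.5 (a related but not identical algorithm), and a synthetic dataset replaces the UCI cervical cancer data.

```python
# Sketch: PCA removes correlated/irrelevant directions before the tree is
# grown, compared against a tree on the raw features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.datasets import make_classification

# 36 attributes, many redundant, mimicking the shape of the paper's dataset.
X, y = make_classification(n_samples=400, n_features=36, n_informative=6,
                           n_redundant=20, random_state=0)

pipeline = make_pipeline(PCA(n_components=6),
                         DecisionTreeClassifier(criterion="entropy",
                                                random_state=0))
score_with_pca = cross_val_score(pipeline, X, y, cv=5).mean()

baseline = DecisionTreeClassifier(criterion="entropy", random_state=0)
score_without = cross_val_score(baseline, X, y, cv=5).mean()
```

Whether PCA helps depends on how much of the class signal lies in the leading components; the paper reports an accuracy gain on its dataset.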
Towards a robust framework for catchment classification
NASA Astrophysics Data System (ADS)
Deshmukh, A.; Samal, A.; Singh, R.
2017-12-01
Classification of catchments based on various measures of similarity has emerged as an important technique for understanding regional-scale hydrologic behavior. Classification of catchment characteristics and/or streamflow response has been used to reveal which characteristics are more likely to explain the observed variability of hydrologic response. However, numerous algorithms for supervised or unsupervised classification are available, making it hard to identify the algorithm most suitable for the dataset at hand. Consequently, existing catchment classification studies vary significantly in the classification algorithms employed, with no previous attempt at understanding the degree of uncertainty in classification due to this algorithmic choice. This hinders the generalizability of interpretations related to hydrologic behavior. Our goal is to develop a protocol that can be followed while classifying hydrologic datasets. We focus on a framework for unsupervised classification and provide a step-by-step classification procedure. The steps include testing the clusterability of the original dataset prior to classification, feature selection, validation of the clustered data, and quantification of the similarity of two clusterings. We test several commonly available methods within this framework to understand the level of similarity of classification results across algorithms. We apply the proposed framework to recently developed datasets for India to analyze the extent to which catchment properties can explain observed catchment response. Our testing dataset includes watershed characteristics for over 200 watersheds, comprising both natural (physio-climatic) and socio-economic characteristics. This framework allows us to understand the controls on observed hydrologic variability across India.
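Two of the protocol's steps, internal validation of a clustering and quantifying the similarity of two clusterings, can be sketched as below. The silhouette score and adjusted Rand index used here are common choices, assumed rather than taken from the paper, and the catchment attributes are synthetic.

```python
# Sketch: validate one clustering internally (silhouette) and compare two
# clustering algorithms' results (adjusted Rand index).
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score, adjusted_rand_score

rng = np.random.default_rng(0)
# ~200 synthetic "catchments", 5 physio-climatic attributes, 3 latent groups.
centers = rng.normal(0, 5, (3, 5))
X = np.vstack([c + rng.normal(0, 0.5, (66, 5)) for c in centers])

labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_hc = AgglomerativeClustering(n_clusters=3).fit_predict(X)

quality = silhouette_score(X, labels_km)               # internal validation
agreement = adjusted_rand_score(labels_km, labels_hc)  # cross-algorithm similarity
```

Low `agreement` across algorithms would signal exactly the algorithmic-choice uncertainty the protocol is designed to expose.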
Slabbinck, Bram; Waegeman, Willem; Dawyndt, Peter; De Vos, Paul; De Baets, Bernard
2010-01-30
Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has limited resolution for discriminating bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms to FAME data or to 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Second, phylogenetic trees are inferred from 16S rRNA gene sequence analysis by the NJ and UPGMA methods. This second approach combines two different types of data: 16S rRNA gene sequence data is used for phylogenetic tree inference, and the corresponding binary tree splits are learned from FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are trained on the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish with a single, flat multi-class classification model. FAME-based bacterial species classification is thus successfully evaluated in a taxonomic framework. Although the proposed approach does not improve overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it is better at distinguishing species on which flat multi-class classification fails. Second, the hierarchical classification structure allows easy evaluation and visualization of the resolution of FAME data for discriminating bacterial species. In summary, phylogenetic learning allows us to situate and evaluate FAME-based bacterial species classification in a more informative context.
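The 'phylogenetic learning' idea, one binary classifier per split of a fixed taxonomy, can be sketched as follows. The two-node tree, the Gaussian "FAME profiles", and the species names are all invented for illustration.

```python
# Sketch: a fixed taxonomy is given; one binary Random Forest is trained per
# internal split, and a sample is routed from the root to a leaf species.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy taxonomy: root splits {A, B} vs {C}; the left node splits A vs B.
tree = {"root": (("A", "B"), ("C",)), "left": (("A",), ("B",))}

# Synthetic "FAME profiles": one Gaussian blob per species.
means = {"A": [0, 0], "B": [3, 0], "C": [0, 6]}
X = np.vstack([rng.normal(means[s], 0.4, (50, 2)) for s in "ABC"])
y = np.array(["A"] * 50 + ["B"] * 50 + ["C"] * 50)

def fit_node(species_left, species_right):
    """Train one binary classifier on the samples relevant to this split."""
    mask = np.isin(y, species_left + species_right)
    side = np.isin(y[mask], species_left).astype(int)
    return RandomForestClassifier(n_estimators=50, random_state=0).fit(X[mask], side)

root_clf = fit_node(*tree["root"])
left_clf = fit_node(*tree["left"])

def classify(x):
    """Route a sample down the taxonomy, one binary decision per node."""
    if root_clf.predict(x[None, :])[0] == 1:      # goes to {A, B}
        return "A" if left_clf.predict(x[None, :])[0] == 1 else "B"
    return "C"

train_acc = np.mean([classify(x) == label for x, label in zip(X, y)])
```

In the paper the tree comes from 16S rRNA analysis (NJ/UPGMA) rather than being hand-specified, and the per-node models are trained on FAME features.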
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. The method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction, using a regional feature extraction method that accounts for joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of the parameters involved, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework to build an ensemble classifier. The classification parameters of these base classifiers are optimized by maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on the left-out samples of bootstrap sampling. The classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects, and is compared with other commonly used classification algorithms, with favorable results. These results illustrate that methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
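The bagging construction described above can be sketched as below: base SVMs trained on bootstrap samples, combined by soft-vote averaging. The per-classifier ROC-based parameter optimization is omitted for brevity, and the data is synthetic rather than regional MRI/PET features.

```python
# Sketch of bagging SVM base classifiers: bootstrap sampling, soft voting.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def bagged_svm(X, y, n_estimators=15):
    """Train one SVM per bootstrap sample of the training set."""
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))   # bootstrap sample with replacement
        models.append(SVC(probability=True, random_state=0).fit(X[idx], y[idx]))
    return models

def predict(models, X):
    """Average the base classifiers' class-1 probabilities (soft voting)."""
    probs = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (probs > 0.5).astype(int)

ensemble = bagged_svm(X, y)
train_acc = np.mean(predict(ensemble, X) == y)
```

The paper additionally tunes each base classifier's parameters by maximizing the AUC on its out-of-bootstrap samples, which this sketch leaves out.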
A Systematic Approach to Subgroup Classification in Intellectual Disability
ERIC Educational Resources Information Center
Schalock, Robert L.; Luckasson, Ruth
2015-01-01
This article describes a systematic approach to subgroup classification based on a classification framework and sequential steps involved in the subgrouping process. The sequential steps are stating the purpose of the classification, identifying the classification elements, using relevant information, and using clearly stated and purposeful…
Hierarchy-associated semantic-rule inference framework for classifying indoor scenes
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei
2016-03-01
Classifying indoor scenes is challenging because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible; furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. EBA exploits the semantic hierarchy of images to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train a decision tree for classification, and the categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of variable backgrounds and improve classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases, based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. The proposed classification tool is based on a multi-content analysis (MCA) framework first developed to recognize the image content of computer screen shots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to classify digital mammograms automatically with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these weak classifiers are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one strong classifier show good accuracy with high true-positive rates. For the four categories, the results are: TP = 90.38%, TN = 67.88%, FP = 32.12%, and FN = 9.62%.
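The weak-to-strong AdaBoost step described above can be sketched generically as follows; scikit-learn's AdaBoostClassifier over depth-1 decision stumps is used, and synthetic data replaces the mammogram texture features.

```python
# Sketch of AdaBoost: decision stumps ("weak classifiers", error below 50%)
# are iteratively reweighted and combined into one "strong classifier".
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)

# The default base estimator of AdaBoostClassifier is a depth-1 tree (a stump).
strong = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

stump_acc = DecisionTreeClassifier(max_depth=1,
                                   random_state=0).fit(X, y).score(X, y)
strong_acc = strong.score(X, y)   # boosted ensemble of 100 stumps
```

A single stump is a weak classifier in the paper's sense; boosting drives the combined training error down far below any individual stump's.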
A Characteristics-Based Approach to Radioactive Waste Classification in Advanced Nuclear Fuel Cycles
NASA Astrophysics Data System (ADS)
Djokic, Denia
The radioactive waste classification system currently used in the United States relies primarily on a source-based framework. This has led to numerous issues, such as wastes that are not categorized by their intrinsic risk, or wastes that do not fall under any category within the framework and therefore lack a legal imperative for responsible management. Furthermore, if advanced fuel cycles were deployed in the United States, the shortcomings of the source-based classification system would be exacerbated: advanced fuel cycles implement processes such as the separation of used nuclear fuel, which introduce new waste streams of varying characteristics. To manage and dispose of these potential new wastes properly, it is imperative to develop a classification system that assigns an appropriate level of management to each type of waste based on its physical properties. This dissertation explores how the characteristics of wastes generated by potential future nuclear fuel cycles could be coupled with a characteristics-based classification framework. A static mass flow model developed under the Department of Energy's Fuel Cycle Research & Development program, the Fuel-cycle Integration and Tradeoffs (FIT) model, was used to calculate the composition of waste streams resulting from different nuclear fuel cycle choices: two modified open fuel cycle cases (recycle in a MOX reactor) and two continuous-recycle fast reactor cases (oxide- and metal-fueled fast reactors). The analysis focuses on the impact of waste heat load on waste classification practices, although future work could couple waste heat load with metrics of radiotoxicity and longevity. The value of separating heat-generating fission products and actinides in different fuel cycles, and how it could inform long- and short-term disposal management, is discussed.
It is shown that the benefits of reducing the short-term fission-product heat load of waste destined for geologic disposal are neglected under the current source-based radioactive waste classification system, and that it is useful to classify waste streams based on how favorable the impact of interim storage is on increasing repository capacity. The need for a more diverse set of waste classes is discussed, and it is shown that the characteristics-based IAEA classification guidelines could accommodate wastes created from advanced fuel cycles more comprehensively than the U.S. classification framework.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
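The abstract's exact CLAss-specific Subspace Kernel construction is not given here, but the core idea of approximating a full kernel matrix from a selected subset can be sketched with a generic Nyström-style approximation (the RBF kernel, gamma value, and landmark choice below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, landmarks, gamma=0.5):
    """Low-rank approximation K ~ C W^+ C^T built from a landmark subset,
    avoiding operations on the full n x n kernel matrix during training."""
    C = rbf_kernel(X, X[landmarks], gamma)             # n x m cross-kernel
    W = rbf_kernel(X[landmarks], X[landmarks], gamma)  # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K_full = rbf_kernel(X, X)
K_hat = nystrom_approx(X, np.arange(0, 50, 2))  # every other point as landmark
err = np.linalg.norm(K_full - K_hat) / np.linalg.norm(K_full)
```

With half the points as landmarks the relative error is small; the storage for `C` and `W` grows with the subset size `m`, not with `n**2`.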
NASA Astrophysics Data System (ADS)
Liu, Tao; Abd-Elrahman, Amr
2018-05-01
A deep convolutional neural network (DCNN) requires massive training datasets to trigger its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of an Unmanned Aerial Systems (UAS) orthoimage, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of the DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. Ten-fold cross-validation results show the mean overall classification accuracy increasing substantially, from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performances of the support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of the DCNN over traditional classifiers is more obvious within the proposed multi-view OBIA framework than within the traditional OBIA framework.
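The voting procedure that fuses per-view predictions into a final label can be sketched as a simple majority vote (a minimal stand-in; the paper's exact voting rule and the DCNN itself are not reproduced here):

```python
import numpy as np

def multiview_vote(view_predictions):
    """Majority vote across per-view class predictions for each object.

    view_predictions: (n_views, n_objects) integer labels. Returns the most
    frequent label per object (ties resolved toward the lowest label).
    """
    P = np.asarray(view_predictions)
    n_classes = P.max() + 1
    # counts[:, j] is the per-class vote histogram for object j.
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, P)
    return counts.argmax(axis=0)

# Three views vote on three objects; a single dissenting view is outvoted.
final = multiview_vote([[0, 1, 2],
                        [0, 1, 1],
                        [1, 1, 2]])
```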
Yi, Chucai; Tian, Yingli
2012-09-01
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surfaces, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text strokes with color assignment and filter out background interference. Further, we design a robust string fragment classification method using Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed text localization framework is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on the respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
A Unified Classification Framework for FP, DP and CP Data at X-Band in Southern China
NASA Astrophysics Data System (ADS)
Xie, Lei; Zhang, Hong; Li, Hongzhong; Wang, Chao
2015-04-01
The main objective of this paper is to introduce a unified framework for crop classification in Southern China using data in fully polarimetric (FP), dual-pol (DP) and compact polarimetric (CP) modes. TerraSAR-X data acquired over the Leizhou Peninsula, South China are used in our experiments. The study site involves four main crops (rice, banana, sugarcane and eucalyptus). By exploring the similarities between data in these three modes, a knowledge-based characteristic space is created and the unified framework is presented. The overall classification accuracies are about 95% for data in the FP and coherent HH/VV modes, and about 91% in the CP modes, which suggests that the proposed classification scheme is effective and promising. Compared with the Wishart Maximum Likelihood (ML) classifier, the proposed method exhibits higher classification accuracy.
Meta-learning framework applied in bioinformatics inference system design.
Arredondo, Tomás; Ormazábal, Wladimir
2015-01-01
This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameters. The obtained inference systems were also contrasted with other standard classification methods, and accurate prediction capabilities were observed.
Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas
NASA Astrophysics Data System (ADS)
Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.
2016-06-01
We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger-scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments, the hierarchical framework improves the overall accuracies by 2.3% on the point-based level and by 3.0% on the segment-based level, compared to a purely point-based classification.
Design and Implementation of a River Classification Assistant Management System
NASA Astrophysics Data System (ADS)
Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan
2018-03-01
In an earlier publication, we proposed a new Decision Classifier (DCF) for classifying Chinese rivers based on their structures. To expand, enhance and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Based on the ArcEngine and ArcServer platforms, this system implements functions such as data management, river network extraction, river classification, and results publication, combining a Client/Server with a Browser/Server framework.
Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.
2017-01-01
The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities, compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding-window approach was developed that restricts the training data to similar temporal and environmental conditions, improving the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding-window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments.
Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
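The sliding-window idea can be sketched as follows: each observation is classified using only training samples from a nearby time window, so that environmentally driven phenotype drift stays inside the window. The example below is a minimal sketch on synthetic data, with a nearest-centroid classifier standing in for the authors' random forest:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Per-class mean feature vectors (a simple stand-in classifier).
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def sliding_window_classify(t_train, X_train, y_train, t_test, X_test, half_width):
    """Train only on samples whose timestamps fall near each test sample."""
    preds = np.empty(len(t_test), dtype=y_train.dtype)
    for i, t in enumerate(t_test):
        mask = np.abs(t_train - t) <= half_width
        model = nearest_centroid_fit(X_train[mask], y_train[mask])
        preds[i] = nearest_centroid_predict(model, X_test[i:i + 1])[0]
    return preds

# Synthetic drift: the trait of both species rises over time, so windowed
# training keeps the two species separable at every time point.
t_train = np.repeat(np.arange(10.0), 2)
y_train = np.tile([0, 1], 10)
X_train = (t_train + y_train).reshape(-1, 1)
preds = sliding_window_classify(t_train, X_train, y_train,
                                np.array([2.0, 7.0]),
                                np.array([[2.0], [8.0]]), 1.0)
```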
Guo, Lei; Abbosh, Amin
2018-05-01
To give stroke patients any chance of survival, the stroke type should be classified so that medication can be given within a few hours of the onset of symptoms. In this paper, a microwave-based stroke localization and classification framework is proposed. It is based on microwave tomography, k-means clustering, and a support vector machine (SVM) method. The dielectric profile of the brain is first calculated using the Born iterative method, and the amplitude of the dielectric profile is then taken as the input to k-means clustering. The resulting clusters are used as the feature vectors for constructing and testing the SVM. A database of MRI-derived realistic head phantoms at different signal-to-noise ratios is used in the classification procedure. The performance of the proposed framework is evaluated using the receiver operating characteristic (ROC) curve. The results based on a two-dimensional framework show that 88% classification accuracy, with a sensitivity of 91% and a specificity of 87%, can be achieved. Bioelectromagnetics. 39:312-324, 2018. © 2018 Wiley Periodicals, Inc.
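The k-means clustering stage of such a pipeline can be illustrated with plain Lloyd's iterations on synthetic 2-D data (the tomography and SVM stages are omitted, and the deterministic initialisation is a simplification assumed here for reproducibility):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with a deterministic spread-out initialisation."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centroids = X[idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic blobs are recovered as two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels, cents = kmeans(X, 2)
```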
Li, Yachun; Charalampaki, Patra; Liu, Yong; Yang, Guang-Zhong; Giannarou, Stamatia
2018-06-13
Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures. The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context-aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion is proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures, data representations and video segmentation methods. We demonstrate the application of the proposed deep learning framework to classify glioblastoma and meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving an accuracy of 99.49%. This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify glioblastoma and meningioma tumours.
The performance evaluation analysis shows the potential clinical value of the technique.
ERIC Educational Resources Information Center
Amershi, Saleema; Conati, Cristina
2009-01-01
In this paper, we present a data-based user modeling framework that uses both unsupervised and supervised classification to build student models for exploratory learning environments. We apply the framework to build student models for two different learning environments and using two different data sources (logged interface and eye-tracking data).…
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
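The classifier-chain decomposition the mixture builds on, P(Y1,…,Yd|X) = ∏j P(Yj|X, Y1..Yj-1), can be sketched with hand-picked toy conditionals (the probabilities below are illustrative assumptions, not learned models):

```python
def chain_joint_prob(x, y, conditionals):
    """P(y_1..y_d | x) as a product of per-label conditionals.

    conditionals[j](x, y_prev) returns P(Y_j = 1 | x, y_1..y_{j-1}), so the
    chain decomposition is evaluated left to right.
    """
    p, prev = 1.0, []
    for j, cond in enumerate(conditionals):
        p1 = cond(x, prev)
        p *= p1 if y[j] == 1 else 1.0 - p1
        prev.append(y[j])
    return p

# Hand-picked toy conditionals: Y1 depends on x, Y2 on the realised Y1,
# capturing an output-output dependency a per-label model would miss.
c1 = lambda x, prev: 0.8 if x > 0 else 0.2
c2 = lambda x, prev: 0.9 if prev[0] == 1 else 0.1
probs = {yy: chain_joint_prob(1.0, yy, [c1, c2])
         for yy in [(0, 0), (0, 1), (1, 0), (1, 1)]}
```

By construction the four joint probabilities sum to one; a mixture-of-experts version would average several such chains with learned gating weights.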
A Framework for Concept-Based Digital Course Libraries
ERIC Educational Resources Information Center
Dicheva, Darina; Dichev, Christo
2004-01-01
This article presents a general framework for building concept-based digital course libraries. The framework is based on the idea of using a conceptual structure that represents a subject domain ontology for classification of the course library content. Two aspects, domain conceptualization, which supports findability and ontologies, which support…
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in sparse-representation-based colorization, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) built on this framework. The algorithm achieves a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian Pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of the visual gray-scale image, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
A management-oriented classification of pinyon-juniper woodlands of the Great Basin
Neil E. West; Robin J. Tausch; Paul T. Tueller
1998-01-01
A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which are essential for model comparison/clustering. We employ the space of Koopman model forms equipped with a distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
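A minimal instance of identifying a linear Koopman-style model form directly from data is the least-squares (DMD-like) fit below; the rotation system is a toy example assumed here, and the recovered eigenvalues play the role of spectral features for classification:

```python
import numpy as np

def dmd_operator(X):
    """Least-squares linear operator A with x_{t+1} ~ A x_t (DMD-style).

    X: (n_states, n_snapshots). The eigenvalues of A (moduli and phases)
    serve as simple spectral features for downstream classification.
    """
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)
    return A, np.linalg.eigvals(A)

# Toy generative model: a pure planar rotation, recovered exactly from data;
# its eigenvalues lie on the unit circle (no growth or decay).
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
traj = np.stack([np.linalg.matrix_power(R, t) @ np.array([1.0, 0.0])
                 for t in range(20)], axis=1)
A, eigs = dmd_operator(traj)
```

Eigenvalue moduli above/below one would indicate growing or decaying modes, which is the kind of invariant a distance between model forms can compare.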
Classification Framework for ICT-Based Learning Technologies for Disabled People
ERIC Educational Resources Information Center
Hersh, Marion
2017-01-01
The paper presents the first systematic approach to the classification of inclusive information and communication technologies (ICT)-based learning technologies and ICT-based learning technologies for disabled people which covers both assistive and general learning technologies, is valid for all disabled people and considers the full range of…
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
Kianmehr, Keivan; Alhajj, Reda
2008-09-01
In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machines (SVM). The goal is to benefit from the advantages of both: the discriminative knowledge represented by class association rules, and the classification power of the SVM algorithm, to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated based on the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors present a highly qualified source of discrimination knowledge that can substantially impact the prediction power of SVM and associative classification techniques. They also provide users with more convenience in terms of understandability and interpretability. We have used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. Then, we describe how this combination will provide biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model.
From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM algorithm. In the context of applicability, according to the results obtained from gene expression analysis, we can conclude that the CARSVM system can be utilized in a variety of real world applications with some adjustments.
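The mapping from class association rules to rule-based feature vectors can be sketched as a simple antecedent-match encoding (the rule representation below is an assumption for illustration; the paper's discriminative rule weighting is not reproduced):

```python
def rule_feature_vector(sample_items, rules):
    """Encode a sample as a binary vector of class-association-rule matches.

    Each rule is a frozenset of items forming its antecedent; feature j is 1
    when the sample satisfies rule j's antecedent. The resulting vectors,
    rather than raw attributes, would be fed to the SVM.
    """
    s = set(sample_items)
    return [1 if rule <= s else 0 for rule in rules]

rules = [frozenset({"a", "b"}), frozenset({"c"}), frozenset({"b", "d"})]
vec = rule_feature_vector({"a", "b", "c"}, rules)  # → [1, 1, 0]
```

Because each feature corresponds to a human-readable rule, a trained SVM's weights can be traced back to interpretable conditions.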
Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification
NASA Astrophysics Data System (ADS)
Wang, X. P.; Hu, Y.; Chen, J.
2018-04-01
Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method, which contains both an adjacency graph and a similarity graph. We propose to construct the similarity graph using similarity probabilities, which exploit the label similarity among examples probabilistically. The adjacency graph is utilized as in common manifold learning methods, which effectively improve the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites both the adjacency graph and the similarity graph, produces superior classification results to other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
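A minimal sketch of label propagation on a single affinity graph is given below (the paper's couple-graph construction combines two such graphs; the normalised propagation scheme here follows the standard Zhou et al. formulation and is an illustrative stand-in):

```python
import numpy as np

def label_propagation(W, labels, alpha=0.9, n_iter=200):
    """Semi-supervised label propagation on a weighted affinity graph.

    W: symmetric (n, n) affinity matrix; labels: length-n integers with -1
    marking unlabelled nodes. Iterates F <- alpha * S @ F + (1 - alpha) * Y,
    where S is the symmetrically normalised affinity.
    """
    labels = np.asarray(labels)
    classes = sorted(set(labels[labels >= 0].tolist()))
    Y = np.zeros((len(labels), len(classes)))
    for i, l in enumerate(labels):
        if l >= 0:
            Y[i, classes.index(l)] = 1.0
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return np.array(classes)[F.argmax(axis=1)]

# Path graph 0-1-2-3-4-5 with only the two endpoints labelled: labels
# diffuse inward, splitting the chain between the two seeds.
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0
pred = label_propagation(W, [0, -1, -1, -1, -1, 1])
```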
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Mcleod, R. G.; Zobrist, A. L.; Johnson, H. B.
1979-01-01
Procedures for adjustment of brightness values between frames and the digital mosaicking of Landsat frames to standard map projections are developed for providing a continuous data base for multispectral thematic classification. A combination of local terrain variations in the Californian deserts and a global sampling strategy based on transects provided the framework for accurate classification throughout the entire geographic region.
Advances in the Application of Decision Theory to Test-Based Decision Making.
ERIC Educational Resources Information Center
van der Linden, Wim J.
This paper reviews recent research in the Netherlands on the application of decision theory to test-based decision making about personnel selection and student placement. The review is based on an earlier model proposed for the classification of decision problems, and emphasizes an empirical Bayesian framework. Classification decisions with…
Kiranyaz, Serkan; Mäkinen, Toni; Gabbouj, Moncef
2012-10-01
In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve a high classification performance over dynamic audio and video repositories. The proposed framework adopts a "Divide and Conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through the incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to full-scale re-training or re-configuration. The CNBC framework is therefore particularly designed for dynamically varying databases where no conventional static classifier can adapt to such changes. In short, it is an entirely novel topology and an unprecedented approach for dynamic, content/data-adaptive and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among the individual audio classes. Experiments demonstrate a high classification accuracy (above 90%) and the efficiency of the proposed framework over large and dynamic audio databases. Copyright © 2012 Elsevier Ltd. All rights reserved.
Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan
2016-01-01
Clinical time-series data acquired from electronic health records (EHR) are liable to temporal complexities such as irregular observations, missing values and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used in missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal pattern based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, obtaining an average classification accuracy of 92.59% for the hepatitis and 91.69% for the thrombosis dataset. The obtained classification results prove the efficiency of the proposed framework in terms of its improved classification accuracy.
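The paper's improved double exponential smoothing is not specified here; standard Holt smoothing (level plus trend) illustrates the kind of forecast used for missing value imputation in the TDA step:

```python
def double_exponential_forecast(series, alpha=0.5, beta=0.5, horizon=1):
    """Holt's double exponential smoothing (level + trend).

    Returns `horizon` forecasts past the end of `series`; with a forecast in
    hand, a missing observation can be imputed by its predicted value.
    """
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        # Smooth the level toward the new observation, then update the trend.
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# A perfectly linear series is continued exactly.
forecast = double_exponential_forecast([1.0, 2.0, 3.0, 4.0], horizon=2)  # → [5.0, 6.0]
```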
NASA Astrophysics Data System (ADS)
Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya
2018-04-01
In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve the spatial details for the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, considering their complementary characteristics. To better characterize complex scenes of remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantizes both the landscape composition and spatial configuration by the use of the initial classification results. In addition, a novel tri-training strategy is proposed to solve the over-smoothing effect of relearning by means of automatic selection of training samples with low classification certainties, which are always distributed in or near the edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic output of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework, in terms of both edge and non-edge accuracy.
Heiens, R A; Pleshko, L P
1997-01-01
The present article applies the customer loyalty classification framework developed by Dick and Basu (1994) to the health care industry. Based on a two factor classification, consisting of repeat patronage and relative attitude, four categories of patient loyalty are proposed and examined, including true loyalty, latent loyalty, spurious loyalty, and no loyalty. Data is collected and the four patient loyalty categories are profiled and compared on the basis of perceived risk, product class importance, provider decision importance, provider awareness, provider consideration, number of providers visited, and self-reported loyalty.
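The Dick and Basu two-factor grid can be sketched as a small lookup over the two dimensions; the numeric cutoffs below are illustrative assumptions, not values from the study:

```python
def loyalty_category(repeat_patronage, relative_attitude,
                     patronage_cutoff=0.5, attitude_cutoff=0.5):
    """Dick and Basu's grid: high/low repeat patronage crossed with
    high/low relative attitude yields four loyalty categories."""
    high_p = repeat_patronage >= patronage_cutoff
    high_a = relative_attitude >= attitude_cutoff
    if high_p and high_a:
        return "true loyalty"
    if high_a:
        return "latent loyalty"    # favourable attitude, low patronage
    if high_p:
        return "spurious loyalty"  # habitual patronage, weak attitude
    return "no loyalty"

category = loyalty_category(0.8, 0.9)  # → "true loyalty"
```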
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.
Kumar, Shiu; Mamun, Kabir; Sharma, Alok
2017-12-01
Classification of electroencephalography (EEG) signals for motor imagery based brain computer interface (MI-BCI) is an exigent task and common spatial pattern (CSP) has been extensively explored for this purpose. In this work, we focused on developing a new framework for classification of EEG signals for MI-BCI. We propose a single band CSP framework for MI-BCI that utilizes the concept of tangent space mapping (TSM) in the manifold of covariance matrices. The proposed method is named CSP-TSM. Spatial filtering is performed on the bandpass filtered MI EEG signal. Riemannian tangent space is utilized for extracting features from the spatially filtered signal. The TSM features are then fused with the CSP variance based features and feature selection is performed using Lasso. Linear discriminant analysis (LDA) is then applied to the selected features and finally classification is done using a support vector machine (SVM) classifier. The proposed framework gives improved performance for MI EEG signal classification in comparison with several competing methods. Experiments conducted show that the proposed framework reduces the overall classification error rate for MI-BCI by 3.16%, 5.10% and 1.70% (for BCI Competition III dataset IVa, BCI Competition IV Dataset I and BCI Competition IV Dataset IIb, respectively) compared to the conventional CSP method under the same experimental settings. The proposed CSP-TSM method produces promising results when compared with several competing methods in this paper. In addition, the computational complexity is lower than that of the TSM method. Our proposed CSP-TSM framework can be potentially used for developing improved MI-BCI systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
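The tangent space mapping step can be sketched with plain linear algebra: each trial covariance matrix is whitened by a reference matrix and mapped through the matrix logarithm, yielding a Euclidean feature vector. A simplified illustration on synthetic trials, using the arithmetic mean as the reference point for brevity (the paper maps at the Riemannian mean and fuses the result with CSP variance features before Lasso/LDA/SVM):

```python
import numpy as np

def sqrtm_inv(C):
    """Inverse matrix square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def logm_spd(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def tangent_space(covs, C_ref):
    """Map SPD covariance matrices to tangent vectors at C_ref
    (upper-triangular part of log(C_ref^-1/2 C C_ref^-1/2))."""
    P = sqrtm_inv(C_ref)
    iu = np.triu_indices(covs.shape[1])
    return np.array([logm_spd(P @ C @ P)[iu] for C in covs])

rng = np.random.default_rng(1)
n_trials, n_ch, n_samp = 20, 4, 200
X = rng.normal(size=(n_trials, n_ch, n_samp))       # synthetic EEG trials
covs = np.einsum('tcs,tds->tcd', X, X) / n_samp     # trial covariance matrices
C_ref = covs.mean(axis=0)                           # simple reference point
feats = tangent_space(covs, C_ref)                  # (20, 10) feature vectors
```

The resulting vectors live in a Euclidean space, so any standard feature selector and classifier can be applied downstream.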
Saha, Monjoy; Chakraborty, Chandan
2018-05-01
We present an efficient deep learning framework for identifying, segmenting, and classifying cell membranes and nuclei from human epidermal growth factor receptor-2 (HER2)-stained breast cancer images with minimal user intervention. This is a long-standing issue for pathologists because the manual quantification of HER2 is error-prone, costly, and time-consuming. Hence, we propose a deep learning-based HER2 deep neural network (Her2Net) to solve this issue. The convolutional and deconvolutional parts of the proposed Her2Net framework consisted mainly of multiple convolution layers, max-pooling layers, spatial pyramid pooling layers, deconvolution layers, up-sampling layers, and trapezoidal long short-term memory (TLSTM). A fully connected layer and a softmax layer were also used for classification and error estimation. Finally, HER2 scores were calculated based on the classification results. The main contribution of our proposed Her2Net framework includes the implementation of TLSTM and a deep learning framework for cell membrane and nucleus detection, segmentation, and classification and HER2 scoring. Our proposed Her2Net achieved 96.64% precision, 96.79% recall, 96.71% F-score, 93.08% negative predictive value, 98.33% accuracy, and a 6.84% false-positive rate. Our results demonstrate the high accuracy and wide applicability of the proposed Her2Net in the context of HER2 scoring for breast cancer evaluation.
A Fast, Open EEG Classification Framework Based on Feature Compression and Channel Ranking
Han, Jiuqi; Zhao, Yuwei; Sun, Hongji; Chen, Jiayun; Ke, Ang; Xu, Gesen; Zhang, Hualiang; Zhou, Jin; Wang, Changyong
2018-01-01
Superior feature extraction, channel selection and classification methods are essential for designing electroencephalography (EEG) classification frameworks. However, the performance of most frameworks is limited by their improper channel selection methods and overly specialized designs, leading to high computational complexity, non-convergent procedures and limited extensibility. In this paper, to remedy these drawbacks, we propose a fast, open EEG classification framework centralized by EEG feature compression, low-dimensional representation, and convergent iterative channel ranking. First, to reduce the complexity, we use data clustering to compress the EEG features channel-wise, packing the high-dimensional EEG signals and endowing them with numerical signatures. Second, to provide easy access to alternative superior methods, we structurally represent each EEG trial in a feature vector with its corresponding numerical signature. Thus, the recorded signals of many trials shrink to a low-dimensional structural matrix compatible with most pattern recognition methods. Third, a series of effective iterative feature selection approaches with theoretical convergence is introduced to rank the EEG channels and remove redundant ones, further accelerating the EEG classification process and ensuring its stability. Finally, a classical linear discriminant analysis (LDA) model is employed to classify a single EEG trial with selected channels. Experimental results on two real-world brain-computer interface (BCI) competition datasets demonstrate the promising performance of the proposed framework over state-of-the-art methods. PMID:29713262
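The channel-wise compression idea can be sketched as follows: each channel's feature vector is replaced by the id of its nearest cluster centroid, so a trial shrinks to one "numerical signature" per channel. A minimal illustration with synthetic data, using k-means ids as the signatures (the paper's channel-ranking and convergence machinery is omitted):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_ch, n_feat = 60, 8, 50
X = rng.normal(size=(n_trials, n_ch, n_feat))   # per-channel feature vectors
y = rng.integers(0, 2, n_trials)
X[y == 1] += 0.5                                 # make the classes separable

# channel-wise compression: cluster each channel's features across trials
# and keep only the cluster id ("numerical signature") per channel
k = 4
compressed = np.empty((n_trials, n_ch))
for ch in range(n_ch):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[:, ch, :])
    compressed[:, ch] = km.labels_

# each trial is now a low-dimensional vector of signatures; classify with LDA
clf = LinearDiscriminantAnalysis().fit(compressed, y)
acc = clf.score(compressed, y)
```

The compressed matrix is (trials × channels) instead of (trials × channels × features), which is what makes the downstream channel ranking and classification cheap.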
Dijemeni, Esuabom; D'Amone, Gabriele; Gbati, Israel
2017-12-01
Drug-induced sedation endoscopy (DISE) classification systems have been used to assess anatomical findings of upper airway obstruction, to decide and plan surgical treatment, and to predict surgical treatment outcomes in obstructive sleep apnoea (OSA) management. The first objective is to identify whether there is a universally accepted DISE grading and classification system for analysing DISE findings. The second objective is to identify whether there is one DISE grading and classification treatment planning framework for deciding appropriate surgical treatment for OSA. The third objective is to identify whether there is one DISE grading and classification treatment outcome framework for determining the likelihood of success of a given OSA surgical intervention. A systematic review was performed to identify new and significantly modified DISE classification systems: concept, advantages and disadvantages. Fourteen studies proposing a new DISE classification system and three studies proposing a significantly modified DISE classification were identified. None of the studies were based on randomised control trials. DISE is an objective method for visualising upper airway obstruction. However, the classification and assessment of clinical findings based on DISE remain highly subjective, and the growing number of DISE classification systems creates increasing divergence in surgical treatment planning and treatment outcome. Further research on a universally accepted, objective DISE assessment is critically needed.
Classification framework for partially observed dynamical systems
NASA Astrophysics Data System (ADS)
Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira
2017-04-01
We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much simpler than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.
Review article: A systematic review of emergency department incident classification frameworks.
Murray, Matthew; McCarthy, Sally
2018-06-01
As in any part of the hospital system, safety incidents can occur in the ED. These incidents arguably have a distinct character, as the ED involves unscheduled flows of urgent patients who require disparate services. To aid understanding of safety issues and support risk management of the ED, a comparison of published ED specific incident classification frameworks was performed. A review of emergency medicine, health management and general medical publications, using Ovid SP to interrogate Medline (1976-2016) was undertaken to identify any type of taxonomy or classification-like framework for ED related incidents. These frameworks were then analysed and compared. The review identified 17 publications containing an incident classification framework. Comparison of factors and themes making up the classification constituent elements revealed some commonality, but no overall consistency, nor evolution towards an ideal framework. Inconsistency arises from differences in the evidential basis and design methodology of classifications, with design itself being an inherently subjective process. It was not possible to identify an 'ideal' incident classification framework for ED risk management, and there is significant variation in the selection of categories used by frameworks. The variation in classification could risk an unbalanced emphasis in findings through application of a particular framework. Design of an ED specific, ideal incident classification framework should be informed by a much wider range of theories of how organisations and systems work, in addition to clinical and human factors. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
ERIC Educational Resources Information Center
Zero to Three: National Center for Infants, Toddlers and Families, Washington, DC.
The diagnostic framework presented in this manual seeks to address the need for a systematic, multi-disciplinary, developmentally based approach to the classification of mental health and developmental difficulties in the first 4 years of life. An introduction discusses clinical approaches to assessment and diagnosis, gives an overview of the…
ERIC Educational Resources Information Center
Wieder, Serena, Ed.
The diagnostic framework presented in this manual seeks to address the need for a systematic, multidisciplinary, developmentally based approach to the classification of mental health and developmental difficulties in the first 4 years of life. An introduction discusses clinical approaches to assessment and diagnosis, gives an overview of the…
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
A probabilistic watershed-based framework was developed to encompass wadeable streams within all three ecoregions of West Virginia, with the exclusion noted below. In Phase I of the project (year 2001), we developed and applied a probabilistic watershed-based sampling framework ...
DOT National Transportation Integrated Search
2016-07-31
This report presents a novel framework for promptly assessing the probability of barge-bridge : collision damage of piers based on probabilistic-based classification through machine learning. The main : idea of the presented framework is to divide th...
Scheme, Erik J; Englehart, Kevin B
2013-07-01
When controlling a powered upper limb prosthesis it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis (LDA) classification-based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R² > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework, the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
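The Fitts' Law analysis described above reduces to fitting movement time against the Shannon index of difficulty, ID = log2(D/W + 1), and checking the coefficient of determination of the fit. A small sketch with illustrative numbers (not data from the study):

```python
import numpy as np

def fitts_analysis(distances, widths, movement_times):
    """Fit MT = a + b*ID with the Shannon formulation ID = log2(D/W + 1).
    Returns intercept a, slope b, R^2 of the fit, and a simple
    throughput estimate (mean of ID/MT, in bits per second)."""
    ID = np.log2(np.asarray(distances, float) / np.asarray(widths, float) + 1.0)
    MT = np.asarray(movement_times, float)
    b, a = np.polyfit(ID, MT, 1)           # slope first, then intercept
    pred = a + b * ID
    ss_res = np.sum((MT - pred) ** 2)
    ss_tot = np.sum((MT - MT.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    throughput = np.mean(ID / MT)
    return a, b, r2, throughput

D = [64, 128, 256, 512]          # target distances (illustrative units)
W = [16, 16, 16, 16]             # target widths
MT = [0.6, 0.8, 1.0, 1.2]        # movement times in seconds (illustrative)
a, b, r2, tp = fitts_analysis(D, W, MT)
```

A control scheme "obeys Fitts' Law" in this framework when R² of the linear fit is high, as reported above for both schemes.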
Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Feng, Cong; Cui, Mingjian
Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
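The classify-then-forecast logic can be sketched as follows: an SVM maps sky-image features to a weather pattern, and the pattern selects which forecasting model to apply. A toy illustration with synthetic features and hypothetical stand-in forecasters (the actual framework trains separate machine learning models per pattern on NREL data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# synthetic sky-image features (e.g. cloud fraction, mean brightness)
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)          # 0 = clear, 1 = cloudy (toy labeling rule)
clf = SVC(kernel='rbf').fit(X, y)        # SVM weather-pattern classifier

# hypothetical pattern-specific forecasters standing in for trained ML models
forecasters = {0: lambda ghi: ghi,       # persistence under clear sky
               1: lambda ghi: 0.7 * ghi} # damped forecast under cloud

def forecast(features, last_ghi):
    """Pick the forecaster for the classified pattern, then forecast GHI."""
    pattern = int(clf.predict(features.reshape(1, -1))[0])
    return forecasters[pattern](last_ghi)

def nmae(pred, obs, capacity):
    """Normalized mean absolute error used to benchmark the framework."""
    return np.mean(np.abs(pred - obs)) / capacity

ghi_hat = forecast(np.array([0.8, 0.3]), last_ghi=600.0)  # W/m^2, illustrative
```

The reported 16%/25% improvements over persistence are in terms of metrics of this normalized form.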
Waring, R; Knight, R
2013-01-01
Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of theoretically differing classification systems have been proposed based on either an aetiological (medical) approach, a descriptive-linguistic approach or a processing approach. To describe and review the supporting evidence, and to provide a critical evaluation of the current childhood SSD classification systems. Descriptions of the major specific approaches to classification are reviewed and research papers supporting the reliability and validity of the systems are evaluated. Three specific paediatric SSD classification systems are identified as potentially useful in classifying children with SSD into homogeneous subgroups: the aetiologic-based Speech Disorders Classification System, the descriptive-linguistic Differential Diagnosis system, and the processing-based Psycholinguistic Framework. The Differential Diagnosis system has a growing body of empirical support from clinical population studies, across-language error pattern studies and treatment efficacy studies. The Speech Disorders Classification System is currently a research tool with eight proposed subgroups. The Psycholinguistic Framework is a potential bridge to linking cause and surface-level speech errors. There is a need for a universally agreed-upon classification system that is useful to clinicians and researchers. The resulting classification system needs to be robust, reliable and valid. A universal classification system would allow for improved tailoring of treatments to subgroups of SSD which may, in turn, lead to improved treatment efficacy. © 2012 Royal College of Speech and Language Therapists.
Cunningham, Barbara Jane; Hidecker, Mary Jo Cooley; Thomas-Stonell, Nancy; Rosenbaum, Peter
2018-05-01
In this paper, we present our experiences - both successes and challenges - in implementing evidence-based classification tools into clinical practice. We also make recommendations for others wanting to promote the uptake and application of new research-based assessment tools. We first describe classification systems and the benefits of using them in both research and practice. We then present a theoretical framework from Implementation Science to report strategies we have used to implement two research-based classification tools into practice. We also illustrate some of the challenges we have encountered by reporting results from an online survey investigating 58 Speech-language Pathologists' knowledge and use of the Communication Function Classification System (CFCS), a new tool to classify children's functional communication skills. We offer recommendations for researchers wanting to promote the uptake of new tools in clinical practice. Specifically, we identify structural, organizational, innovation, practitioner, and patient-related factors that we recommend researchers address in the design of implementation interventions. Roles and responsibilities of both researchers and clinicians in making implementation science a success are presented. Implications for rehabilitation: Promoting uptake of new and evidence-based tools into clinical practice is challenging. Implementation science can help researchers to close the knowledge-to-practice gap. Using concrete examples, we discuss our experiences in implementing evidence-based classification tools into practice within a theoretical framework. Recommendations are provided for researchers wanting to implement new tools in clinical practice. Implications for researchers and clinicians are presented.
A Novel Framework for Remote Sensing Image Scene Classification
NASA Astrophysics Data System (ADS)
Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.
2018-04-01
High-resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of the ground objects and their spatial distribution patterns than low spatial resolution images. Scene classification can bridge the gap between low-level features and high-level semantics. It can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification. The framework combines a convolutional neural network (CNN) and XGBoost, utilizing the CNN as a feature extractor and XGBoost as a classifier. The framework is evaluated on two different HRRS image datasets: the UC-Merced dataset and the NWPU-RESISC45 dataset, achieving satisfying accuracies of 95.57% and 83.35%, respectively. The experimental results show that our framework is effective for remote sensing image classification. Furthermore, we believe this framework will be practical for further HRRS scene classification, since it requires less time in the training stage.
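The extractor-plus-boosted-classifier design can be sketched with scikit-learn. In this stand-in illustration, PCA replaces the CNN feature extractor and GradientBoostingClassifier replaces XGBoost (to keep the sketch dependency-free), on a small generic image dataset rather than HRRS scenes:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# feature extractor + gradient-boosted classifier, mirroring CNN + XGBoost
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    PCA(n_components=16),                                   # stand-in extractor
    GradientBoostingClassifier(n_estimators=50, random_state=0))
model.fit(Xtr, ytr)
acc = model.score(Xte, yte)   # held-out accuracy
```

The design choice is the same as in the paper: a learned feature representation feeds a tree-boosting classifier, which is cheap to train compared with end-to-end fine-tuning of the deep network.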
Realistic Expectations for Rock Identification.
ERIC Educational Resources Information Center
Westerback, Mary Elizabeth; Azer, Nazmy
1991-01-01
Presents a rock classification scheme for use by beginning students. The scheme is based on rock textures (glassy, crystalline, clastic, and organic framework) and observable structures (vesicles and graded bedding). Discusses problems in other rock classification schemes which may produce confusion, misidentification, and anxiety. (10 references)…
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory
ERIC Educational Resources Information Center
Lathrop, Quinn N.; Cheng, Ying
2013-01-01
Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…
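Both approaches rest on the distribution of observed scores given the latent trait. For the sum-score (Lee) approach, the total-score distribution under an IRT model is typically computed with the Lord-Wingersky recursion; a minimal sketch under a 2PL model with illustrative item parameters:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sum_score_dist(theta, a, b):
    """Lord-Wingersky recursion: distribution of the total sum score
    over all items, for an examinee with ability theta."""
    dist = np.array([1.0])                 # P(score = 0) before any item
    for ai, bi in zip(a, b):
        p = p_correct(theta, ai, bi)
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - p)         # item answered incorrectly
        new[1:] += dist * p                # item answered correctly
        dist = new
    return dist                            # dist[s] = P(score = s | theta)

a = np.array([1.0, 1.2, 0.8, 1.5])         # discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.5, 1.0])        # difficulties (illustrative)
dist = sum_score_dist(0.0, a, b)
cut = 2
p_pass = dist[cut:].sum()                  # P(classified "pass" | theta = 0)
```

Classification accuracy then follows by integrating the probability of a consistent decision over the trait distribution; the Rudner approach works analogously with the sampling distribution of the trait estimate.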
Property Specification Patterns for intelligence building software
NASA Astrophysics Data System (ADS)
Chun, Seungsu
2018-03-01
In this paper, through research on property specification patterns for the modal mu-calculus, we present a single pattern-based framework for intelligence building software. Following Dwyer's classification, property specification patterns are divided into state (S) and action (A) patterns, and each is further subdivided into strong (A) and weak (E) variants. Based on this hierarchical pattern classification, the mu-calculus analysis was applied to the pattern classification of examples used in an actual model checker. As a result, the approach not only produces a more accurate classification than existing classification systems, but the specified properties are also easier to create and understand.
DOT National Transportation Integrated Search
2001-02-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Barthel, Roland
2016-04-01
When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty of structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, where information is transferred from similar, well-explored and better-understood systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches to grouping hydrographs, mostly based on a similarity measure, can be found in the literature; these have previously been used only in local-scale studies. They are tested alongside different global feature extraction techniques. The resulting classifications are then compared to a visual "expert assessment"-based classification which serves as a reference. A ranking of the classification methods is carried out and differences shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition.
Additionally, we show examples of classes corresponding to geological descriptors. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria.
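The discrete-Fourier-transform feature extraction mentioned above can be sketched simply: each hydrograph is summarized by the magnitudes of its low-frequency coefficients, and groups fall out of those features. A toy illustration separating seasonal from non-seasonal synthetic hydrographs by the annual-frequency coefficient (the study's actual similarity measures and groupings are richer):

```python
import numpy as np

def dft_features(series, k=5):
    """First k DFT magnitude coefficients (excluding the DC term)
    as global features of a time series."""
    f = np.fft.rfft(series - np.mean(series))
    return np.abs(f[1:k + 1])

rng = np.random.default_rng(4)
t = np.arange(365)                          # one year of daily values
seasonal = np.sin(2 * np.pi * t / 365)
hydrographs = np.vstack(
    [seasonal + 0.1 * rng.normal(size=365) for _ in range(5)] +  # seasonal group
    [0.1 * rng.normal(size=365) for _ in range(5)])              # flashy/noise group

feats = np.array([dft_features(h) for h in hydrographs])
# crude 2-group split by dominant annual amplitude (coefficient at 1/365)
labels = (feats[:, 0] > np.median(feats[:, 0])).astype(int)
```

In the study, such global features are one of several representations (alongside series correlation, distances, and empirical mode decomposition) compared against the expert-assessment reference classification.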
Improving condition severity classification with an efficient active learning based framework
Nissim, Nir; Boland, Mary Regina; Tatonetti, Nicholas P.; Elovici, Yuval; Hripcsak, George; Shahar, Yuval; Moskovitch, Robert
2017-01-01
Classification of condition severity can be useful for discriminating among sets of conditions or phenotypes, for example when prioritizing patient care or for other healthcare purposes. Electronic Health Records (EHRs) represent a rich source of labeled information that can be harnessed for severity classification. The labeling of EHRs is expensive and in many cases requires employing professionals with a high level of expertise. In this study, we demonstrate the use of Active Learning (AL) techniques to decrease expert labeling efforts. We employ three AL methods and demonstrate their ability to reduce labeling efforts while effectively discriminating condition severity. We incorporate three AL methods into a new framework based on the original CAESAR (Classification Approach for Extracting Severity Automatically from Electronic Health Records) framework to create the Active Learning Enhancement framework (CAESAR-ALE). We applied CAESAR-ALE to a dataset containing 516 conditions of varying severity levels that were manually labeled by seven experts. Our dataset, called the “CAESAR dataset,” was created from the medical records of 1.9 million patients treated at Columbia University Medical Center (CUMC). All three AL methods decreased labelers’ efforts compared to the learning methods applied by the original CAESAR framework, in which the classifier was trained on the entire set of conditions; depending on the AL strategy used in the current study, the reduction ranged from 48% to 64%, which can result in significant savings, both in time and money. As for the PPV (precision) measure, CAESAR-ALE achieved more than 13% absolute improvement in the predictive capabilities of the framework when classifying conditions as severe. These results demonstrate the potential of AL methods to decrease the labeling efforts of medical experts, while increasing accuracy given the same (or even a smaller) number of acquired conditions. 
We also demonstrated that the methods included in the CAESAR-ALE framework (Exploitation and Combination_XA) are more robust to the use of human labelers with different levels of professional expertise. PMID:27016383
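The general shape of a pool-based active learning loop is as follows: train on the labeled set, query the expert for the most informative unlabeled example, and repeat. This sketch uses uncertainty (least-confident) sampling, a standard AL strategy; the paper's Exploitation and Combination_XA methods are related but not identical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# seed the labeled set with a few "expert-labeled" examples of each class
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labeled)]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                       # 20 label requests to the expert
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)   # least-confident sampling
    labeled.append(pool.pop(int(np.argmax(uncertainty))))

clf.fit(X[labeled], y[labeled])
acc = clf.score(X, y)                     # accuracy using only 30 labels
```

The point, as in CAESAR-ALE, is that a classifier trained on a small, actively chosen labeled subset can approach the accuracy of one trained on the full labeled set, cutting expert labeling effort.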
Improving condition severity classification with an efficient active learning based framework.
Nissim, Nir; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Shahar, Yuval; Moskovitch, Robert
2016-06-01
Classification of condition severity can be useful for discriminating among sets of conditions or phenotypes, for example when prioritizing patient care or for other healthcare purposes. Electronic Health Records (EHRs) represent a rich source of labeled information that can be harnessed for severity classification. The labeling of EHRs is expensive and in many cases requires employing professionals with a high level of expertise. In this study, we demonstrate the use of Active Learning (AL) techniques to decrease expert labeling efforts. We employ three AL methods and demonstrate their ability to reduce labeling efforts while effectively discriminating condition severity. We incorporate three AL methods into a new framework based on the original CAESAR (Classification Approach for Extracting Severity Automatically from Electronic Health Records) framework to create the Active Learning Enhancement framework (CAESAR-ALE). We applied CAESAR-ALE to a dataset containing 516 conditions of varying severity levels that were manually labeled by seven experts. Our dataset, called the "CAESAR dataset," was created from the medical records of 1.9 million patients treated at Columbia University Medical Center (CUMC). All three AL methods decreased labelers' efforts compared to the learning methods applied by the original CAESAR framework, in which the classifier was trained on the entire set of conditions; depending on the AL strategy used in the current study, the reduction ranged from 48% to 64%, which can result in significant savings, both in time and money. As for the PPV (precision) measure, CAESAR-ALE achieved more than 13% absolute improvement in the predictive capabilities of the framework when classifying conditions as severe. These results demonstrate the potential of AL methods to decrease the labeling efforts of medical experts, while increasing accuracy given the same (or even a smaller) number of acquired conditions. 
We also demonstrated that the methods included in the CAESAR-ALE framework (Exploitation and Combination_XA) are more robust to the use of human labelers with different levels of professional expertise. Copyright © 2016 Elsevier Inc. All rights reserved.
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method usually used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms each first fuse and learn feature representations from the multi-channel EEG signals. Multi-view JCR and JSR features are then integrated and sleep stages recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
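A key property of collaborative representation coding is that, unlike sparse (L1) coding, it has a closed-form ridge solution, which makes it efficient as a feature learner. A minimal sketch with a random dictionary (the paper's joint multi-channel formulation and MK-ELM classifier are omitted):

```python
import numpy as np

def cr_code(D, y, lam=0.1):
    """Collaborative representation: ridge-regularized coding of signal y
    over dictionary D. Closed form: x = (D^T D + lam*I)^{-1} D^T y."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)

rng = np.random.default_rng(5)
n_dim, n_atoms = 30, 10
D = rng.normal(size=(n_dim, n_atoms))   # dictionary (e.g. from K-means)
y = rng.normal(size=n_dim)              # one EEG epoch's feature vector
x = cr_code(D, y)                       # CR code used as a learned feature
recon_err = np.linalg.norm(y - D @ x)
```

The code vector x (or per-channel codes concatenated across views) then serves as the feature representation fed to the downstream classifier.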
McElroy, L. M.; Woods, D. M.; Yanes, A. F.; Skaro, A. I.; Daud, A.; Curtis, T.; Wymore, E.; Holl, J. L.; Abecassis, M. M.; Ladner, D. P.
2016-01-01
Objective Efforts to improve patient safety are challenged by the lack of universally agreed upon terms. The International Classification for Patient Safety (ICPS) was developed by the World Health Organization for this purpose. This study aimed to test the applicability of the ICPS to a surgical population. Design A web-based safety debriefing was sent to clinicians involved in surgical care of abdominal organ transplant patients. A multidisciplinary team of patient safety experts, surgeons and researchers used the data to develop a system of classification based on the ICPS. Disagreements were reconciled via consensus, and a codebook was developed for future use by researchers. Results A total of 320 debriefing responses were used for the initial review and codebook development. In total, the 320 debriefing responses contained 227 patient safety incidents (range: 0–7 per debriefing) and 156 contributing factors/hazards (0–5 per response). The most common severity classification was ‘reportable circumstance,’ followed by ‘near miss.’ The most common incident types were ‘resources/organizational management,’ followed by ‘medical device/equipment.’ Several aspects of surgical care were encompassed by more than one classification, including operating room scheduling, delays in care, trainee-related incidents, interruptions and handoffs. Conclusions This study demonstrates that a framework for patient safety can be applied to facilitate the organization and analysis of surgical safety data. Several unique aspects of surgical care require consideration, and by using a standardized framework for describing concepts, research findings can be compared and disseminated across surgical specialties. The codebook is intended for use as a framework for other specialties and institutions. PMID:26803539
Self-organizing ontology of biochemically relevant small molecules
2012-01-01
Background The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while providing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. Results To address this, we present a formal framework based on Semantic Web technologies for the automatic design of chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development. 
Conclusions We conclude that the proposed methodology can ease the burden of chemical data annotators and dramatically increase their productivity. We anticipate that the use of formal logic in our proposed framework will make chemical classification criteria more transparent to humans and machines alike and will thus facilitate predictive and integrative bioactivity model development. PMID:22221313
NASA Astrophysics Data System (ADS)
Hong, Liang
2013-10-01
The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. High-resolution imagery reveals finer geometric detail, and ground objects display rich texture, structure, shape, and hierarchical semantic characteristics, with individual landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has become widely accepted and applied in high-resolution image processing. This paper presents a classification method based on geo-ontology and conditional random fields. The proposed method consists of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving, spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random field framework; (4) hierarchical classification results are obtained based on the geo-ontology and conditional random fields. Finally, high-resolution GeoEye imagery is used to verify the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for classifying high-resolution remote sensing images.
A Web-Based Framework For a Time-Domain Warehouse
NASA Astrophysics Data System (ADS)
Brewer, J. M.; Bloom, J. S.; Kennedy, R.; Starr, D. L.
2009-09-01
The Berkeley Transients Classification Pipeline (TCP) uses a machine-learning classifier to automatically categorize transients from large data torrents and provide automated notification of astronomical events of scientific interest. As part of the training process, we created a large warehouse of light-curve sources with well-labelled classes that serve as priors to the classification engine. This web-based interactive framework, which we are now making public via DotAstro.org (http://dotastro.org/), allows us to ingest time-variable source data in a wide variety of formats and store it in a common internal data model. Data is passed between pipeline modules in a prototype XML representation of time-series format (VOTimeseries), which can also be emitted to collaborators through dotastro.org. After import, the sources can be visualized using Google Sky, light curves can be inspected interactively, and classifications can be manually adjusted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.
Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
Contemplating case mix: A primer on case mix classification and management.
Costa, Andrew P; Poss, Jeffery W; McKillop, Ian
2015-01-01
Case mix classifications are the frameworks that underlie many healthcare funding schemes, including so-called activity-based funding. Now more than ever, Canadian healthcare administrators are evaluating case mix-based funding and deciphering how it will influence their organizations. Case mix is a topic fraught with technical jargon and largely relegated to government agencies or private industries. This article provides an abridged review of case mix classification as well as its implications for management in healthcare. © 2015 The Canadian College of Health Leaders.
Gu, Yingxin; Brown, Jesslyn F.; Miura, Tomoaki; van Leeuwen, Willem J.D.; Reed, Bradley C.
2010-01-01
This study introduces a new geographic framework, phenological classification, for the conterminous United States based on Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) time-series data and a digital elevation model. The resulting pheno-class map is comprised of 40 pheno-classes, each having unique phenological and topographic characteristics. Cross-comparison of the pheno-classes with the 2001 National Land Cover Database indicates that the new map contains additional phenological and climate information. The pheno-class framework may be a suitable basis for the development of an Advanced Very High Resolution Radiometer (AVHRR)-MODIS NDVI translation algorithm and for various biogeographic studies.
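For context (standard remote-sensing background, not from this abstract), the NDVI values underlying the MODIS time series are computed per pixel from the red and near-infrared reflectances; a minimal sketch:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; dense green vegetation pushes values toward 1."""
    return (nir - red) / (nir + red)

# A vegetated pixel reflects strongly in NIR and weakly in red:
value = ndvi(nir=0.5, red=0.1)  # 0.4 / 0.6, roughly 0.667
```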
A proposed food breakdown classification system to predict food behavior during gastric digestion.
Bornhorst, Gail M; Ferrua, Maria J; Singh, R Paul
2015-05-01
The pharmaceutical industry has implemented the Biopharmaceutics Classification System (BCS), which is used to classify drug products based on their solubility and intestinal permeability. The BCS can help predict drug behavior in vivo, the rate-limiting mechanism of absorption, and the likelihood of an in vitro-in vivo correlation. Based on this analysis, we have proposed a Food Breakdown Classification System (FBCS) framework that can be used to classify solid foods according to their initial hardness and their rate of softening during physiological gastric conditions. The proposed FBCS will allow for prediction of food behavior during gastric digestion. The applicability of the FBCS framework in differentiating between dissimilar solid foods was demonstrated using four example foods: raw carrot, boiled potato, white rice, and brown rice. The initial hardness and rate of softening parameter (softening half time) were determined for these foods as well as their hypothesized FBCS class. In addition, we have provided future suggestions as to the methodological and analytical challenges that need to be overcome prior to widespread use and adoption of this classification system. The FBCS gives a framework that may be used to classify food products based on their material properties and their behavior during in vitro gastric digestion, and may also be used to predict in vivo food behavior. As consumer demand increases for functional and "pharma" food products, the food industry will need widespread testing of food products for their structural and functional performance during digestion. © 2015 Institute of Food Technologists®
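The abstract does not give the underlying softening model; as a hedged illustration only, if softening followed simple first-order kinetics H(t) = H0·exp(−kt), the softening half time could be recovered from a single hardness measurement (all numbers below are hypothetical):

```python
import math

def softening_half_time(h0, h_t, t):
    """Assuming first-order softening H(t) = H0 * exp(-k * t), recover the
    rate constant k from one (time, hardness) measurement and return the
    half time ln(2) / k."""
    k = math.log(h0 / h_t) / t
    return math.log(2) / k

# Hypothetical example: hardness drops from 10 N to 5 N after 12 min of
# simulated gastric digestion, so the softening half time is 12 min.
t_half = softening_half_time(h0=10.0, h_t=5.0, t=12.0)
```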
Wiegmann, D A; Shappell, S A
2001-11-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.
Using Computational Text Classification for Qualitative Research and Evaluation in Extension
ERIC Educational Resources Information Center
Smith, Justin G.; Tissing, Reid
2018-01-01
This article introduces a process for computational text classification that can be used in a variety of qualitative research and evaluation settings. The process leverages supervised machine learning based on an implementation of a multinomial Bayesian classifier. Applied to a community of inquiry framework, the algorithm was used to identify…
Consensus classification of posterior cortical atrophy
Crutch, Sebastian J.; Schott, Jonathan M.; Rabinovici, Gil D.; Murray, Melissa; Snowden, Julie S.; van der Flier, Wiesje M.; Dickerson, Bradford C.; Vandenberghe, Rik; Ahmed, Samrah; Bak, Thomas H.; Boeve, Bradley F.; Butler, Christopher; Cappa, Stefano F.; Ceccaldi, Mathieu; de Souza, Leonardo Cruz; Dubois, Bruno; Felician, Olivier; Galasko, Douglas; Graff-Radford, Jonathan; Graff-Radford, Neill R.; Hof, Patrick R.; Krolak-Salmon, Pierre; Lehmann, Manja; Magnin, Eloi; Mendez, Mario F.; Nestor, Peter J.; Onyike, Chiadi U.; Pelak, Victoria S.; Pijnenburg, Yolande; Primativo, Silvia; Rossor, Martin N.; Ryan, Natalie S.; Scheltens, Philip; Shakespeare, Timothy J.; González, Aida Suárez; Tang-Wai, David F.; Yong, Keir X. X.; Carrillo, Maria; Fox, Nick C.
2017-01-01
Introduction A classification framework for posterior cortical atrophy (PCA) is proposed to improve the uniformity of definition of the syndrome in a variety of research settings. Methods Consensus statements about PCA were developed through a detailed literature review, the formation of an international multidisciplinary working party which convened on four occasions, and a Web-based quantitative survey regarding symptom frequency and the conceptualization of PCA. Results A three-level classification framework for PCA is described comprising both syndrome- and disease-level descriptions. Classification level 1 (PCA) defines the core clinical, cognitive, and neuroimaging features and exclusion criteria of the clinico-radiological syndrome. Classification level 2 (PCA-pure, PCA-plus) establishes whether, in addition to the core PCA syndrome, the core features of any other neurodegenerative syndromes are present. Classification level 3 (PCA attributable to AD [PCA-AD], Lewy body disease [PCA-LBD], corticobasal degeneration [PCA-CBD], prion disease [PCA-prion]) provides a more formal determination of the underlying cause of the PCA syndrome, based on available pathophysiological biomarker evidence. The issue of additional syndrome-level descriptors is discussed in relation to the challenges of defining stages of syndrome severity and characterizing phenotypic heterogeneity within the PCA spectrum. Discussion There was strong agreement regarding the definition of the core clinico-radiological syndrome, meaning that the current consensus statement should be regarded as a refinement, development, and extension of previous single-center PCA criteria rather than any wholesale alteration or redescription of the syndrome. 
The framework and terminology may facilitate the interpretation of research data across studies, be applicable across a broad range of research scenarios (e.g., behavioral interventions, pharmacological trials), and provide a foundation for future collaborative work. PMID:28259709
Consensus classification of posterior cortical atrophy.
Crutch, Sebastian J; Schott, Jonathan M; Rabinovici, Gil D; Murray, Melissa; Snowden, Julie S; van der Flier, Wiesje M; Dickerson, Bradford C; Vandenberghe, Rik; Ahmed, Samrah; Bak, Thomas H; Boeve, Bradley F; Butler, Christopher; Cappa, Stefano F; Ceccaldi, Mathieu; de Souza, Leonardo Cruz; Dubois, Bruno; Felician, Olivier; Galasko, Douglas; Graff-Radford, Jonathan; Graff-Radford, Neill R; Hof, Patrick R; Krolak-Salmon, Pierre; Lehmann, Manja; Magnin, Eloi; Mendez, Mario F; Nestor, Peter J; Onyike, Chiadi U; Pelak, Victoria S; Pijnenburg, Yolande; Primativo, Silvia; Rossor, Martin N; Ryan, Natalie S; Scheltens, Philip; Shakespeare, Timothy J; Suárez González, Aida; Tang-Wai, David F; Yong, Keir X X; Carrillo, Maria; Fox, Nick C
2017-08-01
A classification framework for posterior cortical atrophy (PCA) is proposed to improve the uniformity of definition of the syndrome in a variety of research settings. Consensus statements about PCA were developed through a detailed literature review, the formation of an international multidisciplinary working party which convened on four occasions, and a Web-based quantitative survey regarding symptom frequency and the conceptualization of PCA. A three-level classification framework for PCA is described comprising both syndrome- and disease-level descriptions. Classification level 1 (PCA) defines the core clinical, cognitive, and neuroimaging features and exclusion criteria of the clinico-radiological syndrome. Classification level 2 (PCA-pure, PCA-plus) establishes whether, in addition to the core PCA syndrome, the core features of any other neurodegenerative syndromes are present. Classification level 3 (PCA attributable to AD [PCA-AD], Lewy body disease [PCA-LBD], corticobasal degeneration [PCA-CBD], prion disease [PCA-prion]) provides a more formal determination of the underlying cause of the PCA syndrome, based on available pathophysiological biomarker evidence. The issue of additional syndrome-level descriptors is discussed in relation to the challenges of defining stages of syndrome severity and characterizing phenotypic heterogeneity within the PCA spectrum. There was strong agreement regarding the definition of the core clinico-radiological syndrome, meaning that the current consensus statement should be regarded as a refinement, development, and extension of previous single-center PCA criteria rather than any wholesale alteration or redescription of the syndrome. The framework and terminology may facilitate the interpretation of research data across studies, be applicable across a broad range of research scenarios (e.g., behavioral interventions, pharmacological trials), and provide a foundation for future collaborative work. Copyright © 2017 The Authors. 
Published by Elsevier Inc. All rights reserved.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which required large training sets of identically sized samples. The input images are shifted and cropped to generate sub-images of a common size. Dropout is then applied to the generated sub-images, increasing sample diversity and preventing overfitting. Proper subsets of equal size are randomly drawn from the sub-image set, with no two subsets identical, and these subsets serve as input layers for the convolutional neural network. Through the convolution, pooling, fully connected, and output layers, the classification loss rates of the test and training sets are obtained. In a classification experiment on red blood cells, white blood cells, and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.
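As a rough sketch of the move-and-crop step described in this abstract (the function name, window size, and stride are illustrative assumptions, not from the paper), fixed-size sub-images can be generated from inputs of varying size:

```python
def crop_subimages(image, size, stride):
    """Slide a fixed-size window over an image (a 2-D list of pixel values)
    and return every full sub-image, so that inputs of different sizes all
    yield training samples of one common size."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            patch = [row[c:c + size] for row in image[r:r + size]]
            patches.append(patch)
    return patches

# A toy 6x6 "image": cropping 4x4 windows with stride 2 gives 2x2 = 4 patches.
image = [[r * 6 + c for c in range(6)] for r in range(6)]
patches = crop_subimages(image, size=4, stride=2)
```

Random fixed-size subsets of such patches would then form the network's training batches.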
NASA Astrophysics Data System (ADS)
Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.
2017-11-01
Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds and physical support for a diversity of other living organisms. However, irrespective of such ecological significance, CWCs remain vulnerable to human pressures such as fishing, pollution, ocean acidification and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems, including CWCs, is one of the aims of the Marine Protected Area networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European seas. One such classification system, EUNIS, enables a broad-level classification of the deep sea based on abiotic and geomorphological features. More detailed, lower, biotope-related levels are currently under-developed, particularly with regard to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated into existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management. 
The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR and FAO, which largely underrepresent CWC habitats.
A trait-based framework to understand life history of mycorrhizal fungi.
Chagnon, Pierre-Luc; Bradley, Robert L; Maherali, Hafiz; Klironomos, John N
2013-09-01
Despite the growing appreciation for the functional diversity of arbuscular mycorrhizal (AM) fungi, our understanding of the causes and consequences of this diversity is still poor. In this opinion article, we review published data on AM fungal functional traits and attempt to identify major axes of life history variation. We propose that a life history classification system based on the grouping of functional traits, such as Grime's C-S-R (competitor, stress tolerator, ruderal) framework, can help to explain life history diversification in AM fungi, successional dynamics, and the spatial structure of AM fungal assemblages. Using a common life history classification framework for both plants and AM fungi could also help in predicting probable species associations in natural communities and increase our fundamental understanding of the interaction between land plants and AM fungi. Copyright © 2013 Elsevier Ltd. All rights reserved.
2018-01-01
Hyperspectral image classification with a limited number of training samples, without loss of accuracy, is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with state-of-the-art methods. PMID:29304512
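The abstract does not specify the fuzziness measure; one common choice (an assumption here, not necessarily the paper's) scores each sample by the entropy-like fuzziness of its class-membership vector and picks the fuzziest samples as active-learning candidates:

```python
import math

def sample_fuzziness(memberships):
    """Fuzziness of one sample's class-membership vector (values in (0, 1)).
    Maximal when every membership is 0.5, zero when memberships are crisp."""
    total = 0.0
    for mu in memberships:
        mu = min(max(mu, 1e-12), 1 - 1e-12)  # avoid log(0)
        total -= mu * math.log(mu) + (1 - mu) * math.log(1 - mu)
    return total / len(memberships)

def select_candidates(membership_matrix, k):
    """Return indices of the k fuzziest samples: the candidates to add
    to the training set in an active-learning round."""
    scores = [(sample_fuzziness(m), i) for i, m in enumerate(membership_matrix)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Sample 1 is ambiguous (memberships near uniform); samples 0 and 2 are crisp.
M = [[0.95, 0.03, 0.02], [0.40, 0.35, 0.25], [0.90, 0.05, 0.05]]
picked = select_candidates(M, k=1)
```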
Optimized extreme learning machine for urban land cover classification using hyperspectral imagery
NASA Astrophysics Data System (ADS)
Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam
2017-12-01
This work presents a new urban land cover classification framework using a firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, the effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three hyperspectral datasets recorded by different sensors, namely HYDICE, HyMap, and AVIRIS, were used. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
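The abstract does not detail the FA implementation; as a hedged sketch of the idea only, here is a greedy firefly-style variant (moves are kept only when they improve the objective, a simplification of the standard algorithm) applied to a toy objective standing in for cross-validation error over the (C, σ) search space:

```python
import math, random

def firefly_minimize(objective, bounds, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly-algorithm sketch: each firefly is pulled toward every
    brighter (lower-objective) one with attractiveness beta0*exp(-gamma*r^2)
    plus a small random step; a move is kept only if it improves brightness.
    The random-step scale alpha decays each iteration."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    light = [objective(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:  # j is brighter: propose a move of i
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    cand = [min(max(pop[i][d] + beta * (pop[j][d] - pop[i][d])
                                    + alpha * (rng.random() - 0.5), lo), hi)
                            for d, (lo, hi) in enumerate(bounds)]
                    f = objective(cand)
                    if f < light[i]:  # greedy acceptance
                        pop[i], light[i] = cand, f
        alpha *= 0.97
    best = min(range(n_fireflies), key=lambda k: light[k])
    return pop[best], light[best]

# Toy stand-in for ELM cross-validation error over (log C, log sigma):
# a smooth bowl with its minimum at (2.0, -1.0).
obj = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
best, err = firefly_minimize(obj, bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting, `objective` would evaluate kernel-ELM accuracy for a candidate (C, σ) pair rather than this toy bowl.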
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high dimensional vectors. Next, the extracted features of a specified position are transformed to a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on real hyperspectral dataset demonstrate the effectiveness of the proposed method.
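As an illustrative sketch (not the paper's code), the basic 3x3 LBP operator thresholds a pixel's eight neighbours against the centre and packs the bits into a texture code, and the resulting codes can be stacked with spectral values into one high-dimensional feature vector as described above:

```python
def lbp_code(image, r, c):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of
    pixel (r, c) against the centre value and pack the bits clockwise."""
    center = image[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # clockwise from
               (1, 1), (1, 0), (1, -1), (0, -1)]     # the top-left corner
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def stacked_feature(spectral_vector, texture_codes):
    """Stack per-pixel spectral values and LBP texture codes into one
    feature vector, as in the spectral-plus-texture stacking step."""
    return list(spectral_vector) + list(texture_codes)

# Only the top row is brighter than the centre, so only bits 0-2 are set.
img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
code = lbp_code(img, 1, 1)  # 0b00000111 = 7
```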
NASA Technical Reports Server (NTRS)
Maslanik, J. A.; Key, J.
1992-01-01
An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a `confidence' measure for the classified data, and under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.
Uav-Based Crops Classification with Joint Features from Orthoimage and Dsm Data
NASA Astrophysics Data System (ADS)
Liu, B.; Shi, Y.; Duan, Y.; Wu, W.
2018-04-01
Accurate crop classification remains a challenging task because the same crop can exhibit different spectra and different crops can share the same spectrum. Recently, UAV-based remote sensing has gained popularity, not only for its high spatial and temporal resolution but also for its ability to obtain spectral and spatial data at the same time. This paper focuses on how to take full advantage of spatial and spectral features to improve crop classification accuracy, based on a UAV platform equipped with a general digital camera. Texture and spatial features extracted from the RGB orthoimage and the digital surface model of the monitored area are analysed and integrated within an SVM classification framework. Extensive experimental results indicate that the overall classification accuracy improves drastically, from 72.9 % to 94.5 %, when the spatial features are combined, which verifies the feasibility and effectiveness of the proposed method.
Parker, Michael T.
2016-01-01
Recent advances in sequencing technologies have opened the door for the classification of the human virome. While taxonomic classification can be applied to the viruses identified in such studies, this gives no information as to the type of interaction the virus has with the host. As follow-up studies are performed to address these questions, the description of these virus-host interactions would be greatly enriched by applying a standard set of definitions that typify them. This paper describes a framework with which all members of the human virome can be classified based on principles of ecology. The scaffold not only enables categorization of the human virome, but can also inform research aimed at identifying novel virus-host interactions. PMID:27698618
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural network (CNN) is of great interest in machine learning and has demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits the spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, which exploits diverse region-based inputs to learn contextual interaction features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results on widely used hyperspectral image data sets demonstrate that the proposed method surpasses conventional deep learning-based classifiers and other state-of-the-art classifiers.
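The final prediction step mentioned in this abstract (a softmax layer over the joint representation, followed by an arg-max label) can be sketched minimally:

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_label(logits):
    """Pixel label = arg max of the softmax class probabilities."""
    probs = softmax(logits)
    return max(range(len(probs)), key=lambda k: probs[k])

# Logits from the fully connected network for one pixel (hypothetical values):
label = predict_label([1.2, 3.4, 0.1])  # class 1 has the largest logit
```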
Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification
NASA Astrophysics Data System (ADS)
Li, R.; Zhang, T.; Geng, R.; Wang, L.
2018-04-01
In order to classify high spatial resolution images more accurately, a hierarchical rule-based object-based classification framework was developed using a high-resolution image together with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed for the whole process. First, the fuzzy-based segmentation parameter (FBSP) optimizer is used to obtain the optimal scale parameters for different land cover types. Second, using the segmented regions as basic units, classification rules for the various land cover types are established from the spectral, morphological and texture features extracted from the optical images, and from the height feature derived from LiDAR. Third, the object classification results are evaluated using the confusion matrix, overall accuracy and Kappa coefficient. The results show that combining the aerial image with the airborne LiDAR data yields higher accuracy.
Classification as clustering: a Pareto cooperative-competitive GP approach.
McIntyre, Andrew R; Heywood, Malcolm I
2011-01-01
Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.
Tong, Tong; Ledig, Christian; Guerrero, Ricardo; Schuh, Andreas; Koikkalainen, Juha; Tolonen, Antti; Rhodius, Hanneke; Barkhof, Frederik; Tijms, Betty; Lemstra, Afina W; Soininen, Hilkka; Remes, Anne M; Waldemar, Gunhild; Hasselbalch, Steen; Mecocci, Patrizia; Baroni, Marta; Lötjönen, Jyrki; Flier, Wiesje van der; Rueckert, Daniel
2017-01-01
Differentiating between types of neurodegenerative disease is not only crucial in clinical practice, where treatment decisions have to be made, but also has significant potential for the enrichment of clinical trials. The purpose of this study is to develop a classification framework for distinguishing the four most common neurodegenerative diseases (Alzheimer's disease, frontotemporal lobe degeneration, dementia with Lewy bodies and vascular dementia) as well as patients with subjective memory complaints. Different biomarkers, including image-derived features (volume features, region-wise grading features) and non-imaging features (CSF measures), were extracted for each subject. In clinical practice, the prevalence of the different dementia types is imbalanced, which poses a challenge for learning an effective classification model. We therefore propose the use of the RUSBoost algorithm to train classifiers and handle this class-imbalance problem. Furthermore, a sparsity-based multi-class feature selection method is integrated into the proposed framework to improve classification performance; it also provides a way of investigating the importance of different features and regions. On a dataset of 500 subjects, the proposed framework achieved a high accuracy of 75.2% with a balanced accuracy of 69.3% for the five-class classification using ten-fold cross-validation, which is significantly better than the results using a support vector machine or random forest, demonstrating the feasibility of the proposed framework to support clinical decision making.
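The imbalance-handling step can be sketched with a simplified stand-in for RUSBoost: random undersampling of the majority class followed by boosting. The real algorithm resamples inside every boosting round, and the data here are synthetic placeholders, not clinical biomarkers:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
# Imbalanced synthetic cohort: 450 "common" vs 50 "rare" diagnosis samples.
X = np.vstack([rng.normal(0.0, 1.0, (450, 10)), rng.normal(1.0, 1.0, (50, 10))])
y = np.array([0] * 450 + [1] * 50)

# Undersample the majority class to the minority size, then boost.
major = rng.choice(np.where(y == 0)[0], size=50, replace=False)
idx = np.concatenate([major, np.where(y == 1)[0]])
clf = AdaBoostClassifier(n_estimators=50).fit(X[idx], y[idx])

# Balanced accuracy over the full (imbalanced) data; note the training
# subset is included here, which is acceptable only for a sketch.
balanced_acc = 0.5 * ((clf.predict(X[y == 0]) == 0).mean()
                      + (clf.predict(X[y == 1]) == 1).mean())
```

Balanced accuracy, rather than raw accuracy, is the appropriate yardstick here: a classifier that always predicts the majority class would score 90% raw accuracy but only 50% balanced accuracy.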
River reach classification for the Greater Mekong Region at high spatial resolution
NASA Astrophysics Data System (ADS)
Ouellet Dallaire, C.; Lehner, B.
2014-12-01
River classifications have been used in river health and ecological assessments as coarse proxies for aquatic biodiversity when comprehensive biological and/or species data are unavailable. Currently there are no river classifications or biological data available in a consistent format for the extent of the Greater Mekong Region (GMR; including the Irrawaddy, Salween, Chao Praya, Mekong and Red River basins). The current project proposes a new river habitat classification for the region, facilitated by the HydroSHEDS (HYDROlogical SHuttle Elevation Derivatives at multiple Scales) database at 500 m pixel resolution. The classification is based on the Global River Classification framework, which relies on the creation of multiple sub-classifications from different disciplines; the classes from these sub-classifications are then combined into final classes to create a holistic river reach classification. For the GMR, a final habitat classification was created from three sub-classifications: a hydrological sub-classification based only on discharge indices (river size and flow variability); a physio-climatic sub-classification based on large-scale indices of climate and elevation (biomes, ecoregions and elevation); and a geomorphological sub-classification based on local morphology (presence of floodplains, reach gradient and sand transport). Key variables and thresholds were identified in collaboration with local experts to ensure that regional knowledge was included. The final classification comprises 54 unique final classes derived from the 3 sub-classifications, each of which has fewer than 15 classes. The resulting classifications are driven by abiotic variables and do not include biological data, but they represent a state-of-the-art product based on the best available (mostly global) data. The most common river habitat type is the "dry broadleaf, low gradient, very small river".
These classifications could be applied in a wide range of hydro-ecological assessments and would be useful for a variety of stakeholders such as NGOs, governments and researchers.
Phylogenetic classification of bony fishes.
Betancur-R, Ricardo; Wiley, Edward O; Arratia, Gloria; Acero, Arturo; Bailly, Nicolas; Miya, Masaki; Lecointre, Guillaume; Ortí, Guillermo
2017-07-06
Fish classifications, like those of most other taxonomic groups, are being transformed drastically as new molecular phylogenies provide support for natural groups that were unanticipated by previous studies. A brief review of the main criteria used by ichthyologists to define their classifications during the last 50 years, however, reveals slow progress towards using an explicit phylogenetic framework. Instead, the trend has been to rely, in varying degrees, on deep-rooted anatomical concepts and authority, often mixing taxa with explicit phylogenetic support with arbitrary groupings. Two leading sources in ichthyology frequently used for fish classifications (JS Nelson's volumes of Fishes of the World and W. Eschmeyer's Catalog of Fishes) fail to adopt a global phylogenetic framework despite much recent progress made towards the resolution of the fish Tree of Life. The first explicit phylogenetic classification of bony fishes was published in 2013, based on a comprehensive molecular phylogeny ( www.deepfin.org ). We here update the first version of that classification by incorporating the most recent phylogenetic results. The updated classification presented here is based on phylogenies inferred using molecular and genomic data for nearly 2000 fishes. A total of 72 orders (and 79 suborders) are recognized in this version, compared with 66 orders in version 1. The phylogeny resolves the placement of 410 families, or ~80% of the 514 families of bony fishes currently recognized. The ordinal status of 30 percomorph families included in this study, however, remains uncertain (incertae sedis in the series Carangaria, Ovalentaria, or Eupercaria). Comments to support taxonomic decisions, and comparisons with conflicting taxonomic groups proposed by others, are presented. We also highlight cases where morphological support exists for the groups being classified.
This version of the phylogenetic classification of bony fishes is substantially improved, providing resolution for more taxa than previous versions, based on more densely sampled phylogenetic trees. The classification presented in this study represents the most up-to-date hypothesis of the Tree of Life of fishes.
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
Ecosystem classifications based on summer and winter conditions.
Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q
2013-04-01
Ecosystem classifications map an area into relatively homogeneous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along summertime productivity (Moderate Resolution Imaging Spectroradiometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (Special Sensor Microwave/Imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or of climate change responses. General purpose ecosystem classifications should include both sets of drivers.
Classifications should be carefully, quantitatively, and comparatively evaluated relative to a particular application prior to their implementation as monitoring and assessment frameworks.
Classification of melanoma lesions using sparse coded features and random forests
NASA Astrophysics Data System (ADS)
Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré
2016-03-01
Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable, provided it is diagnosed early, which is a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; and (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, using a dictionary of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3% respectively, for a smaller dictionary of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
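A minimal sketch of the sparse-coding-plus-Random-Forests pipeline, assuming a random unit-norm dictionary and synthetic descriptors in place of the learned dictionary and real SIFT/color features; the sparsity level of 2 mirrors the paper's best setting:

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Synthetic stand-ins for dense image descriptors (not real SIFT vectors).
n, dim, n_atoms = 120, 16, 32
X = rng.normal(size=(n, dim))
y = (X[:, 0] > 0).astype(int)

# Random unit-norm dictionary; the paper learns its dictionary from data.
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Sparsity level 2: each descriptor is encoded with at most two atoms (OMP).
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=2)
codes = coder.transform(X)

# Classify the sparse codes rather than the raw descriptors.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes[:80], y[:80])
acc = rf.score(codes[80:], y[80:])
```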
Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram
2015-08-01
In this Letter, the authors present a unified framework for fall event detection and classification using cumulants extracted from acceleration (ACC) signals acquired with a single waist-mounted triaxial accelerometer. The main objective is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. The first level of the proposed hierarchical decision tree algorithm performs fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and SVM. Finally, human activity classification is performed using the second-order cumulants and SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers using different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants, the signal magnitude vector and the signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features with the SVM classifier achieve optimal detection and classification rates above 95%, as well as the lowest false alarm rate, 1.03%.
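The cumulant features can be sketched directly from central moments (for a centered signal, κ2 = μ2, κ3 = μ3, κ4 = μ4 − 3μ2², κ5 = μ5 − 10μ3μ2). The "fall" and "non-fall" signals below are synthetic stand-ins for real accelerometer windows:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cumulants(x):
    """Second- to fifth-order cumulants expressed via central moments."""
    xc = x - x.mean()
    m = {k: np.mean(xc ** k) for k in range(2, 6)}
    return np.array([m[2],                      # kappa_2
                     m[3],                      # kappa_3
                     m[4] - 3 * m[2] ** 2,      # kappa_4
                     m[5] - 10 * m[3] * m[2]])  # kappa_5

rng = np.random.default_rng(3)
# Synthetic windows: "falls" as heavy-tailed bursts, "non-falls" as
# near-Gaussian activity (illustrative only, not real ACC data).
falls = [rng.standard_t(df=3, size=256) * 2.0 for _ in range(40)]
walks = [rng.normal(size=256) for _ in range(40)]
X = np.array([cumulants(s) for s in falls + walks])
y = np.array([1] * 40 + [0] * 40)

# Standardize the cumulant features, then classify with an SVM.
clf = make_pipeline(StandardScaler(), SVC()).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```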
PROTAX-Sound: A probabilistic framework for automated animal sound identification
de Camargo, Ulisses Moliterno; Somervuo, Panu; Ovaskainen, Otso
2017-01-01
Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (segments of an audio file that contain a vocalization to be classified), extracts acoustic features from them and compares them with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents a species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities. PMID:28863178
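The core classification idea can be sketched as a thresholded multinomial regression, where a low maximum probability flags a vocalization as possibly outside the reference database. PROTAX-Sound models the unknown-species probability explicitly; this is a simplified illustration on synthetic features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Hypothetical acoustic feature vectors for three reference species.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)
model = LogisticRegression(max_iter=1000).fit(X, y)

def classify(segment, threshold=0.8):
    """Label of the most probable species, or -1 when no reference is likely."""
    p = model.predict_proba(segment.reshape(1, -1))[0]
    return int(p.argmax()) if p.max() >= threshold else -1

label_known = classify(np.full(4, 2.0))    # lies in the species-1 cluster
label_unknown = classify(np.full(4, 1.0))  # ambiguous: between two clusters
```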
Analysis of signals under compositional noise with applications to SONAR data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, J. Derek; Wu, Wei; Srivastava, Anuj
2013-07-09
In this paper, we consider the problem of denoising and classification of SONAR signals observed under compositional noise, i.e., signals that have been warped randomly along the x-axis. Traditional techniques do not account for such noise and, consequently, cannot provide a robust classification of signals. We apply a recent framework that: 1) uses a distance-based objective function for data alignment and noise reduction; and 2) leads to warping-invariant distances between signals for robust clustering and classification. We use this framework to introduce two distances that can be used for signal classification: a) a y-distance, which is the distance between the aligned signals; and b) an x-distance, which measures the amount of warping needed to align the signals. We focus on the task of clustering and classifying objects using acoustic spectrum (acoustic color), which is complicated by the uncertainties in aspect angles at data collection. Small changes in the aspect angles corrupt signals in a way that amounts to compositional noise. We demonstrate the use of the developed metrics in the classification of acoustic color data and highlight improvements in signal classification over current methods.
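A minimal sketch of the two distances, using a plain dynamic-programming alignment as a stand-in for the paper's elastic-metric framework: the y-distance is the residual amplitude mismatch after alignment, and the x-distance accumulates how far the warping path strays from the diagonal:

```python
import numpy as np

def align_distances(f, g):
    """DP alignment of two 1-D signals: returns (y_dist, x_dist).

    y_dist is the amplitude mismatch after alignment; x_dist measures how
    far the optimal warping path deviates from the diagonal (no warping).
    """
    n, m = len(f), len(g)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = (f[i - 1] - g[j - 1]) ** 2 + min(D[i - 1, j - 1],
                                                       D[i - 1, j], D[i, j - 1])
    i, j, warp = n, m, 0.0
    while i > 1 or j > 1:  # backtrack the optimal path to measure warping
        steps = [(D[a, b], a, b)
                 for a, b in ((i - 1, j - 1), (i - 1, j), (i, j - 1))
                 if a >= 1 and b >= 1]
        _, i, j = min(steps)
        warp += abs(i / n - j / m)
    return np.sqrt(D[n, m]), warp

t = np.linspace(0.0, 1.0, 100)
sig = np.sin(2 * np.pi * t)
warped = np.sin(2 * np.pi * t ** 1.5)  # same shape under compositional noise
y_d, x_d = align_distances(sig, warped)
unaligned = np.sqrt(np.sum((sig - warped) ** 2))
```

For these two signals the aligned y-distance is smaller than the plain Euclidean distance, while the nonzero x-distance records the warping that was needed, which is exactly the information the paper uses for classification.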
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-06-12
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data triggering the event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. The framework identifies connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS also provides the ability to send notifications over e-mail or pager without relying on a human operator. It is implemented as an Oracle application that uses intrinsic functions of the Oracle relational database management system, and it is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of the SMDB.
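The subscription-rule routing idea can be sketched as attribute matching: a message is delivered to every channel whose rule attributes all match. The channel names and attributes below are illustrative, not the actual SMDB schema:

```python
# Each subscription maps a channel to the message attributes it requires
# (hypothetical names; the real system uses mission, classification,
# sub-classification, data type, and other tags).
SUBSCRIPTIONS = {
    "ops-pager": {"classification": "alert", "mission": "MRO"},
    "science-email": {"application_source": "telemetry"},
}

def route(message):
    """Return the channels whose every rule attribute matches the message."""
    return sorted(
        channel
        for channel, rule in SUBSCRIPTIONS.items()
        if all(message.get(attr) == val for attr, val in rule.items())
    )

msg = {"classification": "alert", "mission": "MRO",
       "application_source": "telemetry"}
channels = route(msg)  # this message satisfies both subscriptions
```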
Engineering Change Management Method Framework in Mechanical Engineering
NASA Astrophysics Data System (ADS)
Stekolschik, Alexander
2016-11-01
Engineering changes affect various process chains inside and outside a company, and they are a major source of error costs and schedule shifts. In fact, 30 to 50 per cent of development costs result from technical changes. Controlling engineering change processes can help avoid errors and risks, and contributes to cost optimization and a shorter time to market. This paper presents a method framework for controlling engineering changes at mechanical engineering companies. The developed classification of engineering changes, and the process requirements derived from it, form the basis for the method framework. The framework comprises two main areas: special data objects managed in different engineering IT tools, and a process framework. Objects from both areas are building blocks that can be selected for the overall business process based on the engineering process type and the change classification. The process framework contains steps for the creation of change objects (both for the overall change and for individual parts), change implementation, and release. Companies can select single process building blocks from the framework, depending on the product development process and the change impact. The developed change framework has been implemented at a division (10,000 employees) of a large German mechanical engineering company.
Creating a Canonical Scientific and Technical Information Classification System for NCSTRL+
NASA Technical Reports Server (NTRS)
Tiffany, Melissa E.; Nelson, Michael L.
1998-01-01
The purpose of this paper is to describe the new subject classification system for the NCSTRL+ project. NCSTRL+ is a canonical digital library (DL) based on the Networked Computer Science Technical Report Library (NCSTRL). The current NCSTRL+ classification system uses the NASA Scientific and Technical (STI) subject classifications, which has a bias towards the aerospace, aeronautics, and engineering disciplines. Examination of other scientific and technical information classification systems showed similar discipline-centric weaknesses. Traditional, library-oriented classification systems represented all disciplines, but were too generalized to serve the needs of a scientific and technically oriented digital library. Lack of a suitable existing classification system led to the creation of a lightweight, balanced, general classification system that allows the mapping of more specialized classification schemes into the new framework. We have developed the following classification system to give equal weight to all STI disciplines, while being compact and lightweight.
Mumtaz, Wajid; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Malik, Aamir Saeed
2018-02-01
Major depressive disorder (MDD) is a debilitating mental illness that can cause functional disability and become a social burden. Accurate and early diagnosis of depression is challenging. This paper proposes a machine learning framework involving EEG-derived synchronization likelihood (SL) features as input data for automatic diagnosis of MDD. It was hypothesized that EEG-based SL features could discriminate MDD patients from healthy controls with an accuracy better than that of measures such as interhemispheric coherence and mutual information. In this work, classification models such as support vector machine (SVM), logistic regression (LR) and Naïve Bayesian (NB) were employed to model the relationship between the EEG features and the study groups (MDD patients and healthy controls) and ultimately to discriminate between the study participants. The results indicated that the classification rates were better than chance. More specifically, the study achieved SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95% and f-measure = 0.97; LR classification accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6% and f-measure = 0.90; NB classification accuracy = 93.6%, sensitivity = 100%, specificity = 87.9% and f-measure = 0.95. In conclusion, SL could be a promising method for diagnosing depression. The findings could be generalized to develop a robust CAD-based tool that may help for clinical purposes.
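The classifier comparison can be sketched as follows, with synthetic stand-ins for the EEG-derived SL features; the accuracies reported in the abstract come from the real data, not from this toy setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# Synthetic stand-ins for synchronization-likelihood features of
# healthy controls (class 0) vs. MDD patients (class 1).
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(0.8, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# Same feature matrix, three candidate models, 5-fold cross-validation.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in [("SVM", SVC()),
                            ("LR", LogisticRegression(max_iter=1000)),
                            ("NB", GaussianNB())]}
```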
An ensemble predictive modeling framework for breast cancer classification.
Nagarajan, Radhakrishnan; Upreti, Meenakshi
2017-12-01
Molecular changes often precede the clinical presentation of diseases and can be useful surrogates with the potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches, such as classification, that can predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional projection of the samples whose dimensionality is comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets, obtained from a two-dimensional projection of the samples, in conjunction with a majority-voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by selecting only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. The significance of a subset of genes with high degree centrality in the network abstractions across the poor-prognosis samples is also discussed.
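The ensemble design can be sketched as base learners trained on random two-dimensional feature projections and combined by majority vote; the cosine-distance member selection and the graph modeling are omitted for brevity, and the data are synthetic:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
# Synthetic two-class "expression profiles" (120 samples, 20 markers).
X = np.vstack([rng.normal(0.0, 1.0, (60, 20)), rng.normal(0.9, 1.0, (60, 20))])
y = np.array([0] * 60 + [1] * 60)
train, test = np.arange(0, 120, 2), np.arange(1, 120, 2)

# Each base classifier sees a 2-D projection (a random feature pair);
# the paper additionally filters members by low average cosine distance.
pairs = [rng.choice(20, size=2, replace=False) for _ in range(15)]
members = [DecisionTreeClassifier(max_depth=3, random_state=0)
           .fit(X[np.ix_(train, p)], y[train]) for p in pairs]
votes = np.array([m.predict(X[np.ix_(test, p)]) for m, p in zip(members, pairs)])
majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote of members
acc = (majority == y[test]).mean()
```

The majority vote typically beats the individual 2-D learners because their errors, driven by different feature pairs, are only partially correlated.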
Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-03
The linear synthesis model-based dictionary learning framework has achieved remarkable performance in image classification over the last decade. Behaving as a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we provide a deeper insight, demonstrating that NACM is capable of simultaneously learning the task-adapted feature transformation and a regularization that encodes our preferences, domain prior knowledge and task-oriented supervised information into the features. The proposed NACM is applied to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). Theoretical analysis and experimental results demonstrate that DNAOL not only achieves better, or at least competitive, classification accuracies compared with state-of-the-art algorithms, but can also dramatically reduce the time complexity of both the training and testing phases.
A classification scheme for edge-localized modes based on their probability distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.
We present an automated classification scheme that is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space, and classification is conducted using the notion of nearest neighbors. The framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types that is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.
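The core mechanism above, nearest-neighbor classification of objects represented as probability distributions, can be sketched with discrete histograms compared under the Hellinger distance (a metric on distributions). The JET/ELM specifics are not reproduced; the two synthetic "ELM type" classes below are assumptions for illustration.

```python
# 1-NN classification of probability distributions via Hellinger distance.
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def nn_classify(query, train_dists, train_labels):
    # Assign the label of the nearest training distribution.
    d = [hellinger(query, p) for p in train_dists]
    return train_labels[int(np.argmin(d))]

rng = np.random.default_rng(1)

def sample_hist(center):
    # Noisy histogram peaked at `center` (Poisson counts, then normalized).
    counts = rng.poisson(lam=np.exp(-0.5 * (np.arange(10) - center) ** 2) * 50 + 1)
    return counts / counts.sum()

# Two classes of histograms: peaked near bin 2 ("type I") vs bin 7 ("type III").
train = [sample_hist(2) for _ in range(20)] + [sample_hist(7) for _ in range(20)]
labels = ["type I"] * 20 + ["type III"] * 20
pred = nn_classify(sample_hist(2), train, labels)
```

Any metric on distributions (e.g. Wasserstein) could be substituted for Hellinger without changing the framework.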
Forest resource information system
NASA Technical Reports Server (NTRS)
Mroczynski, R. P. (Principal Investigator)
1978-01-01
The author has identified the following significant results. A benchmark classification evaluation framework was implemented. The FRIS preprocessing activities were refined. Potential geo-based referencing systems were identified as components of FRIS.
Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating
Wang, Bingkun; Huang, Yongfeng; Li, Xing
2016-01-01
E-commerce is developing rapidly, and learning to take good advantage of the myriad reviews from online customers has become crucial to success, which calls for ever more accurate sentiment classification of these reviews. Fine-grained review rating prediction is therefore preferred over rough binary sentiment classification. Current review rating prediction methods fall into two main types. Methods based on review text content focus almost exclusively on textual content and seldom exploit the reviewers and items referenced in other relevant reviews. Methods based on collaborative filtering extract information from previous records in the reviewer-item rating matrix but ignore review text content. Here we propose a framework for review rating prediction that effectively combines the two, along with three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework outperforms previous methods.
Classification of large-scale fundus image data sets: a cloud-computing framework.
Roychowdhury, Sohini
2016-08-01
Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and processing. This paper presents a generalized method that finds optimal image-based feature sets to reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature-ranking strategies are used to identify the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using boosted decision tree and decision forest classifiers, respectively, in the Microsoft Azure Machine Learning Studio platform. For images from the DIARETDB1 data set, the 40 highest-ranked features classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. For classification of red lesion regions and of hemorrhages versus microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance borderline classification performance in automated screening systems.
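The "rank features, keep the top 40, then classify with a boosted tree" pipeline above can be sketched generically with scikit-learn's univariate F-score ranking. The fundus-image features themselves are replaced here by a synthetic dataset; only the top-40 selection mirrors the abstract.

```python
# Feature ranking + top-k selection feeding a boosted-tree classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=100, n_informative=15,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Rank all 100 features by ANOVA F-score and keep the 40 highest-ranked.
selector = SelectKBest(f_classif, k=40).fit(Xtr, ytr)
clf = GradientBoostingClassifier(random_state=0).fit(selector.transform(Xtr), ytr)
acc = clf.score(selector.transform(Xte), yte)
```

Training on the reduced 40-column matrix is what buys the computation-time savings the paper reports.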
Attribute-based Decision Graphs: A framework for multiclass data classification.
Bertini, João Roberto; Nicoletti, Maria do Carmo; Zhao, Liang
2017-01-01
Graph-based algorithms have been successfully applied in machine learning and data mining tasks. A simple but widely used approach to building graphs from vector-based data is to consider each data instance as a vertex and connect pairs of vertices using a similarity measure. Although this abstraction presents some advantages, such as representing the arbitrary shape of the original data, it also has drawbacks: it depends on the choice of a pre-defined distance metric and is biased by the local information among data instances. Aiming to explore alternative ways to build graphs from data, this paper proposes an algorithm for constructing a new type of graph, called the Attribute-based Decision Graph (AbDG). Given a vector-based data set, an AbDG is built by partitioning each data attribute's range into disjoint intervals and representing each interval as a vertex. Edges are then established between vertices from different attributes according to a pre-defined pattern. Classification is performed through a matching process between the attribute values of a new instance and the AbDG. Moreover, the AbDG provides an inner mechanism to handle missing attribute values, which expands its applicability. Results of classification tasks show that the AbDG is competitive with well-known multiclass algorithms. The main contribution of the proposed framework is combining the advantages of attribute-based and graph-based techniques to perform robust pattern-matching data classification, while permitting analysis of the input data using only a subset of its attributes.
Analyzing Sub-Classifications of Glaucoma via SOM Based Clustering of Optic Nerve Images.
Yan, Sanjun; Abidi, Syed Sibte Raza; Artes, Paul Habib
2005-01-01
We present a data mining framework to cluster optic nerve images obtained by Confocal Scanning Laser Tomography (CSLT) in normal subjects and patients with glaucoma. We use self-organizing maps and expectation maximization methods to partition the data into clusters that provide insights into potential sub-classification of glaucoma based on morphological features. We conclude that our approach provides a first step towards a better understanding of morphological features in optic nerve images obtained from glaucoma patients and healthy controls.
Active learning methods for interactive image retrieval.
Gosselin, Philippe Henri; Cord, Matthieu
2008-07-01
Active learning methods have attracted increasing interest in the statistical learning community. Initially developed within a classification framework, they are now being extended to handle multimedia applications. This paper provides algorithms within a statistical framework that extend active learning to online content-based image retrieval (CBIR). The classification framework is presented with experiments comparing several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the estimation of the boundary between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the generalization-error criterion used to optimize active learning selection is modified to better represent the CBIR objective of database ranking. Third, batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme for retrieving sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison with several other active strategies.
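The boundary-sensitive selection step above can be sketched as plain uncertainty sampling: at each feedback round, the "user" labels the unlabeled samples closest to the current SVM decision boundary. Dataset, pool sizes, and round counts are illustrative; RETIN's boundary correction and ranking-aware criterion are not modeled.

```python
# Active learning by uncertainty sampling near the SVM boundary (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed the labeled pool with 5 examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                                  # 5 feedback rounds
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    margin = np.abs(clf.decision_function(X[unlabeled]))
    picks = np.argsort(margin)[:10]                 # 10 most uncertain samples
    new = [unlabeled[i] for i in picks]
    labeled += new                                  # oracle labels them (y is known here)
    unlabeled = [i for i in unlabeled if i not in new]

final_acc = clf.score(X, y)
```

In a real CBIR loop the "oracle" is the user's relevance feedback rather than ground-truth labels.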
Zhang, Jianhua; Li, Sunan; Wang, Rubin
2017-01-01
In this paper, we address the Mental Workload (MWL) classification problem based on measured physiological data. First, we discuss the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for Convolutional Neural Networks (CNN). The base CNNs were tested according to five classification performance indices: Accuracy, Precision, F-measure, G-mean, and required training time. We then developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN models. For the ECNN design, three model-aggregation approaches (weighted averaging, majority voting, and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The MWL classification comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance, and that it features entirely automatic feature extraction and MWL classification compared with traditional machine learning methods.
Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan
2016-02-01
Accuracy plays a vital role in the medical field, as it concerns the life of an individual. Extensive research has been conducted on disease classification and prediction using machine learning techniques. However, there is no agreement on which classifier produces the best results: a specific classifier may be better than others for one dataset, while another classifier performs better on a different dataset. Ensembles of classifiers have proved to be an effective way to improve classification accuracy. In this research we present an ensemble framework with multi-layer classification using enhanced bagging and optimized weighting. The proposed model, called "HM-BagMoov", overcomes conventional performance bottlenecks by utilizing an ensemble of seven heterogeneous classifiers. The framework is evaluated on five heart disease datasets, four breast cancer datasets, two diabetes datasets, two liver disease datasets, and one hepatitis dataset obtained from public repositories. The analysis of the results shows that the ensemble framework achieved the highest accuracy, sensitivity, and F-measure when compared with individual classifiers for all the diseases, and also the highest accuracy when compared with state-of-the-art techniques. An application named "IntelliHealth" has also been developed based on the proposed model that may be used by hospitals and doctors for diagnostic advice.
The Landscape of long non-coding RNA classification
St Laurent, Georges; Wahlestedt, Claes; Kapranov, Philipp
2015-01-01
Advances in the depth and quality of transcriptome sequencing have revealed many new classes of long non-coding RNAs (lncRNAs). lncRNA classification has mushroomed to accommodate these new findings, even though the real dimensions and complexity of the non-coding transcriptome remain unknown. Although evidence of the functionality of specific lncRNAs continues to accumulate, conflicting, confusing, and overlapping terminology has fostered ambiguity and a lack of clarity in the field. The absence of a fundamental, unambiguous conceptual classification framework creates challenges in the annotation and interpretation of non-coding transcriptome data and may undermine the integration of new genomic methods and datasets aimed at unraveling lncRNA function. Here, we review existing lncRNA classifications, nomenclature, and terminology, and then describe the conceptual guidelines that have emerged for classification and functional annotation based on expanding and more comprehensive use of large systems-biology-based datasets.
A Bio-Inspired Herbal Tea Flavour Assessment Technique
Zakaria, Nur Zawatil Isqi; Masnan, Maz Jamilah; Zakaria, Ammar; Shakaff, Ali Yeon Md
2014-01-01
Herbal-based products are becoming a widespread production trend among manufacturers for domestic and international markets. As production increases to meet market demand, it is crucial for manufacturers to ensure that their products meet specific criteria and fulfil the quality intended by the quality controller. One well-known herbal-based product is herbal tea. This paper investigates bio-inspired flavour assessment in a data-fusion framework involving an e-nose and an e-tongue. The objectives are to attain good classification of different types and brands of herbal tea, of different flavour-masking effects, and of different concentrations of herbal tea. Two data-fusion levels were employed in this research: low-level data fusion and intermediate-level data fusion. Four classification approaches (LDA, SVM, KNN, and PNN) were examined in search of the best classifier for the research objectives. To evaluate the classifiers' performance, error estimators based on k-fold cross-validation and leave-one-out were applied. Classification based on GC-MS TIC data was also included as a comparison with the fusion approaches. Generally, KNN outperformed the other classification techniques for the three flavour assessments at both fusion levels, whereas the classification results based on GC-MS TIC data varied.
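Low-level data fusion as used above amounts to concatenating the raw feature vectors of the two instruments before classification. In this sketch the e-nose and e-tongue readings are simulated as two feature matrices; sensor counts and the three-class structure are invented for illustration, and KNN with k-fold cross-validation follows the study's best-performing setup.

```python
# Low-level fusion (feature concatenation) of two sensor arrays + KNN.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, classes = 90, 3                          # 30 samples per "tea brand"
y = np.repeat(np.arange(classes), n // classes)

# Simulated instrument readings whose means differ by class.
e_nose = rng.normal(loc=y[:, None], scale=0.5, size=(n, 8))        # 8 gas sensors
e_tongue = rng.normal(loc=2.0 * y[:, None], scale=0.7, size=(n, 7))  # 7 taste sensors

X_fused = np.hstack([e_nose, e_tongue])     # low-level fusion = concatenation
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_fused, y, cv=5)
mean_acc = float(scores.mean())
```

Intermediate-level fusion would instead extract or select features from each instrument separately before combining them.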
Local linear discriminant analysis framework using sample neighbors.
Fan, Zizhu; Xu, Yong; Zhang, David
2011-07-01
The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. The algorithms of LDA usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes are Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without needing to satisfy the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to different types of local data structure, our LLDA framework incorporates several different forms of linear feature extraction approaches, such as the classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample. They are suitable for learning large-scale databases especially when the input data dimensions are very high and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms can obtain good classification results.
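A rough sketch of the local-LDA idea described above: for each test sample, LDA is fitted only on its k nearest training samples rather than on the whole training set, so the learned projection reflects local structure. The neighborhood size and dataset are assumptions, and the paper's vector/matrix variants and exact rules are not reproduced.

```python
# Local LDA: fit discriminant analysis on each test sample's neighborhood.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def local_lda_predict(x, k=50):
    # Select the k training samples nearest to x.
    idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    if len(set(ytr[idx])) == 1:
        return ytr[idx][0]            # pure neighborhood: return its label
    model = LinearDiscriminantAnalysis().fit(Xtr[idx], ytr[idx])
    return model.predict([x])[0]

preds = np.array([local_lda_predict(x) for x in Xte])
acc = float((preds == yte).mean())
```

This matches the abstract's point that only a small portion of the training set is used per test sample, which keeps large-scale, high-dimensional problems tractable.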
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vernon, Christopher R.; Arntzen, Evan V.; Richmond, Marshall C.
Assessing the environmental benefits of proposed flow modification to large rivers provides invaluable insight into future hydropower project operations and relicensing activities. Providing a means to quantitatively define flow-ecology relationships is integral to establishing flow regimes that are mutually beneficial to power production and ecological needs. To complement this effort, an opportunity has arisen to create versatile tools that can be applied to broad geographic areas. In particular, integration with efforts standardized within the ecological limits of hydrologic alteration (ELOHA) is highly advantageous (Poff et al. 2010). This paper presents a geographic information system (GIS) framework for large river classification that houses a base geomorphic classification that is both flexible and accurate, allowing for full integration with other hydrologic models focused on addressing ELOHA efforts. A case study is also provided that integrates publicly available National Hydrography Dataset Plus Version 2 (NHDPlusV2) data, Modular Aquatic Simulation System two-dimensional (MASS2) hydraulic data, and field-collected data into the framework to produce a suite of flow-ecology related outputs. The case study objective was to establish areas of optimal juvenile salmonid rearing habitat under varying flow regimes throughout an impounded portion of the lower Snake River, USA (Figure 1), as an indicator of sites with potential to create additional shallow-water habitat. Additionally, an alternative hydrologic classification usable throughout the contiguous United States, which can be coupled with the geomorphic aspect of this framework, is also presented. This framework provides the user with the ability to integrate hydrologic and ecologic data into its base geomorphic component within a GIS to output spatiotemporally variable flow-ecology relationship scenarios.
NASA Astrophysics Data System (ADS)
Pipaud, Isabel; Lehmkuhl, Frank
2017-09-01
In the field of geomorphology, automated extraction and classification of landforms is one of the most active research areas. Until the late 2000s, this task has primarily been tackled using pixel-based approaches. As these methods consider pixels and pixel neighborhoods as the sole basic entities for analysis, they cannot account for the irregular boundaries of real-world objects. Object-based analysis frameworks emerging from the field of remote sensing have been proposed as an alternative approach, and were successfully applied in case studies falling in the domains of both general and specific geomorphology. In this context, the a-priori selection of scale parameters or bandwidths is crucial for the segmentation result, because inappropriate parametrization will either result in over-segmentation or insufficient segmentation. In this study, we describe a novel supervised method for delineation and classification of alluvial fans, and assess its applicability using an SRTM 1 arc-second DEM scene depicting a section of the north-eastern Mongolian Altai, located in northwest Mongolia. The approach is premised on the application of mean-shift segmentation and the use of a one-class support vector machine (SVM) for classification. To consider variability in terms of alluvial fan dimension and shape, segmentation is performed repeatedly for different weightings of the incorporated morphometric parameters as well as different segmentation bandwidths. The final classification layer is obtained by selecting, for each real-world object, the most appropriate segmentation result according to fuzzy membership values derived from the SVM classification. Our results show that mean-shift segmentation and SVM-based classification provide an effective framework for delineation and classification of a particular landform.
Variable bandwidths and terrain parameter weightings were identified as being crucial for consideration of intra-class variability, and, in turn, for a constantly high segmentation quality. Our analysis further reveals that incorporation of morphometric parameters quantifying specific morphological aspects of a landform is indispensable for developing an accurate classification scheme. Alluvial fans exhibiting accentuated composite morphologies were identified as a major challenge for automatic delineation, as they cannot be fully captured by a single segmentation run. There is, however, a high probability that this shortcoming can be overcome by enhancing the presented approach with a routine merging fan sub-entities based on their spatial relationships.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Different feature extraction schemes can then be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one based on a single-stage network over hand-crafted features, and the other based on a multistage network that can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments on several challenging data sets for visual classification demonstrate the effectiveness of the proposed method.
Bokulich, Nicholas A; Kaehler, Benjamin D; Rideout, Jai Ram; Dillon, Matthew; Bolyen, Evan; Knight, Rob; Huttley, Gavin A; Gregory Caporaso, J
2018-05-17
Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier ( https://github.com/qiime2/q2-feature-classifier ), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH, and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive-Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit ( https://github.com/caporaso-lab/tax-credit-data ). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
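A toy sketch of the scikit-learn naive Bayes route to taxonomy classification mentioned above: sequences are turned into k-mer count vectors and fed to `MultinomialNB`. Real 16S/ITS classification needs curated reference databases and tuned k-mer lengths; the two "taxa" and sequences below are fabricated.

```python
# Naive Bayes taxonomy classification over k-mer counts (toy sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_seqs = ["ACGTACGTGG", "ACGTACGAGG", "TTGGCCTTAA", "TTGGCCATAA"]
train_taxa = ["taxonA", "taxonA", "taxonB", "taxonB"]

# Character n-grams act as k-mers; k=4 keeps the toy data informative
# (longer k-mers, e.g. 8-mers, would be typical for real marker genes).
kmers = CountVectorizer(analyzer="char", ngram_range=(4, 4))
clf = make_pipeline(kmers, MultinomialNB()).fit(train_seqs, train_taxa)
pred = clf.predict(["ACGTACGTGG"])[0]
```

This mirrors the structure, though not the tuned parameters, of the q2-feature-classifier scikit-learn pipeline; the paper's point about parameter tuning applies directly to the k-mer length and NB smoothing choices here.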
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on peak features drawn from several models. However, no previous study has quantified the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that peak detection accuracy can be improved up to 99.90% and 98.59% for training and testing, respectively, compared with the framework without feature selection adaptation. Additionally, the framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
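A highly simplified binary-PSO feature-selection sketch of the kind of search described above: particle positions pass through a sigmoid threshold to give feature-inclusion masks, and fitness is cross-validated KNN accuracy on the selected features. Swarm size, iteration count, and the classifier are illustrative assumptions; the paper's EEG features and the RA-PSO asynchronous update are not modeled.

```python
# Minimal binary-PSO feature selection with CV accuracy as fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=15, n_informative=5,
                           random_state=0)
n_particles, n_iter, dim = 8, 10, X.shape[1]

def mask_of(p):
    # Sigmoid threshold turns a continuous position into a feature mask.
    return (1.0 / (1.0 + np.exp(-p))) > 0.5

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(3)
    return float(cross_val_score(clf, X[:, mask], y, cv=3).mean())

pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(mask_of(p)) for p in pos])
g = int(np.argmax(pbest_fit))
gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(n_particles):
        f = fitness(mask_of(pos[i]))
        if f > pbest_fit[i]:
            pbest[i], pbest_fit[i] = pos[i].copy(), f
            if f > gbest_fit:
                gbest, gbest_fit = pos[i].copy(), f

best_mask = mask_of(gbest)
```

In the paper's setting the fitness would also include the classifier's parameters, so the swarm optimizes feature subset and classifier settings jointly.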
Classification of Dynamical Diffusion States in Single Molecule Tracking Microscopy
Bosch, Peter J.; Kanger, Johannes S.; Subramaniam, Vinod
2014-01-01
Single molecule tracking of membrane proteins by fluorescence microscopy is a promising method to investigate dynamic processes in live cells. Translating the trajectories of proteins to biological implications, such as protein interactions, requires the classification of protein motion within the trajectories. Spatial information of protein motion may reveal where the protein interacts with cellular structures, because binding of proteins to such structures often alters their diffusion speed. For dynamic diffusion systems, we provide an analytical framework to determine in which diffusion state a molecule is residing during the course of its trajectory. We compare different methods for the quantification of motion to utilize this framework for the classification of two diffusion states (two populations with different diffusion speed). We found that a gyration quantification method and a Bayesian statistics-based method are the most accurate in diffusion-state classification for realistic experimentally obtained datasets, of which the gyration method is much less computationally demanding. After classification of the diffusion, the lifetime of the states can be determined, and images of the diffusion states can be reconstructed at high resolution. Simulations validate these applications. We apply the classification and its applications to experimental data to demonstrate the potential of this approach to obtain further insights into the dynamics of cell membrane proteins.
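The gyration method mentioned above can be sketched directly: a trajectory switching between slow and fast 2-D diffusion is simulated, and the radius of gyration computed in a sliding window separates the two states. Diffusion step sizes, window length, and the midpoint threshold are illustrative assumptions.

```python
# Two-state diffusion classification via sliding-window radius of gyration.
import numpy as np

rng = np.random.default_rng(0)
n, w = 2000, 20                                 # trajectory length, window size
state = np.repeat([0, 1], n // 2)               # 0 = slow, 1 = fast
sigma = np.where(state == 0, 0.05, 0.5)         # per-step displacement std
traj = np.cumsum(rng.normal(scale=sigma[:, None], size=(n, 2)), axis=0)

def gyration_radius(seg):
    # RMS distance of window positions from their centroid.
    c = seg.mean(axis=0)
    return float(np.sqrt(((seg - c) ** 2).sum(axis=1).mean()))

rg = np.array([gyration_radius(traj[i:i + w]) for i in range(n - w)])

# Threshold halfway between the mean Rg of the two halves, then classify.
threshold = 0.5 * (rg[: n // 2 - w].mean() + rg[n // 2:].mean())
pred_fast = rg > threshold
```

With the states classified, state lifetimes follow from run lengths of `pred_fast`, matching the lifetime analysis described in the abstract.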
Fast and effective characterization of 3D region of interest in medical image data
NASA Astrophysics Data System (ADS)
Kontos, Despina; Megalooikonomou, Vasileios
2004-05-01
We propose a framework for detecting, characterizing and classifying spatial Regions of Interest (ROIs) in medical images, such as tumors and lesions in MRI or activation regions in fMRI. A necessary step prior to classification is efficient extraction of discriminative features. For this purpose, we apply a characterization technique especially designed for spatial ROIs. The main idea of this technique is to extract a k-dimensional feature vector using concentric spheres in 3D (or circles in 2D) radiating out of the ROI's center of mass. These vectors form characterization signatures that can be used to represent the initial ROIs. We focus on classifying fMRI ROIs obtained from a study that explores neuroanatomical correlates of semantic processing in Alzheimer's disease (AD). We detect a ROI highly associated with AD and apply the feature extraction technique with different experimental settings. We seek to distinguish control from patient samples. We study how classification can be performed using the extracted signatures as well as how different experimental parameters affect classification accuracy. The obtained classification accuracy ranged from 82% to 87% (based on the selected ROI) suggesting that the proposed classification framework can be potentially useful in supporting medical decision-making.
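The concentric-sphere signature described above can be sketched as follows: a 3-D ROI is summarized by a k-dimensional vector whose i-th entry is the fraction of ROI voxels inside the i-th sphere radiating from the ROI's center of mass. The grid size, ellipsoidal ROI, and choice of k are assumptions for illustration.

```python
# k-dimensional concentric-sphere signature of a 3-D region of interest.
import numpy as np

# Synthetic ellipsoidal ROI on a 21x21x21 voxel grid.
roi = np.zeros((21, 21, 21), dtype=bool)
z, yy, xx = np.ogrid[:21, :21, :21]
roi[((z - 10) / 6) ** 2 + ((yy - 10) / 4) ** 2 + ((xx - 10) / 8) ** 2 <= 1] = True

def sphere_signature(mask, k=8):
    coords = np.argwhere(mask)
    center = coords.mean(axis=0)                 # ROI center of mass
    dist = np.linalg.norm(coords - center, axis=1)
    radii = np.linspace(dist.max() / k, dist.max(), k)
    # Fraction of ROI voxels captured inside each concentric sphere.
    return np.array([(dist <= r).mean() for r in radii])

sig = sphere_signature(roi)
```

The resulting signatures are the fixed-length feature vectors that feed the classification stage; intensity-weighted variants would replace the voxel fractions with mean intensities per shell.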
Supervised Semantic Classification for Nuclear Proliferation Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Cheriyadat, Anil M; Gleason, Shaun Scott
2010-01-01
Existing feature extraction and classification approaches are not suitable for monitoring proliferation activity using high-resolution multi-temporal remote sensing imagery. In this paper we present a supervised semantic labeling framework based on the Latent Dirichlet Allocation method. This framework is used to analyze over 120 images collected under different spatial and temporal settings over the globe, representing three major semantic categories: airports, nuclear power plants, and coal power plants. Initial experimental results show a reasonable discrimination of these three categories even though coal and nuclear images share highly common and overlapping objects. This research also identified several research challenges associated with nuclear proliferation monitoring using high resolution remote sensing images.
Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.
2015-01-01
Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), are crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601
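The sparse-logistic-regression step above can be sketched generically: brain networks are represented as flattened connectivity matrices and classified with an L1-penalized logistic regression, which zeroes out uninformative connections. The matrix size and the single group-affected edge are synthetic assumptions, and the paper's higher-order SVD compression stage is omitted.

```python
# Sparse (L1) logistic regression on flattened brain-network matrices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, nodes = 80, 12
y = np.repeat([0, 1], n // 2)                   # e.g. control vs MCI
nets = rng.normal(size=(n, nodes, nodes))       # synthetic connectivity matrices
nets[y == 1, 2, 5] += 2.0                       # one group-affected connection
X = nets.reshape(n, -1)                         # flatten each network

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_selected = int(np.count_nonzero(clf.coef_))   # connections kept by the L1 penalty
acc = clf.score(X, y)
```

The nonzero coefficients indicate which connections drive the classification, which is what makes the sparse model interpretable as a biomarker screen.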
Heterogeneous data fusion for brain tumor classification.
Metsis, Vangelis; Huang, Heng; Andronesi, Ovidiu C; Makedon, Fillia; Tzika, Aria
2012-10-01
Current research in biomedical informatics involves analysis of multiple heterogeneous data sets. This includes patient demographics, clinical and pathology data, treatment history, patient outcomes as well as gene expression, DNA sequences and other information sources such as gene ontology. Analysis of these data sets could lead to better disease diagnosis, prognosis, treatment and drug discovery. In this report, we present a novel machine learning framework for brain tumor classification based on heterogeneous data fusion of metabolic and molecular datasets, including state-of-the-art high-resolution magic angle spinning (HRMAS) proton (1H) magnetic resonance spectroscopy and gene transcriptome profiling, obtained from intact brain tumor biopsies. Our experimental results show that our novel framework outperforms any analysis using individual dataset.
USE OF WATERSHED CLASSIFICATION IN MONITORING FRAMEWORKS FOR THE WESTERN LAKE SUPERIOR BASIN
In this case study we predicted stream sensitivity to nonpoint source pollution based on the nonlinear responses of hydrologic regimes and associated loadings of nonpoint source pollutants to catchment properties. We assessed two hydrologically-based thresholds of impairment, on...
NASA Technical Reports Server (NTRS)
Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert
2006-01-01
One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.
ERIC Educational Resources Information Center
Sanches-Ferreira, Manuela; Silveira-Maia, Mónica; Alves, Sílvia
2014-01-01
Portugal was the first country to decree the mandatory use of the International Classification of Functioning, Disability and Health: Child and Youth (ICF-CY) framework for guiding the special education assessment process and to base eligibility decision-making on students' functioning profiles--in contrast with traditional approaches centred on…
A Visual mining based framework for classification accuracy estimation
NASA Astrophysics Data System (ADS)
Arun, Pattathal Vijayakumar
2013-12-01
Classification techniques have been widely used in different remote sensing applications, but correct classification of mixed pixels remains a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. Used in integration, these tools provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.
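Bilinear resampling, which the study found best for preserving radiometric characteristics, can be sketched as follows. This is a plain NumPy implementation for single-band images; clamping at the image border is an assumption.

```python
import numpy as np

def bilinear_resample(img, out_h, out_w):
    """Resample a 2-D image to (out_h, out_w) by bilinear interpolation:
    each output pixel is a distance-weighted mean of its 4 nearest inputs."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Because each output value is a convex combination of neighboring inputs, no new extreme radiometric values are introduced, which is the property the abstract highlights.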
McKenna, James E.; Schaeffer, Jeffrey S.; Stewart, Jana S.; Slattery, Michael T.
2015-01-01
Classifications are typically specific to particular issues or areas, leading to patchworks of subjectively defined spatial units. Stream conservation is hindered by the lack of a universal habitat classification system and would benefit from an independent hydrology-guided spatial framework of units encompassing all aquatic habitats at multiple spatial scales within large regions. We present a system that explicitly separates the spatial framework from any particular classification developed from the framework. The framework was constructed from landscape variables that are hydrologically and biologically relevant, covered all space within the study area, and was nested hierarchically and spatially related at scales ranging from the stream reach to the entire region; classifications may be developed from any subset of the 9 basins, 107 watersheds, 459 subwatersheds, or 10,000s of valley segments or stream reaches. To illustrate the advantages of this approach, we developed a fish-guided classification generated from a framework for the Great Lakes region that produced a mosaic of habitat units which, when aggregated, formed larger patches of more general conditions at progressively broader spatial scales. We identified greater than 1,200 distinct fish habitat types at the valley segment scale, most of which were rare. Comparisons of biodiversity and species assemblages are easily examined at any scale. This system can identify and quantify habitat types, evaluate habitat quality for conservation and/or restoration, and assist managers and policymakers with prioritization of protection and restoration efforts. Similar spatial frameworks and habitat classifications can be developed for any organism in any riverine ecosystem.
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
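The classification-based policy selection can be illustrated with a toy sketch: classify a trace of file offsets and map the detected pattern to caching and prefetching policies. The pattern labels and the policy table below are hypothetical, not the paper's actual taxonomy.

```python
def classify_access(offsets, block=4096):
    """Classify an access-pattern trace as 'sequential', 'strided', or 'random'
    from the deltas between successive byte offsets."""
    deltas = [b - a for a, b in zip(offsets, offsets[1:])]
    if all(d == block for d in deltas):
        return "sequential"
    if len(set(deltas)) == 1:
        return "strided"
    return "random"

# Hypothetical (prefetch, cache-replacement) policy per detected pattern
POLICY = {"sequential": ("readahead", "lru"),
          "strided":    ("strided-prefetch", "lru"),
          "random":     ("no-prefetch", "mru")}

def choose_policy(offsets):
    return POLICY[classify_access(offsets)]
```

Performance sensors would then close the loop by tuning policy parameters (e.g. readahead depth) for the specific system environment.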
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
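One of the handcrafted baselines, gray level co-occurrence matrix (GLCM) statistics, can be sketched minimally as follows; the offset, number of gray levels, and the single contrast feature are illustrative choices.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to joint probabilities P(i, j)."""
    h, w = len(image), len(image[0])
    M = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                M[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in M]

def glcm_contrast(M):
    """Haralick contrast feature: sum over (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(M) for j, p in enumerate(row))
```

Such scalar texture features, computed per image tile, are what the random forest baseline consumes, in contrast to the CNN which learns its features directly.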
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
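The ECOC decoding step common to both approaches can be sketched as weighted code-word matching; the per-classifier weights of the first approach appear as the optional weights argument, and the toy codebook below is a simple one-vs-rest code.

```python
def ecoc_predict(codebook, bit_preds, weights=None):
    """Decode a multi-class label from binary-classifier outputs.

    codebook: {class: tuple of +/-1 code bits, one per binary classifier}
    bit_preds: list of +/-1 outputs from the binary classifiers
    weights: optional per-classifier weights (uniform if omitted)
    """
    w = weights or [1.0] * len(bit_preds)
    def score(code):
        # weighted agreement between a class's code word and the observed bits
        return sum(wi * ci * bi for wi, ci, bi in zip(w, code, bit_preds))
    return max(codebook, key=lambda c: score(codebook[c]))
```

Down-weighting an unreliable binary classifier lets the ensemble absorb its "bit inversion errors", the analogy to channel coding drawn by the second approach.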
ERIC Educational Resources Information Center
World Health Organization, Geneva (Switzerland).
This classification system is intended to offer a conceptual framework for information; the framework is relevant to the long-term consequences of disease, injuries or disorders, and applicable both to personal health care, including early identification and prevention, and to the mitigation of environmental and societal barriers. It begins with…
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei
2018-04-01
Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and how to accurately detect the objects in HSR remote sensing imagery is a critical problem. Due to the powerful feature extraction and representation capability of deep learning, the deep learning based region proposal generation and object detection integrated framework has greatly promoted the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, due to the translation caused by the convolution operation in the convolutional neural network (CNN), although the performance of the classification stage is seldom influenced, the localization accuracies of the predicted bounding boxes in the detection stage are easily influenced. The dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. In order to further improve the performance of the region proposal generation and object detection integrated framework for HSR remote sensing imagery object detection, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of a fully convolutional network (FCN), built on a residual network, and adopts a position-sensitive balancing strategy to resolve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage.
In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.
Constrained maximum consistency multi-path mitigation
NASA Astrophysics Data System (ADS)
Smith, George B.
2003-10-01
Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented [Smith, J. Acoust. Soc. Am. 107 (2000)] in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
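The cross-relation underlying EVAM-style processing can be verified with a small sketch: for a common source s with channel outputs x1 = s*h1 and x2 = s*h2, the convolutions x1*h2 and x2*h1 coincide, so their squared difference is a residual that vanishes at the true channel responses. The filter values below are illustrative.

```python
def convolve(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cross_relation_residual(x1, x2, h1, h2):
    """Squared-error residual of the cross-relation x1*h2 == x2*h1.
    Zero (up to rounding) when h1, h2 are the true channel responses."""
    lhs = convolve(x1, h2)
    rhs = convolve(x2, h1)
    return sum((a - b) ** 2 for a, b in zip(lhs, rhs))
```

Blind deconvolution methods in this family search for the filter pair minimizing such a residual over the received data.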
Improved biliary detection and diagnosis through intelligent machine analysis.
Logeswaran, Rajasvaran
2012-09-01
This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
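The structure-identification stage relies in part on segment-based region growing, which can be sketched as follows; 4-connectivity and an absolute intensity tolerance relative to the seed are assumptions, and the I-BDeDIMA scheme's actual criteria may differ.

```python
def region_grow(image, seed, tol=1.0):
    """Segment-based region growing: collect the 4-connected pixels whose
    intensity stays within tol of the seed pixel's value."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if abs(image[y][x] - base) > tol:
            continue
        region.add((y, x))
        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```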
Impact of Information based Classification on Network Epidemics
Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini
2016-01-01
Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom-based classification. This is the first attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five-class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work, three real network datasets with 22,002, 22,469 and 22,607 undirected edges, respectively, are used. The datasets show that classification-based prevention given in the model can have a good role in containing network epidemics. Further simulation-based experiments are used with a three-category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348
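The epidemic-threshold behavior can be illustrated with a much simpler SIR stand-in for the paper's 1-n-n-1 model: the epidemic takes off only when the reproduction number R0 = beta/gamma exceeds 1. All parameter values below are illustrative.

```python
def simulate_sir(beta, gamma, i0=0.01, dt=0.01, steps=10000):
    """Euler integration of a basic SIR compartment model (a simplified
    stand-in for the paper's differential epidemic model)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i
        dr = gamma * i              # infected nodes are removed/patched
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        peak = max(peak, i)
    return s, i, r, peak
```

Classification-based prevention, in this simplified picture, acts by lowering the effective beta (or raising gamma) until R0 drops below the threshold.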
A conceptual framework and classification of capability areas for business process maturity
NASA Astrophysics Data System (ADS)
Van Looy, Amy; De Backer, Manu; Poels, Geert
2014-03-01
The article elaborates on business process maturity, which indicates how well an organisation can perform based on its business processes, i.e. on its way of working. This topic is of paramount importance for managers who try to excel in today's competitive world. Hence, business process maturity is an emerging research field. However, no consensus exists on the capability areas (or skills) needed to excel. Moreover, their theoretical foundation and synergies with other fields are frequently neglected. To overcome this gap, our study presents a conceptual framework with six main capability areas and 17 sub-areas. It draws on theories regarding the traditional business process lifecycle, which are supplemented by recognised organisation management theories. The comprehensiveness of this framework is validated by mapping 69 business process maturity models (BPMMs) to the identified capability areas, based on content analysis. Nonetheless, as no consensus exists among the collected BPMMs either, a classification of different maturity types is proposed, based on cluster analysis and discriminant analysis. Consequently, the findings contribute to the grounding of business process literature. Possible future avenues include evaluating existing BPMMs, directing new BPMMs, or investigating which combinations of capability areas (i.e. maturity types) contribute more to performance than others.
Argumentation Based Joint Learning: A Novel Ensemble Learning Approach
Xu, Junyi; Yao, Li; Li, Le
2015-01-01
Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high quality knowledge for the ensemble classifier and improve classification performance. PMID:25966359
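As a point of contrast with AMAJL's argumentation-based integration, the conventional ensemble strategy it replaces can be sketched as weighted voting over base classifiers. The classifiers and weights here are hypothetical.

```python
from collections import defaultdict

def ensemble_predict(classifiers, weights, x):
    """Weighted voting over base classifiers: each classifier casts a vote
    for its predicted label, scaled by its weight; the heaviest label wins.
    (AMAJL replaces this voting step with multi-agent argumentation.)"""
    votes = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        votes[clf(x)] += w
    return max(votes, key=votes.get)
```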
Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades
2013-01-01
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features, such that the top-ranked features most likely carry the most information for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism microarray (SNParray), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework was equal to and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
Predicting Player Position for Talent Identification in Association Football
NASA Astrophysics Data System (ADS)
Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab
2017-08-01
This paper is set to introduce a new framework from the perspective of Computer Science for identifying talents in the sport of football based on the players’ individual qualities: physical, mental, and technical. The combinations of qualities as assessed by coaches are then used to predict the position in a match that suits the player best in a particular team formation. Evaluation of the proposed framework is two-fold; quantitatively via classification experiments to predict player position, and qualitatively via a Talent Identification Site developed to achieve the same goal. Results from the classification experiments using Bayesian Networks, Decision Trees, and K-Nearest Neighbor have shown an average of 98% accuracy, which will promote consistency in decision-making through elimination of personal bias in team selection. The positive reviews on the Football Identification Site based on user acceptance evaluation also indicate that the framework is sufficient to serve as the basis of developing an intelligent team management system in different sports, whereby growth and performance of sport players can be monitored and identified.
CLASSIFICATION FRAMEWORK FOR COASTAL SYSTEMS
U.S. Environmental Protection Agency. Classification Framework for Coastal Systems. EPA/600/R-04/061. U.S. Environmental Protection Agency, National Health and Environmental Effects Research Laboratory, Atlantic Ecology Division, Narragansett, RI, Gulf Ecology Division, Gulf Bree...
The need for international nursing diagnosis research and a theoretical framework.
Lunney, Margaret
2008-01-01
To describe the need for nursing diagnosis research and a theoretical framework for such research. A linguistics theory served as the foundation for the theoretical framework. Reasons for additional nursing diagnosis research are: (a) file names are needed for implementation of electronic health records, (b) international consensus is needed for an international classification, and (c) continuous changes occur in clinical practice. A theoretical framework used by the author is explained. Theoretical frameworks provide support for nursing diagnosis research. Linguistics theory served as an appropriate exemplar theory to support nursing research. Additional nursing diagnosis studies based upon a theoretical framework are needed and linguistics theory can provide an appropriate structure for this research.
Na, X D; Zang, S Y; Wu, C S; Li, W L
2015-11-01
Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. Despite some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches within the frameworks of pixel-based and object-based classification. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions of forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. For the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classification using the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research proves that object-based classification with RF using optical, radar, and topographical data improves the mapping accuracy of land covers and provides a feasible approach to discriminating forested wetlands from the other land cover types in forested areas.
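A bare-bones version of the KNN step, together with the per-object feature aggregation that distinguishes object-based from per-pixel classification, might look like this; Euclidean distance and mean aggregation are assumptions.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour vote on feature vectors."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def object_features(pixels, segments):
    """Aggregate per-pixel features to per-object means, so classification
    operates on image objects (segments) instead of individual pixels."""
    groups = {}
    for p, s in zip(pixels, segments):
        groups.setdefault(s, []).append(p)
    return {s: [sum(col) / len(col) for col in zip(*ps)]
            for s, ps in groups.items()}
```

Averaging within segments suppresses per-pixel speckle (particularly from the SAR bands), which is one reason the object-based runs achieved higher accuracy.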
Fusion and Sense Making of Heterogeneous Sensor Network and Other Sources
2017-03-16
multimodal fusion framework that uses both training data and web resources for scene classification, the experimental results on the benchmark datasets...show that the proposed text-aided scene classification framework could significantly improve classification performance. Experimental results also show...human whose adaptability is achieved by reliability-dependent weighting of different sensory modalities. Experimental results show that the proposed...
Network Visualization Project (NVP)
2016-07-01
network visualization, network traffic analysis, network forensics. Dshell is a command-line framework used for network forensic analysis. Dshell processes existing pcap files and filters output information based on...
Word-level language modeling for P300 spellers based on discriminative graphical models
NASA Astrophysics Data System (ADS)
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
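The word-level prior can be sketched as a Bayesian combination of per-letter evidence with a vocabulary prior; because scoring is per word, strong evidence at one position can outweigh weak evidence at earlier ones, which is the error-correction effect described above. The likelihood values are illustrative, not learned from EEG.

```python
def word_posterior(letter_likelihoods, vocab, prior=None):
    """Posterior over candidate words given per-position letter evidence.

    letter_likelihoods: list (one per position) of {letter: P(evidence | letter)}
    vocab: candidate words, all the same length here for simplicity
    prior: optional {word: P(word)} language prior (uniform if omitted)
    """
    prior = prior or {w: 1.0 / len(vocab) for w in vocab}
    scores = {}
    for w in vocab:
        p = prior[w]
        for pos, ch in enumerate(w):
            p *= letter_likelihoods[pos].get(ch, 1e-9)
        scores[w] = p
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}
```

Restricting `vocab` to a limited communication vocabulary is exactly the scenario the paper targets, and it is what allows fewer stimulus flashes per letter.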
Placidi, Giuseppe; Petracca, Andrea; Spezialetti, Matteo; Iacoviello, Daniela
2016-01-01
A Brain Computer Interface (BCI) allows communication for impaired people unable to express their intention with common channels. Electroencephalography (EEG) represents an effective tool to allow the implementation of a BCI. The present paper describes a modular framework for the implementation of the graphic interface for binary BCIs based on the selection of symbols in a table. The proposed system is also designed to reduce the time required for writing text. This is achieved by including a motivational tool, necessary to improve the quality of the collected signals, and a predictive module based on the frequency of occurrence of letters in a language and of words in a dictionary. The proposed framework is described in a top-down approach through its modules: signal acquisition, analysis, classification, communication, visualization, and predictive engine. The framework, being modular, can be easily modified to personalize the graphic interface to the needs of the subject who has to use the BCI, and it can be integrated with different classification strategies, communication paradigms, and dictionaries/languages. The implementation of a scenario and some experimental results on healthy subjects are also reported and discussed: the modules of the proposed scenario can be used as a starting point for further developments, and application on severely disabled people under the guide of specialized personnel.
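The predictive module based on letter and word frequencies can be sketched as frequency-ranked prefix completion; the dictionary and counts below are hypothetical.

```python
def suggest(prefix, freq, n=3):
    """Suggest the n most frequent dictionary words extending the prefix,
    so the user can select a whole word instead of spelling every letter."""
    cands = [(count, w) for w, count in freq.items() if w.startswith(prefix)]
    return [w for _, w in sorted(cands, key=lambda t: (-t[0], t[1]))][:n]
```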
Braem, G; De Vliegher, S; Supré, K; Haesebrouck, F; Leroy, F; De Vuyst, L
2011-01-10
Due to significant financial losses in the dairy cattle farming industry caused by mastitis and the possible influence of coagulase-negative staphylococci (CNS) in the development of this disease, accurate identification methods are needed that untangle the different species of the diverse CNS group. In this study, 39 Staphylococcus type strains and 253 field isolates were subjected to (GTG)5-PCR fingerprinting to construct a reference framework for the classification and identification of different CNS from (sub)clinical milk samples and teat apices swabs. Validation of the reference framework was performed by dividing the field isolates in two separate groups and testing whether one group of field isolates, in combination with type strains, could be used for a correct classification and identification of a second group of field isolates. (GTG)5-PCR fingerprinting achieved a typeability of 94.7% and an accuracy of 94.3% compared to identifications based on gene sequencing. The study shows the usefulness of the method to determine the identity of bovine Staphylococcus species, provided an identification framework updated with field isolates is available. Copyright © 2010 Elsevier B.V. All rights reserved.
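Fingerprint-based identification against such a reference framework can be sketched with Pearson correlation, a similarity measure commonly used for comparing densitometric band profiles; the species names, profiles, and threshold below are illustrative.

```python
def pearson(a, b):
    """Pearson correlation between two band-intensity profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def identify(profile, reference_frame, threshold=0.8):
    """Assign the species of the most similar reference fingerprint,
    or None if nothing in the framework is similar enough."""
    best = max(reference_frame, key=lambda r: pearson(profile, r[1]))
    return best[0] if pearson(profile, best[1]) >= threshold else None
```

Updating `reference_frame` with validated field isolates, as the study recommends, widens the within-species variation the framework can absorb.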
Real-time classification of vehicles by type within infrared imagery
NASA Astrophysics Data System (ADS)
Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.
2016-10-01
Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
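The photogrammetric range estimate and the tracking step can be sketched as follows: the pinhole relation distance = focal_length x real_height / pixel_height gives target range from the classified vehicle type's known dimensions, and a constant-velocity Kalman filter then smooths one coordinate of the resulting trajectory. All numeric values are illustrative.

```python
def range_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole approximation: range = f * H / h."""
    return focal_px * real_height_m / pixel_height

class Kalman1D:
    """Constant-velocity Kalman filter on a single coordinate."""
    def __init__(self, q=1e-3, r=0.5):
        self.x = [0.0, 0.0]                    # position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r                  # process / measurement noise

    def step(self, z, dt=1.0):
        # predict with the constant-velocity model
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        # update with the position measurement z
        k0 = P00 / (P00 + self.r)
        k1 = P10 / (P00 + self.r)
        self.x = [x0 + k0 * (z - x0), x1 + k1 * (z - x0)]
        self.P = [[(1 - k0) * P00, (1 - k0) * P01],
                  [P10 - k1 * P00, P11 - k1 * P01]]
        return self.x[0]
```

In the paper's pipeline the assumed real-world dimensions come from the vehicle-type classification itself, which is what couples classification accuracy to 3D localization quality.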
EEG-based driver fatigue detection using hybrid deep generic model.
Phyo Phyo San; Sai Ho Ling; Rifai Chai; Tran, Yvonne; Craig, Ashley; Hung Nguyen
2016-08-01
Classification of electroencephalography (EEG)-based applications is an important process in biomedical engineering. Driver fatigue is a major cause of traffic accidents worldwide and has been considered a significant problem in recent decades. In this paper, a hybrid deep generic model (DGM)-based support vector machine is proposed for accurate detection of driver fatigue. Traditionally, a probabilistic DGM with deep architecture is quite good at learning invariant features, but it is not always optimal for classification because its trainable parameters are in the middle layers. Alternatively, the support vector machine (SVM) itself is unable to learn complicated invariances, but produces good decision surfaces when applied to well-behaved features. Consolidating unsupervised high-level feature extraction by the DGM with SVM classification makes the integrated framework stronger, with the two components mutually enhancing feature extraction and classification. The experimental results showed that the proposed DGM-based driver fatigue monitoring system achieves a testing accuracy of 73.29%, with 91.10% sensitivity and 55.48% specificity. In short, the proposed hybrid DGM-based SVM is an effective method for the detection of driver fatigue from EEG.
Ogawa, Takaya; Iyoki, Kenta; Fukushima, Tomohiro; Kajikawa, Yuya
2017-12-14
The field of porous materials is expanding rapidly, and researchers must read a tremendous number of papers to obtain a "bird's eye" view of a given research area. However, it is difficult for researchers to obtain an objective, statistics-based overview that is independent of the subjective knowledge tied to individual research interests. Here, citation network analysis was applied for a comparative analysis of the research areas of zeolites and metal-organic frameworks as examples of porous materials. The statistical and objective data contributed to the analysis of: (1) the computational screening of research areas; (2) the classification of research stages within a given domain; (3) "well-cited" research areas; and (4) the research-area preferences of specific countries. Moreover, we propose a methodology to help researchers generate potential research ideas by reviewing related research areas, based on the bibliometric detection of ideas that are unfocused in one area but focused in another.
Toward a Safety Risk-Based Classification of Unmanned Aircraft
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2016-01-01
There is a trend of growing interest and demand for greater access of unmanned aircraft (UA) to the National Airspace System (NAS) as the ongoing development of UA technology has created the potential for significant economic benefits. However, the lack of a comprehensive and efficient UA regulatory framework has constrained the number and kinds of UA operations that can be performed. This report presents initial results of a study aimed at defining a safety-risk-based UA classification as a plausible basis for a regulatory framework for UA operating in the NAS. Much of the study up to this point has been at a conceptual high level. The report includes a survey of contextual topics, analysis of safety risk considerations, and initial recommendations for a risk-based approach to safe UA operations in the NAS. The next phase of the study will develop and leverage deeper clarity and insight into practical engineering and regulatory considerations for ensuring that UA operations have an acceptable level of safety.
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great volume of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning have been developed over the past decades, most are aimed at pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterise the uncertainty in CNN classification results and to partition the map into correctly and incorrectly classified regions. The correctly classified CNN regions were trusted and retained, whereas the misclassified areas were reclassified using a decision tree drawing on both the CNN and the MLP. The effectiveness of the proposed rough-set decision-tree-based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, which captures the complementarity between CNN and MLP through the rough-set-based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way towards fully automatic and effective VHR image classification.
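The core decision logic, trusting the CNN where its output is decisive and reconsidering elsewhere, can be sketched as a rough-set-style three-region rule. The thresholds and the tie-breaking rule below are illustrative assumptions, not the paper's actual decision tree.

```python
def fused_label(cnn_probs, mlp_probs, lower=0.4, upper=0.6):
    """Per-pixel fusion of CNN and MLP class-probability dicts.

    Positive region (CNN confident): keep the CNN label.
    Negative region (CNN clearly unsure): fall back to the MLP.
    Boundary region: defer to whichever model is more confident.
    """
    cls, p = max(cnn_probs.items(), key=lambda kv: kv[1])
    if p >= upper:                          # positive region
        return cls
    if p <= lower:                          # negative region
        return max(mlp_probs, key=mlp_probs.get)
    m_cls, m_p = max(mlp_probs.items(), key=lambda kv: kv[1])
    return cls if p >= m_p else m_cls       # boundary region
```

Applied per pixel over a classified map, this reproduces the shape of the paper's idea: retain CNN output where it is trusted and reclassify only the uncertain areas.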
Lu, Yingjie
2013-01-01
To facilitate patient involvement in online health communities and help patients obtain the informational and emotional support they need, this paper proposes an approach for automatically identifying the topics of health-related messages in an online health community, thereby helping patients reach the messages most relevant to their queries efficiently. A feature-based classification framework was presented for automatic topic identification. We first collected messages related to some predefined topics in an online health community. We then combined three different types of features (n-gram-based features, domain-specific features and sentiment features) to build four feature sets for health-related text representation. Finally, three different text classification techniques, C4.5, Naïve Bayes and SVM, were adopted to evaluate our topic classification model. By comparing different feature sets and different classification techniques, we found that n-gram-based features, domain-specific features and sentiment features were all effective in distinguishing different types of health-related topics. In addition, feature reduction based on information gain also improved topic classification performance. In terms of classification techniques, SVM significantly outperformed C4.5 and Naïve Bayes. The experimental results demonstrated that the proposed approach can identify the topics of online health-related messages efficiently.
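A minimal version of n-gram-based topic identification can be sketched with a dependency-free Naive Bayes classifier (the paper found SVM strongest, but Naive Bayes was also among its evaluated techniques and keeps this example self-contained). The topic names and messages are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def ngram_features(text):
    """Unigrams plus bigrams, a simple stand-in for the paper's feature sets."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

class NaiveBayesTopics:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, messages, topics):
        self.counts = defaultdict(Counter)   # per-topic feature counts
        self.doc_counts = Counter(topics)    # per-topic message counts
        self.vocab = set()
        for msg, t in zip(messages, topics):
            feats = ngram_features(msg)
            self.counts[t].update(feats)
            self.vocab.update(feats)
        return self

    def predict(self, message):
        feats = ngram_features(message)
        total_docs = sum(self.doc_counts.values())

        def log_score(t):
            n_t = sum(self.counts[t].values())
            score = math.log(self.doc_counts[t] / total_docs)
            for f in feats:
                score += math.log((self.counts[t][f] + 1) / (n_t + len(self.vocab)))
            return score

        return max(self.doc_counts, key=log_score)
```

Swapping in domain-specific and sentiment features would only change `ngram_features`; the classification machinery stays the same.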
Model-based object classification using unification grammars and abstract representations
NASA Astrophysics Data System (ADS)
Liburdy, Kathleen A.; Schalkoff, Robert J.
1993-04-01
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
Group-Based Active Learning of Classification Models.
Luo, Zhipeng; Hauskrecht, Milos
2017-05-01
Learning classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly, and finding ways to reduce the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to represent groups compactly. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning and existing group-based active learning methods.
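The key departure from instance-based active learning, scoring whole groups rather than single examples, can be sketched with a simple uncertainty criterion. This scoring rule is an assumption made for illustration; the paper's framework additionally uses conjunctive patterns to describe the groups it presents to the annotator.

```python
def most_informative_group(groups, predict_proba):
    """Return the index of the group the current model is least certain about.

    Certainty of one example is the distance of its predicted positive-class
    probability from 0.5; a group's score is the mean over its members. The
    group with the lowest mean certainty is the one whose label would be
    queried next.
    """
    def certainty(group):
        return sum(abs(predict_proba(x) - 0.5) for x in group) / len(group)

    return min(range(len(groups)), key=lambda i: certainty(groups[i]))
```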
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with ever higher spatial resolution, but automatically understanding the image contents remains a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so that the collection provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
Toward functional classification of neuronal types.
Sharpee, Tatyana O
2014-09-17
How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.
Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia
2018-06-12
In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. Firstly, we put forward an efficient feature selection method based on separability and dissimilarity to determine the feature selection order for each type of feature as the dimension of the selected feature subset increases. Secondly, the K-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, for feature fusion, we propose a classification-dominance feature fusion strategy that builds on an effective base feature. Experimental results on two datasets show that the recognition rates on Database I and Database II reach 97.5% and 80.11%, respectively, when k = 1 for the KNN classifier and the distance metric is the correlation distance (COR), which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method proposed in this paper can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features that describe different characteristics of the sensor signals, enhancing the discrimination ability of gas sensors and, to a certain extent, suppressing the drift effect.
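The reported best configuration, a KNN classifier with k = 1 under the correlation (COR) distance, is easy to sketch. The gas labels and response vectors below are invented for illustration.

```python
import math
from collections import Counter

def correlation_distance(a, b):
    """COR distance: 1 minus the Pearson correlation of two response vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - ma) ** 2 for x in a))
           * math.sqrt(sum((y - mb) ** 2 for y in b)))
    return 1.0 - num / den

def knn_predict(train_X, train_y, x, k=1):
    """Majority vote among the k nearest training vectors under COR distance."""
    neighbours = sorted(zip(train_X, train_y),
                        key=lambda ty: correlation_distance(ty[0], x))
    votes = Counter(y for _, y in neighbours[:k])
    return votes.most_common(1)[0][0]
```

Because COR depends only on the shape of the response curve, not its scale, it is a natural choice for sensor signals whose amplitude drifts over time.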
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has assessed how much each peak feature contributes to a good and generalizable model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework seeks the combination of the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
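A minimal standard PSO, the first of the two variants used above, can be sketched as a generic minimizer; in the paper it searches over feature subsets and classifier parameters, whereas here it is demonstrated on a simple quadratic objective. The inertia and acceleration coefficients are common textbook values, not the paper's settings.

```python
import random

def pso_minimize(objective, dim, n_particles=20, iters=60, seed=0):
    """Standard (synchronous) particle swarm optimization, minimizing objective."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For feature selection, each dimension of a particle would be thresholded into a binary include/exclude decision and the objective would be a cross-validated error rate.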
NASA Astrophysics Data System (ADS)
Pullanagari, Reddy; Kereszturi, Gábor; Yule, Ian J.; Ghamisi, Pedram
2017-04-01
Accurate and spatially detailed mapping of complex urban environments is essential for land managers. Classifying high spectral and spatial resolution hyperspectral images is a challenging task because of their data abundance and computational complexity. Approaches that combine spectral and spatial information in a single classification framework have attracted special attention because of their potential to improve classification accuracy. We extracted multiple features from the spectral and spatial domains of hyperspectral images and evaluated them with two supervised classification algorithms: support vector machines (SVM) and an artificial neural network. The spatial features considered are produced by a gray-level co-occurrence matrix and extended multiattribute profiles. All of these features were stacked, and the most informative features were selected using a genetic-algorithm-based SVM. After selecting the most informative features, the classification model was integrated with a segmentation map derived using a hidden Markov random field. We tested the proposed method on a real application with a hyperspectral image acquired by AisaFENIX, as well as on widely used hyperspectral images. From the results, it can be concluded that the proposed framework significantly improves the results across different spectral and spatial resolutions and different instrumentation.
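One of the spatial features mentioned, the gray-level co-occurrence matrix (GLCM), is compact enough to sketch directly, together with the Haralick contrast statistic commonly derived from it. The single pixel offset and the tiny test images are illustrative choices.

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1   # count the grey-level pair
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_contrast(m):
    """Haralick contrast: high for textured regions, zero for flat ones."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))
```

In a classification pipeline, statistics like contrast (and homogeneity, entropy, etc.) computed over a sliding window become the per-pixel spatial features that are stacked with the spectral bands.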
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in EEG-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive modelling and log-variance; Kullback-Leibler-divergence-based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
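The Kullback-Leibler selection stage can be sketched as scoring each feature by the symmetric KL divergence between its class-conditional histograms; higher scores indicate more discriminative features. The binning scheme and add-one smoothing below are assumptions for illustration, not the paper's exact estimator.

```python
import math

def smoothed_hist(values, bins, lo, hi):
    """Histogram as a probability vector; add-one smoothing avoids log(0)."""
    counts = [1.0] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def symmetric_kl(p, q):
    """KL(p||q) + KL(q||p), a symmetric divergence between two distributions."""
    return sum(pi * math.log(pi / qi) + qi * math.log(qi / pi)
               for pi, qi in zip(p, q))

def feature_scores(X_a, X_b, bins=5):
    """Score each feature column by the divergence between its class-conditional
    distributions; features would then be ranked and the top subset kept."""
    scores = []
    for f in range(len(X_a[0])):
        va = [x[f] for x in X_a]
        vb = [x[f] for x in X_b]
        lo, hi = min(va + vb), max(va + vb)
        scores.append(symmetric_kl(smoothed_hist(va, bins, lo, hi),
                                   smoothed_hist(vb, bins, lo, hi)))
    return scores
```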
TNM-O: ontology support for staging of malignant tumours.
Boeker, Martin; França, Fábio; Bronsert, Peter; Schulz, Stefan
2016-11-14
Objectives of this work are to (1) present an ontological framework for the TNM classification system, (2) exemplify this framework by an ontology for colon and rectum tumours, and (3) evaluate this ontology by assigning TNM classes to real world pathology data. The TNM ontology uses the Foundational Model of Anatomy for anatomical entities and BioTopLite 2 as a domain top-level ontology. General rules for the TNM classification system and the specific TNM classification for colorectal tumours were axiomatised in description logic. Case-based information was collected from tumour documentation practice in the Comprehensive Cancer Centre of a large university hospital. Based on the ontology, a module was developed that classifies pathology data. TNM was represented as an information artefact, which consists of single representational units. Corresponding to every representational unit, tumours and tumour aggregates were defined. Tumour aggregates consist of the primary tumour and, if existing, of infiltrated regional lymph nodes and distant metastases. TNM codes depend on the location and certain qualities of the primary tumour (T), the infiltrated regional lymph nodes (N) and the existence of distant metastases (M). Tumour data from clinical and pathological documentation were successfully classified with the ontology. A first version of the TNM Ontology represents the TNM system for the description of the anatomical extent of malignant tumours. The present work demonstrates its representational power and completeness as well as its applicability for classification of instance data.
Björck-Åkesson, Eva; Wilder, Jenny; Granlund, Mats; Pless, Mia; Simeonsson, Rune; Adolfsson, Margareta; Almqvist, Lena; Augustine, Lilly; Klang, Nina; Lillvist, Anne
2010-01-01
Early childhood intervention and habilitation services for children with disabilities operate on an interdisciplinary basis. This requires a common language between professionals and a shared framework for intervention goals and implementation. The International Classification of Functioning, Disability and Health (ICF) and its version for children and youth (ICF-CY) may serve as this common framework and language. This overview of studies implemented by our research group is based on three research questions: Does the ICF-CY conceptual model have valid content, and is it logically coherent when investigated empirically? Is the ICF-CY classification useful for documenting child characteristics in services? What difficulties and benefits are related to using the ICF-CY model as a basis for intervention when it is implemented in services? A series of studies undertaken by the CHILD researchers is analysed. The analysis is based on data sets from published studies or master's theses. The results show that the ICF-CY has useful content and is logically coherent at the model level. Professionals find it useful for documenting children's body functions and activities. Guidelines for separating activity and participation are needed. The ICF-CY is a complex classification, and implementing it in services is a long-term project.
Sivan, Manoj; Gallagher, Justin; Holt, Ray; Weightman, Andrew; O'Connor, Rory; Levesley, Martin
2016-01-01
The purpose of this study was to evaluate the International Classification of Functioning, Disability and Health (ICF) as a framework to ensure that key aspects of user feedback are identified in the design and testing stages of development of a home-based upper limb rehabilitation system. Seventeen stroke survivors with residual upper limb weakness, and seven healthcare professionals with expertise in stroke rehabilitation, were enrolled in the user-centered design process. Through semi-structured interviews, they provided feedback on the hardware, software and impact of a home-based rehabilitation device to facilitate self-managed arm exercise. Members of the multidisciplinary clinical and engineering research team, based on previous experience and existing literature in user-centred design, developed the topic list for the interviews. Meaningful concepts were extracted from participants' interviews based on existing ICF linking rules and matched to categories within the ICF Comprehensive Core Set for stroke. Most of the interview concepts (except personal factors) matched the existing ICF Comprehensive Core Set categories. Personal factors that emerged from interviews e.g. gender, age, interest, compliance, motivation, choice and convenience that might determine device usability are yet to be categorised within the ICF framework and hence could not be matched to a specific Core Set category.
Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng
2016-01-01
Accurate segmentation and classification of the different anatomical structures of teeth from medical images plays an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method, improved by fully utilizing three-dimensional (3D) information, and to classify the tooth by employing an unsupervised-learning Pulse Coupled Neural Network (PCNN) model. To evaluate the proposed method, experiments were conducted on different datasets of mandibular molars; the experimental results show that our method achieves better accuracy and robustness compared to four other state-of-the-art clustering methods.
Palmer, Michael J; Mercieca-Bebber, Rebecca; King, Madeleine; Calvert, Melanie; Richardson, Harriet; Brundage, Michael
2018-02-01
Missing patient-reported outcome data can lead to biased results, to loss of power to detect between-treatment differences, and to research waste. Awareness of factors may help researchers reduce missing patient-reported outcome data through study design and trial processes. The aim was to construct a Classification Framework of factors associated with missing patient-reported outcome data in the context of comparative studies. The first step in this process was informed by a systematic review. Two databases (MEDLINE and CINAHL) were searched from inception to March 2015 for English articles. Inclusion criteria were (a) relevant to patient-reported outcomes, (b) discussed missing data or compliance in prospective medical studies, and (c) examined predictors or causes of missing data, including reasons identified in actual trial datasets and reported on cover sheets. Two reviewers independently screened titles and abstracts. Discrepancies were discussed with the research team prior to finalizing the list of eligible papers. In completing the systematic review, four particular challenges to synthesizing the extracted information were identified. To address these challenges, operational principles were established by consensus to guide the development of the Classification Framework. A total of 6027 records were screened. In all, 100 papers were eligible and included in the review. Of these, 57% focused on cancer, 23% did not specify disease, and 20% reported for patients with a variety of non-cancer conditions. In total, 40% of the papers offered a descriptive analysis of possible factors associated with missing data, but some papers used other methods. In total, 663 excerpts of text (units), each describing a factor associated with missing patient-reported outcome data, were extracted verbatim. Redundant units were identified and sequestered. 
Similar units were grouped, and an iterative process of consensus among the investigators was used to reduce these units to a list of factors that met the guiding principles. The list was organized on a framework, using an iterative consensus-based process. The resultant Classification Framework is a summary of the factors associated with missing patient-reported outcome data described in the literature. It consists of 5 components (instrument, participant, centre, staff, and study) and 46 categories, each with one or more sub-categories or examples. A systematic review of the literature revealed 46 unique categories of factors associated with missing patient-reported outcome data, organized into 5 main component groups. The Classification Framework may assist researchers to improve the design of new randomized clinical trials and to implement procedures to reduce missing patient-reported outcome data. Further research using the Classification Framework to inform quantitative analyses of missing patient-reported outcome data in existing clinical trials and to inform qualitative inquiry of research staff is planned.
Lesher, Danielle Ann-Marie; Mulcahey, M J; Hershey, Peter; Stanton, Donna Breger; Tiedgen, Andrea C
We sought to identify outcome instruments used in rehabilitation of the hand and upper extremity; to determine their alignment with the constructs of the International Classification of Functioning, Disability and Health (ICF) and the Occupational Therapy Practice Framework: Domain and Process; and to report gaps in the constructs measured by outcome instruments as a basis for future research. We searched CINAHL, MEDLINE, OTseeker, and the Cochrane Central Register of Controlled Trials using scoping review methodology and evaluated outcome instruments for concordance with the ICF and the Framework. We identified 18 outcome instruments for analysis. The findings pertain to occupational therapists' focus on body functions, body structures, client factors, and activities of daily living; a gap in practice patterns in use of instruments; and overestimation of the degree to which instruments used are occupationally based. Occupational therapy practitioners should use outcome instruments that embody conceptual frameworks for classifying function and activity. Copyright © 2017 by the American Occupational Therapy Association, Inc.
NASA Technical Reports Server (NTRS)
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications
NASA Astrophysics Data System (ADS)
Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.
2018-04-01
Data on Indonesian research publications in the domain of biomedicine were collected and text mined for the purpose of a scientometric study. The goal is to build a predictive model that classifies research publications according to their potential for downstreaming. The model is based on drug development processes adapted from the literature. We describe the effort to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine, and then investigate the problems associated with building such a corpus and validating the model. Based on our experience, a framework is proposed for managing a scientometric study based on text mining. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. The validity of the model rests mainly on iterative, close interaction with the domain experts, from identifying the issues and building a conceptual model through to labelling, validation, and interpretation of results.
A Confidence Paradigm for Classification Systems
2008-09-01
methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of...theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or...operating point. An algorithm is developed that minimizes a "confidence" measure called Binned Error in the Posterior (BEP). Then, we prove that training a
Korczowski, L; Congedo, M; Jutten, C
2015-08-01
The classification of electroencephalographic (EEG) data recorded from multiple users simultaneously is an important challenge in the field of Brain-Computer Interface (BCI). In this paper we compare different approaches for the classification of single-trial Event-Related Potentials (ERP) from two subjects playing a collaborative BCI game. The minimum distance to mean (MDM) classifier in a Riemannian framework is extended to exploit the diversity of the inter-subject spatio-temporal statistics (MDM-hyper) or to merge multiple classifiers (MDM-multi). We show that both of these classifiers significantly outperform the mean performance of the two users, as well as analogous classifiers based on step-wise linear discriminant analysis. More importantly, MDM-multi outperforms the best player within the pair.
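The minimum distance to mean idea is straightforward to sketch: compute one mean per class from training data, then assign a new trial to the class with the nearest mean. For brevity the toy implementation below uses Euclidean distance on plain feature vectors; the classifiers above operate on covariance matrices under an affine-invariant Riemannian metric, and the data here are invented.

```python
# Minimal sketch of a minimum-distance-to-mean (MDM) classifier, using
# Euclidean distance on feature vectors for simplicity. The paper's MDM
# works on covariance matrices with a Riemannian metric.
import math

def class_means(X, y):
    sums, counts = {}, {}
    for x, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def mdm_predict(x, means):
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(means, key=lambda lbl: dist(x, means[lbl]))

# Invented 2-D features for "target" vs "non-target" ERP trials.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["non-target", "non-target", "target", "target"]
means = class_means(X, y)
print(mdm_predict([0.95, 1.05], means))  # nearest to the "target" mean
```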
Regidor, E
2001-01-01
Two of the most important theory-based social class classifications are the neo-Weberian schema of Goldthorpe and the neo-Marxist schema of Wright. The social class classification proposed by the SES Working Group used the Goldthorpe schema as a reference, for empirical and mainly pragmatic reasons. This article discusses those reasons, along with the problems of validating measurements of social class and of using social class as an independent variable.
NASA Astrophysics Data System (ADS)
Hussain, M.; Chen, D.
2014-11-01
Buildings, the basic unit of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters which can provide an insight to understand the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties such as area, perimeter, and layout) and spatial relationships, derived from a large topographic and address-based GIS database. Machine learning algorithms along with exploratory spatial analysis techniques are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.
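A rule-based classifier over building form can be sketched as below. The thresholds, feature names, and type labels are hypothetical; the study learns its rules from MasterMap data with machine learning and exploratory spatial analysis.

```python
# Illustrative form-based classification rules for building footprints.
# Thresholds and type names are invented for the sketch, not taken from
# the paper's learned rules.

def classify_building(area_m2, perimeter_m, shares_walls):
    # Compactness: 1.0 for a circle, lower for elongated shapes.
    compactness = 4 * 3.141592653589793 * area_m2 / perimeter_m ** 2
    if area_m2 > 2000:
        return "industrial/commercial"
    if shares_walls and area_m2 < 120:
        return "terraced residential"
    if compactness > 0.6:
        return "detached residential"
    return "other"

print(classify_building(area_m2=90, perimeter_m=40, shares_walls=True))
```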
A Framework for Inferring Taxonomic Class of Asteroids.
NASA Technical Reports Server (NTRS)
Dotson, J. L.; Mathias, D. L.
2017-01-01
Introduction: Taxonomic classification of asteroids based on their visible / near-infrared spectra or multi-band photometry has proven to be a useful tool for inferring other properties of asteroids. Meteorite analogs have been identified for several taxonomic classes, permitting detailed inference about asteroid composition. Trends have been identified between taxonomy and measured asteroid density. Thanks to NEOWISE (Near-Earth-Object Wide-field Infrared Survey Explorer) and Spitzer (the Spitzer Space Telescope), approximately twice as many asteroids have measured albedos as have taxonomic classifications. (If one considers only spectroscopically determined classifications, the ratio is greater than 40.) We present a Bayesian framework that provides probabilistic estimates of the taxonomic class of an asteroid based on its albedo. Although probabilistic estimates of taxonomic classes are not a replacement for spectroscopic or photometric determinations, they can be a useful tool for identifying objects for further study or for asteroid threat assessment models. Inputs and Framework: The framework relies upon two inputs: the expected fraction of each taxonomic class in the population and the albedo distribution of each class. Luckily, numerous authors have addressed both of these questions. For example, the taxonomic distribution by number, surface area, and mass of the main belt has been estimated, and a diameter-limited estimate of the fractional abundances of the near-Earth asteroid population has been made. Similarly, the albedo distributions for taxonomic classes have been estimated for the combined main belt and NEA (Near Earth Asteroid) populations in different taxonomic systems, and for the NEA population specifically. The framework utilizes a Bayesian inference appropriate for categorical data.
The population fractions provide the prior, while the albedo distributions allow calculation of the likelihood that an albedo measurement is consistent with a given taxonomic class. Together, these inputs allow calculation of the probability that an asteroid with a specified albedo belongs to any given taxonomic class.
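The update described above is a direct application of Bayes' rule over a categorical variable. A minimal sketch, with invented prior fractions and Gaussian albedo distributions (the published inputs differ):

```python
# Bayes' rule over taxonomic classes: posterior(class) is proportional to
# prior(class) * likelihood(albedo | class). Priors and albedo distribution
# parameters below are illustrative, not the published values.
import math

def gaussian(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

priors = {"C": 0.55, "S": 0.35, "X": 0.10}          # population fractions
albedo = {"C": (0.06, 0.02), "S": (0.20, 0.05), "X": (0.12, 0.06)}  # (mean, sd)

def class_probabilities(measured_albedo):
    unnorm = {c: priors[c] * gaussian(measured_albedo, *albedo[c]) for c in priors}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

probs = class_probabilities(0.21)
print(max(probs, key=probs.get))  # a bright asteroid is most likely S-type here
```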
(GTG)5-PCR reference framework for acetic acid bacteria.
Papalexandratou, Zoi; Cleenwerck, Ilse; De Vos, Paul; De Vuyst, Luc
2009-11-01
One hundred and fifty-eight strains of acetic acid bacteria (AAB) were subjected to (GTG)5-PCR fingerprinting to construct a reference framework for their rapid classification and identification. Most of them clustered according to their respective taxonomic designation; others had to be reclassified based on polyphasic data. This study shows the usefulness of the method to determine the taxonomic and phylogenetic relationships among AAB and to study the AAB diversity of complex ecosystems.
Audio-based queries for video retrieval over Java enabled mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef
2006-02-01
In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. This framework is implemented in a client-server architecture where the client application is developed in Java to be platform independent, whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device where it runs, such as screen size and commands. The entire framework is designed to take advantage of the high-level segmentation and classification of audio content to improve the speed and accuracy of audio-based media retrieval. Therefore, the primary objective of this framework is to provide an adaptive basis for performing efficient video retrieval operations based on the audio content and types (i.e. speech, music, fuzzy and silence). Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.
Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.
2015-01-01
Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202
A framework for classification of prokaryotic protein kinases.
Tyagi, Nidhi; Anamika, Krishanpal; Srinivasan, Narayanaswamy
2010-05-26
The overwhelming majority of the Serine/Threonine protein kinases identified by gleaning archaeal and eubacterial genomes could not be classified into any of the well-known Hanks and Hunter subfamilies of protein kinases. This is owing to the Hanks and Hunter classification scheme having been developed for eukaryotic protein kinases, which are highly divergent from their prokaryotic homologues. A large dataset of prokaryotic Serine/Threonine protein kinases recognized from prokaryotic genomes has been used to develop a classification framework for prokaryotic Ser/Thr protein kinases. We have used traditional sequence alignment and phylogenetic approaches to cluster the prokaryotic kinases, which represent 72 subfamilies with at least 4 members each. Such a clustering enables classification of prokaryotic Ser/Thr kinases, and it can be used as a framework to classify newly identified prokaryotic Ser/Thr kinases. After a series of searches in a comprehensive sequence database, we recognized that 38 subfamilies of prokaryotic protein kinases are associated with a specific taxonomic level. For example, 4, 6, and 3 subfamilies are currently specific to the phyla Proteobacteria, Cyanobacteria, and Actinobacteria, respectively. Similarly, subfamilies specific to an order, sub-order, class, family, and genus have also been identified. In addition to these, we also identify organism-diverse subfamilies, whose members come from organisms at different taxonomic levels, such as archaea, bacteria, eukaryotes, and viruses. Interestingly, the occurrence of several taxonomic-level-specific subfamilies of prokaryotic kinases contrasts with the classification of eukaryotic protein kinases, in which most of the popular subfamilies occur diversely across several eukaryotes.
Many prokaryotic Ser/Thr kinases exhibit a wide variety of modular organization which indicates a degree of complexity and protein-protein interactions in the signaling pathways in these microbes.
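Using subfamily clusters as a classification framework can be sketched as assigning a new sequence to the subfamily whose members it resembles most. The naive position-wise identity score, the subfamily names, and the sequences below are purely illustrative; the actual classification relies on sequence alignments and phylogenetic analysis.

```python
# Toy sketch of cluster-based classification: assign a new kinase sequence
# to the subfamily with the highest average identity. Sequences and
# subfamily names are invented; real work uses alignments and phylogeny.

def identity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def assign_subfamily(seq, subfamilies):
    def avg_identity(members):
        return sum(identity(seq, m) for m in members) / len(members)
    return max(subfamilies, key=lambda name: avg_identity(subfamilies[name]))

subfamilies = {
    "subfamily-A": ["MKLSTGHARD", "MKLSTGHARE"],
    "subfamily-B": ["WWPQNNYVCF", "WWPQNNYVCI"],
}
print(assign_subfamily("MKLSTGHARD", subfamilies))
```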
NASA Astrophysics Data System (ADS)
Dhiman, R.; Kalbar, P.; Inamdar, A. B.
2017-12-01
Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations, and violations at both private and government levels. This work is an attempt to improve the objectivity of existing classification methods so as to synergize ecological systems and socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-Criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, in which utility functions transform the coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. Furthermore, these membership values for coastal features are applied in different weighting schemes to derive a Coastal Area Index (CAI), which classifies coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone, and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study for demonstration of the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach is carried out to validate the sensitivity of the CAI under multiple specific scenarios. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state level agencies to classify urban coastal areas according to the regional requirements for coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas, where the most effective coastal management options can be proposed.
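The CAI computation can be sketched as a weighted aggregation of membership values followed by zone cut-offs. The feature names, weights, and thresholds below are illustrative, not the values used in the study.

```python
# Sketch of a Coastal Area Index (CAI): weighted sum of feature membership
# values mapped to a sensitivity zone. All numbers here are invented.

def coastal_area_index(memberships, weights):
    total_w = sum(weights.values())
    return sum(memberships[f] * weights[f] for f in weights) / total_w

def classify_zone(cai):
    if cai >= 0.75:
        return "No Development Zone"
    if cai >= 0.50:
        return "Highly Sensitive Zone"
    if cai >= 0.25:
        return "Moderately Sensitive Zone"
    return "Low Sensitive Zone"

memberships = {"mangrove_cover": 0.9, "erosion_risk": 0.9, "built_up": 0.4}
weights = {"mangrove_cover": 0.5, "erosion_risk": 0.3, "built_up": 0.2}
cai = coastal_area_index(memberships, weights)
print(classify_zone(cai))  # CAI of 0.80 falls in the No Development Zone
```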
Automated robot-assisted surgical skill evaluation: Predictive analytics approach.
Fard, Mahtab J; Ameri, Sattar; Darin Ellis, R; Chinnam, Ratna B; Pandya, Abhilash K; Klein, Michael D
2018-02-01
Surgical skill assessment has predominantly been a subjective task. Recently, technological advances such as robot-assisted surgery have created great opportunities for objective surgical evaluation. In this paper, we introduce a predictive framework for objective skill assessment based on movement trajectory data. Our aim is to build a classification framework to automatically evaluate the performance of surgeons with different levels of expertise. Eight global movement features are extracted from movement trajectory data captured by a da Vinci robot for surgeons with two levels of expertise - novice and expert. Three classification methods - k-nearest neighbours, logistic regression and support vector machines - are applied. The result shows that the proposed framework can classify surgeons' expertise as novice or expert with an accuracy of 82.3% for knot tying and 89.9% for a suturing task. This study demonstrates and evaluates the ability of machine learning methods to automatically classify expert and novice surgeons using global movement features. Copyright © 2017 John Wiley & Sons, Ltd.
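Of the three classifiers compared, k-nearest neighbours is the simplest to sketch: an unknown trial is labeled by a vote among the k closest training trials in feature space. The movement feature values and labels below are invented for illustration.

```python
# Minimal k-nearest-neighbours classifier over global movement features,
# one of the three methods compared in the study. Feature values and
# labels are made up for the sketch.
import math
from collections import Counter

def knn_predict(x, train, k=3):
    # Sort training items by distance to x, then vote among the k nearest.
    nearest = sorted(train, key=lambda item: math.dist(x, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# (path_length, motion_smoothness) -> skill level (invented values)
train = [
    ((12.0, 0.90), "expert"), ((11.5, 0.85), "expert"), ((12.3, 0.95), "expert"),
    ((20.0, 0.40), "novice"), ((21.5, 0.35), "novice"), ((19.0, 0.45), "novice"),
]
print(knn_predict((12.1, 0.88), train))
```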
ERIC Educational Resources Information Center
Inbar, Dan E.
1980-01-01
Presents an analytical framework based on a threefold classification--unequivocal failure, "satisficing," and unequivocal success--and four basic role climates--apathetic, frustrating, tense, and tranquil--that is applied to the elementary school principalship. (Author/WD)
Neonatal physical therapy. Part II: Practice frameworks and evidence-based practice guidelines.
Sweeney, Jane K; Heriza, Carolyn B; Blanchard, Yvette; Dusing, Stacey C
2010-01-01
(1) To outline frameworks for neonatal physical therapy based on 3 theoretical models, (2) to describe emerging literature supporting neonatal physical therapy practice, and (3) to identify evidence-based practice recommendations. Three models are presented as a framework for neonatal practice: (1) dynamic systems theory including synactive theory and the theory of neuronal group selection, (2) the International Classification of Functioning, Disability and Health, and (3) family-centered care. Literature is summarized to support neonatal physical therapists in the areas of examination, developmental care, intervention, and parent education. Practice recommendations are offered with levels of evidence identified. Neonatal physical therapy practice has a theoretical and evidence-based structure, and evidence is emerging for selected clinical procedures. Continued research to expand the science of neonatal physical therapy is critical to elevate the evidence and support practice recommendations.
A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
2016-04-19
Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposes a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word is considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification is implemented based on the recognition of the five components. The proposed SLR framework consists of two major parts. The first part obtains the component-based form of sign gestures and establishes the code table of the target sign gesture set using data from a reference subject. The second part, designed for new users, trains component classifiers using a training set suggested by the reference subject and classifies unknown gestures with a code matching method. Five subjects participated in this study, and recognition experiments with training sets of different sizes were conducted on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words, respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50-60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
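The code matching step can be sketched as follows: each sign word is a tuple of five component labels, and an unknown gesture is assigned to the word whose code it matches on the most components. The code table entries below are invented for illustration.

```python
# Sketch of component-based sign classification by code matching. A sign
# word is a code over five components (hand shape, axis, orientation,
# rotation, trajectory); the code table entries here are invented.

def match_sign(components, code_table):
    def score(code):
        return sum(1 for a, b in zip(components, code) if a == b)
    return max(code_table, key=lambda word: score(code_table[word]))

code_table = {
    "thanks": ("flat", "x", "up", "none", "arc"),
    "good": ("fist", "y", "up", "none", "line"),
    "help": ("flat", "y", "down", "cw", "line"),
}
observed = ("flat", "x", "up", "none", "line")
print(match_sign(observed, code_table))  # agrees with "thanks" on 4 of 5
```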
A genetic algorithm-based framework for wavelength selection on sample categorization.
Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F
2017-08-01
In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through a genetic algorithm (GA), which identified a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to insert irrelevant wavelengths into the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
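The GA refinement step can be sketched as a search over bit masks of candidate wavelengths, scored by leave-one-out 1-NN accuracy (standing in for the paper's KNN classifier). The data, GA settings, and wavelength count below are all illustrative.

```python
# Toy genetic algorithm for wavelength selection: a chromosome is a bit
# mask over candidate wavelengths, fitness is leave-one-out 1-NN accuracy
# on the selected wavelengths. Data and GA settings are invented.
import random

def loo_accuracy(X, y, mask):
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][c] - X[j][c]) ** 2 for c in cols)
            if d < best_d:
                best, best_d = j, d
        correct += y[best] == y[i]
    return correct / len(X)

def ga_select(X, y, n_wavelengths, pop=20, gens=15, seed=1):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_wavelengths)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: loo_accuracy(X, y, m), reverse=True)
        parents = scored[: pop // 2]          # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_wavelengths)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # occasional mutation
                child[rng.randrange(n_wavelengths)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: loo_accuracy(X, y, m))

# Two classes separable on wavelengths 0 and 1; the other four are noise.
rng = random.Random(0)
X = [[c * 2.0 + rng.random() * 0.1 for _ in range(2)] + [rng.random() for _ in range(4)]
     for c in (0, 1) for _ in range(6)]
y = [c for c in (0, 1) for _ in range(6)]
best = ga_select(X, y, n_wavelengths=6)
print(best, loo_accuracy(X, y, best))
```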
Iris Image Classification Based on Hierarchical Visual Codebook.
Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang
2014-06-01
Iris recognition as a reliable method for personal identification has been well studied with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection.
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
VO2 estimation using 6-axis motion sensor with sports activity classification.
Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke
2016-08-01
In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports with diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate energy expenditure; however, its accuracy depends on the intensities of the various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. We prepare different coefficients of a linear regression model for different types of activities, determined from training data obtained in experiments. The best-suited model is used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimal model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework using a set of experimental data consisting of VO2 and motion data of people exercising over a wide range of intensities, measured by a VO2 meter and a motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
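The two-stage framework (classify the activity first, then apply that activity's regression model) can be sketched as below. The centroids and regression coefficients are invented; the study trains its models from measured VO2 and motion data.

```python
# Sketch of two-stage VO2 estimation: nearest-centroid activity
# classification followed by a per-activity linear regression model.
# Centroids and coefficients are invented for illustration.

def classify_activity(features, centroids):
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda name: d2(features, centroids[name]))

def estimate_vo2(features, centroids, models):
    activity = classify_activity(features, centroids)
    intercept, coeffs = models[activity]
    return activity, intercept + sum(c * f for c, f in zip(coeffs, features))

# Motion features: (mean acceleration magnitude, mean gyroscope magnitude)
centroids = {"walking": (1.1, 0.5), "running": (2.8, 1.6)}
models = {
    "walking": (3.5, (8.0, 2.0)),   # VO2 = 3.5 + 8.0*acc + 2.0*gyro
    "running": (5.0, (12.0, 3.0)),
}
activity, vo2 = estimate_vo2((2.7, 1.5), centroids, models)
print(activity, round(vo2, 1))
```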
Engaging Elderly People in Telemedicine Through Gamification
Tabak, Monique; Dekker - van Weering, Marit; Vollenbroek-Hutten, Miriam
2015-01-01
Background: Telemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification. It is the application of game elements to nongame fields to motivate and increase user activity and retention. Objective: This research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence. Methods: We performed a broad Internet search using scientific and nonscientific search engines and included information that described either of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content. Results: Our search showed two main approaches concerning frameworks for gamification: from business practices, which mostly aim for more revenue, emerge an applied approach, while academia frameworks are developed incorporating theories on motivation while often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and a significant gap in knowledge on the effectiveness of a gamified application in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies. 
The overview we created indicates great connectivity between these taxonomies. Conclusions: Gamification frameworks have been developed from different backgrounds—business and academia—but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized existence of a relation between preference for game content and personality. PMID:26685287
Genetics-Based Classification of Filoviruses Calls for Expanded Sampling of Genomic Sequences
Lauber, Chris; Gorbalenya, Alexander E.
2012-01-01
We have recently developed a computational approach for hierarchical, genome-based classification of viruses of a family (DEmARC). In DEmARC, virus clusters are delimited objectively by devising a universal family-wide threshold on intra-cluster genetic divergence of viruses that is specific for each level of the classification. Here, we apply DEmARC to a set of 56 filoviruses with complete genome sequences and compare the resulting classification to the ICTV taxonomy of the family Filoviridae. We find in total six candidate taxon levels two of which correspond to the species and genus ranks of the family. At these two levels, the six filovirus species and two genera officially recognized by ICTV, as well as a seventh tentative species for Lloviu virus and prototyping a third genus, are reproduced. DEmARC lends the highest possible support for these two as well as the four other levels, implying that the actual number of valid taxon levels remains uncertain and the choice of levels for filovirus species and genera is arbitrary. Based on our experience with other virus families, we conclude that the current sampling of filovirus genomic sequences needs to be considerably expanded in order to resolve these uncertainties in the framework of genetics-based classification. PMID:23170166
A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification
Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng
2012-01-01
Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system was developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as the skin texture feature for classification. Experiments on personal identification and gender classification are performed using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512
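The texton-histogram representation can be sketched as follows. Note that this simplification assigns each pixel to its single nearest texton (plain vector quantization) rather than the paper's l1-minimization sparse coding, and all arrays are randomly generated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: each pixel already has a filter-bank response vector.
textons = rng.normal(size=(8, 16))       # 8 "learned" textons, 16-dim responses
responses = rng.normal(size=(1000, 16))  # responses for 1000 pixels of one image

# Assign every pixel to its closest texton (stand-in for sparse coding).
dists = np.linalg.norm(responses[:, None, :] - textons[None, :, :], axis=2)
assignments = dists.argmin(axis=1)

# Texture descriptor: normalized histogram of texton assignments, one bin
# per dictionary atom, used as the feature vector for classification.
hist = np.bincount(assignments, minlength=len(textons)).astype(float)
hist /= hist.sum()
print(hist.shape)  # → (8,)
```

In the paper the histogram is built from sparse representation coefficients over the learned dictionary, but the pipeline shape (responses, dictionary, per-image histogram, classifier) is the same.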
A hierarchical framework of aquatic ecological units in North America (Nearctic Zone).
James R. Maxwell; Clayton J. Edwards; Mark E. Jensen; Steven J. Paustian; Harry Parrott; Donley M. Hill
1995-01-01
Proposes a framework for classifying and mapping aquatic systems at various scales using ecologically significant physical and biological criteria. Classification and mapping concepts follow tenets of hierarchical theory, pattern recognition, and driving variables. Criteria are provided for the hierarchical classification and mapping of aquatic ecological units of...
A Classification Framework for Exploring Technology-Enabled Practice--Frame TEP
ERIC Educational Resources Information Center
Prestridge, Sarah; de Aldama, Carlos
2016-01-01
This article theorizes the construction of a classification framework to explore teachers' beliefs and pedagogical practices for the use of digital technologies in the classroom. There are currently many individual schemas and models that represent both developmental and divergent concepts associated with technology-enabled practice. This article…
Pianta, R C; Longmaid, K; Ferguson, J E
1999-06-01
Investigated an attachment-based theoretical framework and classification system, introduced by Kaplan and Main (1986), for interpreting children's family drawings. This study concentrated on the psychometric properties of the system and the relation between drawings classified using this system and teacher ratings of classroom social-emotional and behavioral functioning, controlling for child age, ethnic status, intelligence, and fine motor skills. This nonclinical sample consisted of 200 kindergarten children of diverse racial and socioeconomic status (SES). Limited support for reliability of this classification system was obtained. Kappas for overall classifications of drawings (e.g., secure) exceeded .80 and mean kappa for discrete drawing features (e.g., figures with smiles) was .82. Coders' endorsement of the presence of certain discrete drawing features predicted their overall classification at 82.5% accuracy. Drawing classification was related to teacher ratings of classroom functioning independent of child age, sex, race, SES, intelligence, and fine motor skills (with p values for the multivariate effects ranging from .043 to .001). Results are discussed in terms of the psychometric properties of this system for classifying children's representations of family and the limitations of family drawing techniques for young children.
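The inter-rater agreement statistics reported above (kappas exceeding .80) are Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with invented coder labels:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical labels:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2           # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders classifying 10 drawings as secure (S) or insecure (I).
coder_a = ["S", "S", "S", "I", "I", "S", "S", "I", "S", "I"]
coder_b = ["S", "S", "S", "I", "I", "S", "I", "I", "S", "I"]
print(cohens_kappa(coder_a, coder_b))  # → 0.8
```

Here raw agreement is 0.9, but since chance agreement is 0.5 the chance-corrected kappa drops to 0.8, illustrating why kappa rather than percent agreement is the conventional reliability statistic for classification systems like this one.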
2017-09-01
Furthermore, we identify unique characteristics of reported anomalies in the collected traffic signals to build a classification framework covering these and other cyber events. The applications build flow rules using network topology information provided by the control plane [1].
2014-01-01
Background Most evidence on the effect of collaborative care for depression is derived in the selective environment of randomised controlled trials. In collaborative care, practice nurses may act as case managers. The Primary Care Services Improvement Project (PCSIP) aimed to assess the cost-effectiveness of alternative models of practice nurse involvement in a real world Australian setting. Previous analyses have demonstrated the value of high level practice nurse involvement in the management of diabetes and obesity. This paper reports on their value in the management of depression. Methods General practices were assigned to a low or high model of care based on observed levels of practice nurse involvement in clinical-based activities for the management of depression (i.e. percentage of depression patients seen, percentage of consultation time spent on clinical-based activities). Linked, routinely collected data were used to determine patient-level depression outcomes (proportion of depression-free days) and health service usage costs. Standardised depression assessment tools were not routinely used; therefore, a classification framework to determine the patient’s depressive state was developed using proxy measures (e.g. symptoms, medications, referrals, hospitalisations and suicide attempts). Regression analyses of costs and depression outcomes were conducted, using propensity weighting to control for potential confounders. Results Capacity to determine depressive state using the classification framework was dependent upon the level of detail provided in medical records. While antidepressant medication prescriptions were a strong indicator of depressive state, they could not be relied upon as the sole measure.
Propensity score weighted analyses of total depression-related costs and depression outcomes found that the high level model of care cost more (95% CI: -$314.76 to $584) and resulted in 5% fewer depression-free days (95% CI: -0.15 to 0.05), compared to the low level model. However, this result was highly uncertain, as shown by the confidence intervals. Conclusions Classification of patients’ depressive state was feasible, but time consuming, using the proposed classification framework. Further validation of the framework is required. Unlike the analyses of diabetes and obesity management, no significant differences in the proportion of depression-free days or health service costs were found between the alternative levels of practice nurse involvement. PMID:24422622
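The propensity-weighted comparison described above can be illustrated with a generic inverse-probability-weighting sketch on simulated data. Everything below (variable names, data-generating process, scikit-learn as the fitting tool) is invented for illustration; the study used linked routine data, not a simulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated data: X = patient covariates, z = model of care (1 = high-level
# nurse involvement), y = proportion of depression-free days.
n = 500
X = rng.normal(size=(n, 3))
z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # confounded assignment
y = 0.6 + 0.1 * X[:, 0] + 0.05 * rng.normal(size=n)  # no true effect of z

# 1. Estimate propensity scores P(z = 1 | X).
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]

# 2. Inverse-probability weights balance covariates across the two groups.
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))

# 3. Weighted difference in mean outcome (high minus low model of care).
effect = (np.average(y[z == 1], weights=w[z == 1])
          - np.average(y[z == 0], weights=w[z == 0]))
print(round(effect, 3))  # close to zero: weighting removes the confounding
```

The unweighted difference in means would be biased upward here, because the same covariate drives both assignment and outcome; weighting recovers the (null) effect, which parallels the study's null finding for depression.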
Boosting CNN performance for lung texture classification using connected filtering
NASA Astrophysics Data System (ADS)
Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves
2018-02-01
Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a 10-patient database of various lung pathologies, showing an increase of 10% in true positive rate (on average for all the cases) with respect to the state-of-the-art cascade of CNNs for this task.
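The paper's locally connected filtering operates on grayscale CT at multiple resolutions; as a rough illustration of the underlying idea only, the sketch below removes small connected components from an invented binary image with SciPy, analogous to suppressing small vessel cross-sections before texture classification:

```python
import numpy as np
from scipy import ndimage

# Toy binary "slice": one large texture patch plus small vessel-like specks.
img = np.zeros((20, 20), dtype=bool)
img[2:12, 2:12] = True  # large pattern of interest (100 pixels)
img[15, 15] = True      # small vessel-like specks
img[17, 3] = True

# Connected filtering: label components, keep only those above an area
# threshold, discarding the small structures.
labels, n = ndimage.label(img)
sizes = ndimage.sum(img, labels, index=range(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 20) + 1)
print(n, int(keep.sum()))  # → 3 100  (3 components found, 100 pixels kept)
```

True attribute/area openings on grayscale images generalize this component-size criterion to every threshold level at once, which is closer to what the paper applies.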
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
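The bias that motivates the elaborated model can be reproduced in a few lines: when per-unit correct-classification probabilities vary on the logit scale and misclassification is ignored, naive category frequencies computed from the observed labels are biased. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def expit(x):
    return 1 / (1 + np.exp(-x))

# Binary classification of sampling units where the probability of a correct
# call varies by unit, logit-normally, as in the elaborated Royle-Link model.
n_units = 10_000
true_state = rng.binomial(1, 0.3, size=n_units)                   # true frequency 0.3
p_correct = expit(rng.normal(loc=2.0, scale=1.0, size=n_units))   # unit-level accuracy
observed = np.where(rng.random(n_units) < p_correct,
                    true_state, 1 - true_state)                   # observed labels

# Naive estimate of the category-1 frequency ignores misclassification and
# is pulled toward 0.5 relative to the truth.
print(round(true_state.mean(), 3), round(observed.mean(), 3))
```

The naive observed frequency overshoots 0.3 because errors flip more 0s to 1s than the reverse; a model that integrates over the latent per-unit classification probabilities (as the paper does via MCMC) is needed to remove this bias.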
Pipeline Processing with an Iterative, Context-based Detection Model
2014-04-19
The framework strips the incoming data stream of repeating and irrelevant signals prior to running primary detectors, adaptive beamforming, and matched field processing. Keywords: pattern detectors, correlation detectors, subspace detectors, matched field detectors, nuclear explosion monitoring.
A review of supervised object-based land-cover image classification
NASA Astrophysics Data System (ADS)
Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue
2017-08-01
Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. 
In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial vehicle) or agricultural sites where it also correlates with the number of targeted classes. More than 95.6% of studies involve an area less than 300 ha, and the spatial resolution of images is predominantly between 0 and 2 m. Furthermore, we identify some methods that may advance supervised object-based image classification. For example, deep learning and type-2 fuzzy techniques may further improve classification accuracy. Lastly, scientists are strongly encouraged to report results of uncertainty studies to further explore the effects of varied factors on supervised object-based image classification.
NASA Astrophysics Data System (ADS)
Zhao, Lili; Yin, Jianping; Yuan, Lihuan; Liu, Qiang; Li, Kuan; Qiu, Minghui
2017-07-01
Automatic detection of abnormal cells from cervical smear images is in high demand for the annual diagnosis of women's cervical cancer. For this medical cell recognition problem, there are three different feature sections, namely cytology morphology, nuclear chromatin pathology and region intensity. The challenges of this problem lie in combining these features and in performing classification accurately and efficiently. Thus, we propose an efficient abnormal cervical cell detection system based on multi-instance extreme learning machine (MI-ELM) to address both challenges in one unified framework. MI-ELM is one of the most promising supervised learning classifiers, able to handle several feature sections and realistic classification problems analytically. Experiment results over the Herlev dataset demonstrate that the proposed method outperforms three traditional methods for two-class classification in terms of higher accuracy and shorter running time.
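MI-ELM extends the basic extreme learning machine to multi-instance bags; the sketch below shows only the basic single-instance ELM (random hidden layer, output weights solved analytically by least squares) on an invented two-class problem, not the multi-instance variant used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_fit(X, y, n_hidden=50, rng=rng):
    """Basic ELM: random hidden weights, analytic least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y  # output weights, no iterative training
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Two-class toy problem (labels +1 / -1), e.g. normal vs abnormal cells.
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
model = elm_fit(X, y)
acc = np.mean(np.sign(elm_predict(X, model)) == y)
print(round(acc, 2))  # high training accuracy on this separable toy set
```

The "analytical" speed claimed for (MI-)ELM comes from the pseudoinverse step: there is no gradient descent, only one linear solve after the random projection.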
Graph-based sensor fusion for classification of transient acoustic signals.
Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal
2015-03-01
Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munitions (e.g., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally, our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.
NASA Astrophysics Data System (ADS)
Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore
2017-10-01
This study evaluates the impact of three Feature Selection (FS) algorithms in an Object Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and RF classifiers. The results demonstrate that the SVM and KNN classifiers are the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed the best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and easing interpretation.
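The RF-based recursive feature elimination tested above can be sketched with scikit-learn. The synthetic data stands in for OBIA segment features, and all parameter values here are illustrative, not those of the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for segment features: a few informative bands/indices
# hidden among many redundant ones.
X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           n_redundant=10, random_state=0)

# RF-based RFE: repeatedly fit an RF, drop the least important feature.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X, y)
X_small = selector.transform(X)

# Train an SVM on the reduced set (SVM was the classifier most sensitive to FS).
svm_acc = SVC().fit(X_small, y).score(X_small, y)
print(X_small.shape, round(svm_acc, 2))  # 8 features retained
```

This mirrors the study's finding operationally: the FS step shrinks a 40-dimensional feature space to a handful of features on which a margin-based classifier can be trained without the noise of redundant dimensions.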
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Decision Manifold Approximation for Physics-Based Simulations
NASA Technical Reports Server (NTRS)
Wong, Jay Ming; Samareh, Jamshid A.
2016-01-01
With the recent surge of success in big-data driven deep learning, many frameworks focus on architecture design and the use of massive databases. However, in some scenarios massive sets of data may be difficult, and in some cases infeasible, to acquire. In this paper we discuss a trajectory-based framework that quickly learns the underlying decision manifold of binary simulation classifications while judiciously selecting exploratory target states to minimize the number of required simulations. Furthermore, we draw particular attention to the application of simulation prediction, idealized to the case where failures in simulations can be predicted and avoided, providing machine intelligence to novice analysts. We demonstrate this framework on various forms of simulations and discuss its efficacy.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
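The core of Dempster-Shafer evidential reasoning is Dempster's rule of combination, which merges mass functions from independent sources while renormalizing away conflicting mass. A minimal pure-Python sketch with invented land-cover masses (the hypothesis labels and numbers are illustrative only, not MERCURY⊕'s internals):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets of hypotheses; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # empty intersection: contradictory evidence
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two evidence sources about a pixel's cover type: tundra (T) vs rock (R).
# Note mass can sit on the ignorance set {T, R}, which Bayesian priors cannot express.
m1 = {frozenset("T"): 0.6, frozenset("TR"): 0.4}
m2 = {frozenset("T"): 0.5, frozenset("R"): 0.3, frozenset("TR"): 0.2}
m = dempster_combine(m1, m2)
print({tuple(sorted(s)): round(v, 3) for s, v in m.items()})
# → {('T',): 0.756, ('R',): 0.146, ('R', 'T'): 0.098}
```

The ability to assign support to sets of hypotheses, and to leave some mass uncommitted, is what makes the framework suitable for the disparate, partially reliable data sources the abstract describes.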
Forster, Alan J; Bernard, Burnand; Drösler, Saskia E; Gurevich, Yana; Harrison, James; Januel, Jean-Marie; Romano, Patrick S; Southern, Danielle A; Sundararajan, Vijaya; Quan, Hude; Vanderloo, Saskia E; Pincus, Harold A; Ghali, William A
2017-08-01
To assess the utility of the proposed World Health Organization (WHO)'s International Classification of Disease (ICD) framework for classifying patient safety events. Independent classification of 45 clinical vignettes using a web-based platform. The WHO's multi-disciplinary Quality and Safety Topic Advisory Group. The framework consists of three concepts: harm, cause and mode. We defined a concept as 'classifiable' if more than half of the raters could assign an ICD-11 code for the case. We evaluated reasons why cases were nonclassifiable using a qualitative approach. Harm was classifiable in 31 of 45 cases (69%). Of these, only 20 could be classified according to cause and mode. Classifiable cases were those in which a clear cause and effect relationship existed (e.g. medication administration error). Nonclassifiable cases were those without clear causal attribution (e.g. pressure ulcer). Of the 14 cases in which harm was not evident (31%), only 5 could be classified according to cause and mode and represented potential adverse events. Overall, nine cases (20%) were nonclassifiable using the three-part patient safety framework and contained significant ambiguity in the relationship between healthcare outcome and putative cause. The proposed framework enabled classification of the majority of patient safety events. Cases in which potentially harmful events did not cause harm were not classifiable; additional code categories within the ICD-11 are one proposal to address this concern. Cases with ambiguity in cause and effect relationship between healthcare processes and outcomes remain difficult to classify. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim
2015-07-30
Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is a need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique, and a wide range of time and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection, and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88, respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92, respectively. The DSVM framework outperforms classification with the more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
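The dendrogram that fixes the order of the binary SVM decisions can be obtained by hierarchical clustering of per-stage feature summaries. A sketch with invented centroids (the paper derives the tree from real polysomnographic features, and trains one binary SVM per split):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(4)

# Hypothetical mean feature vectors for five sleep stages; clustering these
# centroids yields the dendrogram that orders the binary decisions.
stages = ["Wake", "REM", "N1", "N2", "N3"]
centroids = rng.normal(size=(5, 10))  # 10 invented time/frequency features

Z = linkage(centroids, method="ward")
# Each of the n-1 rows of Z merges two clusters; the DSVM descends from the
# root split toward a leaf stage, applying one binary SVM at every merge.
print(Z.shape)  # → (4, 4)
```

Classifying top-down along such a tree replaces one hard 5-way decision with a cascade of easier binary ones, which is the design choice the abstract credits for outperforming "one-against-all" SVMs.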
Neighbourhood-scale urban forest ecosystem classification.
Steenberg, James W N; Millward, Andrew A; Duinker, Peter N; Nowak, David J; Robinson, Pamela J
2015-11-01
Urban forests are now recognized as essential components of sustainable cities, but there remains uncertainty concerning how to stratify and classify urban landscapes into units of ecological significance at spatial scales appropriate for management. Ecosystem classification is an approach that entails quantifying the social and ecological processes that shape ecosystem conditions into logical and relatively homogeneous management units, making the potential for ecosystem-based decision support available to urban planners. The purpose of this study is to develop and propose a framework for urban forest ecosystem classification (UFEC). The multifactor framework integrates 12 ecosystem components that characterize the biophysical landscape, built environment, and human population. This framework is then applied at the neighbourhood scale in Toronto, Canada, using hierarchical cluster analysis. The analysis used 27 spatially-explicit variables to quantify the ecosystem components in Toronto. Twelve ecosystem classes were identified in this UFEC application. Across the ecosystem classes, tree canopy cover was positively related to economic wealth, especially income. However, education levels and homeownership were occasionally inconsistent with the expected positive relationship with canopy cover. Open green space and stocking had variable relationships with economic wealth and were more closely related to population density, building intensity, and land use. The UFEC can provide ecosystem-based information for greening initiatives, tree planting, and the maintenance of the existing canopy. Moreover, its use has the potential to inform the prioritization of limited municipal resources according to ecological conditions and to concerns of social equity in the access to nature and distribution of ecosystem service supply. Copyright © 2015 Elsevier Ltd. All rights reserved.
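The hierarchical cluster analysis used to derive the twelve ecosystem classes can be sketched as Ward clustering of a neighbourhood-by-variable matrix, then cutting the tree at twelve groups. Here random data stands in for Toronto's 27 spatially-explicit variables, and the neighbourhood count is invented:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)

# Placeholder matrix: 140 neighbourhoods described by 27 standardized
# ecosystem variables (canopy cover, income, building intensity, ...).
X = rng.normal(size=(140, 27))

# Ward hierarchical clustering, then cut the dendrogram into 12 classes,
# mirroring the UFEC application described above.
Z = linkage(X, method="ward")
classes = fcluster(Z, t=12, criterion="maxclust")
print(len(set(classes)))  # → 12
```

In practice the variables would be standardized first so that no single unit (dollars, percent canopy, persons per hectare) dominates the distance metric, a step the random placeholder data sidesteps.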
Analysis of composition-based metagenomic classification.
Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro
2012-01-01
An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found out that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values for n result in sparse frequency vectors that give different representations to metagenomic fragments that are in fact similar, also leading to low configuration scores.
Regarding the similarity measure, in contrast to our expectations, the variation of the measures did not change the configuration scores significantly. Finally, the hierarchical strategy was more effective than the conventional strategy, which suggests that, instead of using a single classifier, one should adopt multiple classifiers organized as a hierarchy.
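The n-mer encoding whose length is analyzed in parameter (i) can be sketched in pure Python (a minimal version; the study's score function and similarity measures are not reproduced here):

```python
from collections import Counter
from itertools import product

def nmer_frequencies(seq, n):
    """Frequency vector of overlapping n-mers over the 4^n possible DNA
    n-mers; the encoding length n is the parameter studied above."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    keys = ["".join(p) for p in product("ACGT", repeat=n)]
    return [counts[k] / total for k in keys]

frag = "ACGTACGTAC"       # toy metagenomic fragment
v = nmer_frequencies(frag, 2)
print(len(v), round(sum(v), 3))  # → 16 1.0
```

The trade-off the abstract describes is visible in the dimensionality: with small n, the 4^n-dimensional vectors conflate distinct sequences; with large n, the 4^n bins explode and the vectors become sparse, so similar fragments land in disjoint bins.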
Retinal artery-vein classification via topology estimation
Estrada, Rolando; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Tomasi, Carlo; Farsiu, Sina
2015-01-01
We propose a novel, graph-theoretic framework for distinguishing arteries from veins in a fundus image. We make use of the underlying vessel topology to better classify small and midsized vessels. We extend our previously proposed tree topology estimation framework by incorporating expert, domain-specific features to construct a simple, yet powerful global likelihood model. We efficiently maximize this model by iteratively exploring the space of possible solutions consistent with the projected vessels. We tested our method on four retinal datasets and achieved classification accuracies of 91.0%, 93.5%, 91.7%, and 90.9%, outperforming existing methods. Our results show the effectiveness of our approach, which is capable of analyzing the entire vasculature, including peripheral vessels, in wide field-of-view fundus photographs. This topology-based method is a potentially important tool for diagnosing diseases with retinal vascular manifestation. PMID:26068204
Framework for evaluating disease severity measures in older adults with comorbidity.
Boyd, Cynthia M; Weiss, Carlos O; Halter, Jeff; Han, K Carol; Ershler, William B; Fried, Linda P
2007-03-01
Accounting for the influence of concurrent conditions on health and functional status for both research and clinical decision-making purposes is especially important in older adults. Although approaches to classifying severity of individual diseases and conditions have been developed, the utility of these classification systems has not been evaluated in the presence of multiple conditions. We present a framework for evaluating severity classification systems for common chronic diseases. The framework evaluates the: (a) goal or purpose of the classification system; (b) physiological and/or functional criteria for severity graduation; and (c) potential reliability and validity of the system balanced against burden and costs associated with classification. Approaches to severity classification of individual diseases were not originally conceived for the study of comorbidity. Therefore, they vary greatly in terms of objectives, physiological systems covered, level of severity characterization, reliability and validity, and costs and burdens. Using different severity classification systems to account for differing levels of disease severity in a patient with multiple diseases, or assessing global disease burden, may be challenging. Most approaches to severity classification are not adequate to address comorbidity. Nevertheless, thoughtful use of some existing approaches and refinement of others may advance the study of comorbidity and diagnostic and therapeutic approaches to patients with multimorbidity.
A Classification of Statistics Courses (A Framework for Studying Statistical Education)
ERIC Educational Resources Information Center
Turner, J. C.
1976-01-01
A classification of statistics courses is presented, with main categories of "course type," "methods of presentation," "objectives," and "syllabus." Examples and suggestions for uses of the classification are given. (DT)
Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark
2015-01-01
Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-01-01
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624
Real-time video analysis for retail stores
NASA Astrophysics Data System (ADS)
Hassan, Ehtesham; Maurya, Avinash K.
2015-03-01
With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment, which play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimates in a retail store require correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named the graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we have defined a novel, computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimation. The system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
Automatic classification of animal vocalizations
NASA Astrophysics Data System (ADS)
Clemins, Patrick J.
2005-11-01
Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing to bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, the framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts.
The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
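The per-class HMM scoring at the heart of such a framework can be sketched with a toy discrete-emission model: one HMM is trained per call type, and a vocalization is assigned to the model under which it is most likely. The gPLP features and continuous emissions described above are not reproduced here; the two call-type names and all probabilities below are invented for illustration.

```python
import numpy as np

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Forward algorithm: log P(obs | HMM) for a discrete-emission HMM."""
    alpha = log_pi + log_B[:, obs[0]]                       # per-state log prob
    for o in obs[1:]:
        # marginalize over the previous state, then emit the next symbol
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Supervised HMM classification: the highest-likelihood model wins."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two hypothetical call types: one mostly emits symbol 0, the other symbol 1.
uniform = np.log(np.full((2, 2), 0.5))
models = {
    "rumble": (np.log([0.5, 0.5]), uniform, np.log([[0.8, 0.2], [0.7, 0.3]])),
    "whistle": (np.log([0.5, 0.5]), uniform, np.log([[0.2, 0.8], [0.3, 0.7]])),
}
```

A symbol sequence dominated by 0s is then scored higher by the "rumble" model, and one dominated by 1s by the "whistle" model.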
Baldacchino, Tara; Jacobs, William R; Anderson, Sean R; Worden, Keith; Rowson, Jennifer
2018-01-01
This contribution presents a novel methodology for myoelectric control using surface electromyographic (sEMG) signals recorded during finger movements. A multivariate Bayesian mixture of experts (MoE) model is introduced which provides a powerful method for modeling force regression at the fingertips, while also performing finger movement classification as a by-product of the modeling algorithm. Bayesian inference of the model allows uncertainties to be naturally incorporated into the model structure. The method is tested using data from the publicly released NinaPro database, which consists of sEMG recordings for 6-degree-of-freedom force activations from 40 intact subjects. The results demonstrate that the MoE model achieves performance similar to the benchmark set by the authors of NinaPro for finger force regression. Additionally, inherent to the Bayesian framework is the inclusion of uncertainty in the model parameters, naturally providing confidence bounds on the force regression predictions. Furthermore, the integrated clustering step allows a detailed investigation into classification of the finger movements without incurring any extra computational effort. Subsequently, a systematic approach to assessing the importance of the number of electrodes needed for accurate control is performed via sensitivity analysis techniques. A slight degradation in regression performance is observed for a reduced number of electrodes, while classification performance is unaffected.
Mäenpää, Helena; Autti-Rämö, Ilona; Varho, Tarja; Forsten, Wivi; Haataja, Leena
2017-03-01
To develop a national consensus on outcome measures that define functional ability in children with cerebral palsy (CP) according to the International Classification of Functioning, Disability and Health (ICF) framework. The project started in 2008 in neuropaediatric units of two university hospitals and one outpatient clinic. Each professional group selected representatives to be knowledge brokers for their own specialty. Based on the evidence, expert opinion, and the ICF framework, multiprofessional teams selected the most valid measures used in clinical practice (2009-2010). Data from 269 children with CP were analysed, classified by the Gross Motor Function Classification System, Manual Ability Classification System, and Communication Function Classification System, and evaluated. The process aimed at improving and unifying clinical practice in Finland through a national consensus on the core set of measures. The selected measures were presented by professional groups, and consensus was reached on the recommended core set of measures to be used in all hospitals treating children with CP in Finland. A national consensus on relevant and feasible measures is essential for identifying differences in the effectiveness of local practices, and for conducting multisite intervention studies. This project showed that multiprofessional rehabilitation practices can be improved through respect for and inclusion of everyone involved. © 2016 Mac Keith Press.
A subgeneric classification of Selaginella (Selaginellaceae).
Weststrand, Stina; Korall, Petra
2016-12-01
The lycophyte family Selaginellaceae includes approximately 750 herbaceous species worldwide, with the main species richness in the tropics and subtropics. We recently presented a phylogenetic analysis of Selaginellaceae based on DNA sequence data and, with the phylogeny as a framework, the study discussed the character evolution of the group focusing on gross morphology. Here we translate these findings into a new classification. To present a robust and useful classification, we identified well-supported monophyletic groups from our previous phylogenetic analysis of 223 species, which together represent the diversity of the family with respect to morphology, taxonomy, and geographical distribution. Care was taken to choose groups with supporting morphology. In this classification, we recognize a single genus Selaginella and seven subgenera: Selaginella, Rupestrae, Lepidophyllae, Gymnogynum, Exaltatae, Ericetorum, and Stachygynandrum. The subgenera are all well supported based on analysis of DNA sequence data and morphology. A key to the subgenera is presented. Our new classification is based on a well-founded hypothesis of the evolutionary relationships of Selaginella, and each subgenus can be identified by a suite of morphological features, most of them possible to study in the field. Our intention is that the classification will be useful not only to experts in the field, but also to a broader audience. © 2016 Weststrand and Korall. Published by the Botanical Society of America. This work is licensed under a Creative Commons Attribution License (CC-BY 4.0).
On Departures from Independence in Cross-Classifications.
ERIC Educational Resources Information Center
Case, C. Marston
This note is concerned with ideas and problems involved in the cross-classification of observations on a given population, especially two-dimensional cross-classifications. Main objectives of the note include: (1) establishment of a conceptual framework for the characterization and comparison of cross-classifications, (2) discussion of existing methods…
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John
2018-05-01
This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSI) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal
2015-05-01
In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements in Dice values ( ), for voxel-based and lesion-based sensitivity values ( ), and positive predictive rates ( and respectively) are shown when the proposed method is compared to the method without regional information, and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.
Main and interactive effects of watershed storage and forest fragmentation on watershed exports, habitat quality, community composition and food-web relationships were compared within and across two hydrogeomorphic regions (HGM, North Shore Highlands and Lake Superior clay plains/...
Factors Associated with Leisure Activity among Young Adults with Developmental Disabilities
ERIC Educational Resources Information Center
Van Naarden Braun, Kim; Yeargin-Allsopp, Marshalyn; Lollar, Donald
2006-01-01
The framework of the International Classification of Functioning, Disability, and Health (ICF) was applied to examine the factors associated with childhood impairment and leisure activity. Information on leisure activity was obtained using a structured questionnaire from a population-based cohort of young adults with childhood impairment. The…
Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso
2017-03-15
Improving the effectiveness of classifying spatial shape features from 3D lidar data is highly relevant because such classification is widely used as a fundamental step towards the higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, where points in each non-overlapping voxel of a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature-vector definitions based on principal component analysis for scatter, tubular, and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene-processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood computation.
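The eigenvalue-based scatter/tubular/planar distinction described above can be sketched per voxel with plain PCA on the points in a support region. The saliency ratios below are a common simplification, not the paper's five exact feature-vector definitions, and the synthetic point sets are invented for illustration.

```python
import numpy as np

def shape_features(points):
    """PCA on one voxel's support region: sorted covariance eigenvalues
    l1 >= l2 >= l3 separate tubular, planar, and scatter shapes."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    linearity = (lam[0] - lam[1]) / lam[0]   # dominant single direction
    planarity = (lam[1] - lam[2]) / lam[0]   # two comparable directions
    scatter = lam[2] / lam[0]                # all three comparable
    return linearity, planarity, scatter

def label_shape(points):
    l, p, s = shape_features(points)
    return ("tubular", "planar", "scatter")[int(np.argmax((l, p, s)))]

# Synthetic stand-ins for voxel contents: a noisy line, plane, and blob.
rng = np.random.default_rng(0)
line = np.column_stack([np.linspace(0, 1, 200),
                        0.01 * rng.standard_normal(200),
                        0.01 * rng.standard_normal(200)])
plane = np.column_stack([rng.uniform(size=200),
                         rng.uniform(size=200),
                         0.01 * rng.standard_normal(200)])
ball = 0.1 * rng.standard_normal((200, 3))
```

Computing these three numbers once per voxel, instead of a nearest-neighbor search per point, is what makes the voxel-based neighborhood cheap.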
Pombo, Nuno; Garcia, Nuno; Bousson, Kouamana
2017-03-01
Sleep apnea syndrome (SAS), which can significantly decrease quality of life, is associated with major health risks such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. Thus, it is relevant and timely to present a systematic review describing significant applications in the framework of computational intelligence-based SAS detection, including its performance, beneficial and challenging effects, and modeling for decision-making on multiple scenarios. This study aims to systematically review the literature on systems for the detection and/or prediction of apnea events using a classification model. The forty-five included studies revealed a combination of classification techniques for the diagnosis of apnea: threshold-based (14.75%) and machine learning (ML) models (85.25%). The ML models, clustered in a mind map, include neural networks (44.26%), regression (4.91%), instance-based methods (11.47%), Bayesian algorithms (1.63%), reinforcement learning (4.91%), dimensionality reduction (8.19%), ensemble learning (6.55%), and decision trees (3.27%). A classification model should be auto-adaptive and free of dependency on external human action. In addition, the accuracy of classification models is related to effective feature selection. New high-quality studies based on randomized controlled trials and validation of models using large and varied samples of data are recommended. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Ottersen, Trygve; Grépin, Karen A; Henderson, Klara; Pinkstaff, Crossley Beth; Norheim, Ole Frithjof; Røttingen, John-Arne
2018-01-01
Abstract The distributions of income and health within and across countries are changing. This challenges the way donors allocate development assistance for health (DAH), and particularly the role of gross national income per capita (GNIpc) in classifying countries to determine whether they are eligible to receive assistance and how much they receive. Informed by a literature review and stakeholder consultations and interviews, we developed a stepwise approach to the design and assessment of country classification frameworks for the allocation of DAH, with emphasis on critical value choices. We devised 25 frameworks, all of which combined GNIpc and at least one other indicator into an index. Indicators were selected and assessed based on relevance, salience, validity, consistency, and availability and timeliness, where relevance concerned the extent to which the indicator represented a country's health needs, domestic capacity, the expected impact of DAH, or equity. We assessed how the use of the different frameworks changed the rankings of low- and middle-income countries relative to a country's ranking based on GNIpc alone. We found that stakeholders generally considered needs to be the most important concern to be captured by classification frameworks, followed by inequality, expected impact, and domestic capacity. We further found that integrating a health-needs indicator with GNIpc makes a significant difference for many countries and country categories, especially middle-income countries with a high burden of unmet health needs, while the choice of specific indicator makes less difference. This, together with the assessments of relevance, salience, validity, consistency, and availability and timeliness, suggests that donors have reason to include a health-needs indicator in the initial classification of countries. It specifically suggests that life expectancy and the disability-adjusted life year rate are indicators worth considering.
Indicators related to other concerns may be mainly relevant at different stages of the decision-making process, require better data, or both. PMID:29415238
Schulz, S.; Romacker, M.; Hahn, U.
1998-01-01
The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics. PMID:9929335
Age and gender classification in the wild with unsupervised feature learning
NASA Astrophysics Data System (ADS)
Wan, Lihong; Huo, Hong; Fang, Tao
2017-03-01
Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form the global face representation. Finally, linear discriminant analysis with a part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performance further. Experiments on three challenging databases, namely, Labeled Faces in the Wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
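The whitening-plus-spherical-k-means filter learning step can be sketched as follows. Patch extraction and the later convolution/pooling stages are omitted, and the ZCA epsilon, iteration count, and random-data stand-in for image patches are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

def learn_filters(patches, k=8, iters=10, eps=1e-5):
    """ZCA-whiten flattened patches, then run spherical k-means: centroids
    stay unit-norm and assignment maximizes the dot product (cosine)."""
    X = patches - patches.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov(X.T))
    X = X @ (U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T)   # ZCA whitening
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    rng = np.random.default_rng(0)
    D = X[rng.choice(len(X), k, replace=False)]           # init from data
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)               # cosine assignment
        for j in range(k):
            members = X[assign == j]
            if len(members):
                v = members.sum(axis=0)
                D[j] = v / (np.linalg.norm(v) + 1e-12)    # re-normalize
    return D

rng = np.random.default_rng(1)
patches = rng.standard_normal((500, 16))   # stand-in for 4x4 image patches
filters = learn_filters(patches, k=4)
```

The learned rows of `D` then act as the convolution filters applied to the input image.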
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel using three dates of data increases the Overall Accuracy (OA) by up to 3% compared to a linear kernel function, and by up to 1% compared to a 3rd-degree polynomial kernel function.
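The kernel comparison described above can be sketched with scikit-learn on synthetic two-class data. The two-dimensional ring geometry below is invented to stand in for the multi-temporal H/A/α features and to make the nonlinear-kernel advantage visible; it is not the UAVSAR data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two synthetic crop classes on concentric rings: radially separable,
# so a nonlinear kernel should beat a linear one.
n = 400
r = np.r_[rng.uniform(0.0, 1.0, n // 2), rng.uniform(1.5, 2.5, n // 2)]
theta = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

# Same comparison as in the paper: linear vs. polynomial vs. RBF kernels,
# scored by cross-validated accuracy.
scores = {}
for name, clf in {
    "linear": SVC(kernel="linear"),
    "poly3": SVC(kernel="poly", degree=3),
    "rbf": SVC(kernel="rbf", gamma="scale"),
}.items():
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
```

On this toy geometry the RBF kernel recovers the radial boundary that the linear kernel cannot express.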
On-line classification of pollutants in water using wireless portable electronic noses.
Herrero, José Luis; Lozano, Jesús; Santos, José Pedro; Suárez, José Ignacio
2016-06-01
A portable electronic nose with database connection for on-line classification of pollutants in water is presented in this paper. It is a hand-held, lightweight, powered instrument with wireless communications that is capable of standalone operation. A network of similar devices can be configured for distributed measurements. It uses four resistive microsensors and headspace sampling to extract the volatile compounds from glass vials. The measurement and control program has been developed in LabVIEW using the database connection toolkit to send the sensor data to a server for training and classification with Artificial Neural Networks (ANNs). Using a server instead of the e-nose's microprocessor increases the memory capacity and computing power of the classifier and allows external users to perform data classification. To address this challenge, this paper also proposes a web-based framework (based on RESTful web services, Asynchronous JavaScript and XML, and JavaScript Object Notation) that allows remote users to train ANNs and request classification values regardless of the user's location and the type of device used. Results show that the proposed prototype can discriminate the samples measured (blank water, acetone, toluene, ammonia, formaldehyde, hydrogen peroxide, ethanol, benzene, dichloromethane, acetic acid, xylene, and dimethylacetamide) with a 94% classification success rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sanches-Ferreira, Manuela; Simeonsson, Rune J; Silveira-Maia, Mónica; Alves, Sílvia; Tavares, Ana; Pinheiro, Sara
2013-05-01
The International Classification of Functioning, Disability and Health (ICF) was introduced in Portuguese education law as the compulsory system to guide eligibility policy and practice in special education. This paper describes the implementation of the ICF and its utility in the assessment process and eligibility determination of students for special education. A study to evaluate the utility of the ICF was commissioned by the Portuguese Ministry of Education and carried out by an external evaluation team. A document analysis was made of the assessment and eligibility processes of 237 students, selected from a nationally representative sample. The results provided support for the use of the ICF in student assessment and in the multidimensional approach of generating student functioning profiles as the basis for determining eligibility. The use of the ICF contributed to the differentiation of eligible and non-eligible students based on their functioning profiles. The findings demonstrate the applicability of the ICF framework and classification system for determining eligibility for special education services on the basis of student functioning rather than medical or psychological diagnoses. Implications of the ICF framework for special education policy are as follows: • The functional perspective of the ICF offers a more comprehensive, holistic assessment of student needs than medical diagnoses. • ICF-based assessment of the nature and severity of functioning can serve as the basis for determining eligibility for special education and habilitation. • Profiles of functioning can support decision making in designing appropriate educational interventions for students.
Classifying Black Hole States with Machine Learning
NASA Astrophysics Data System (ADS)
Huppenkothen, Daniela
2018-01-01
Galactic black hole binaries are known to go through different states with apparent signatures in both X-ray light curves and spectra, leading to important implications for accretion physics as well as our knowledge of General Relativity. Existing frameworks of classification are usually based on human interpretation of low-dimensional representations of the data, and generally only apply to fairly small data sets. Machine learning, in contrast, allows for rapid classification of large, high-dimensional data sets. In this talk, I will report on advances made in classification of states observed in Black Hole X-ray Binaries, focusing on the two sources GRS 1915+105 and Cygnus X-1, and show both the successes and limitations of using machine learning to derive physical constraints on these systems.
A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.
Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen
2014-01-01
Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.
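The abstract does not spell out the weighted support vector machine it builds on. As a rough illustration only, here is a minimal weighted linear SVM trained by subgradient descent; the function name, hyperparameters, and optimizer are this sketch's assumptions, not the authors' implementation (which handles censored survival data):

```python
import numpy as np

def weighted_linear_svm(X, y, weights, lam=0.01, lr=0.1, epochs=300):
    """Minimal weighted linear SVM trained by subgradient descent.

    Minimizes lam/2 * ||w||^2 + mean_i weights[i] * hinge(y_i * (w.x_i + b)),
    so per-sample weights scale each sample's contribution to the hinge loss.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                       # margin violators
        grad_w = lam * w - (weights[viol] * y[viol]) @ X[viol] / n
        grad_b = -np.sum(weights[viol] * y[viol]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

In the survival setting, the weights would encode censoring information; here they simply rescale the hinge loss per sample.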
Optimized hardware framework of MLP with random hidden layers for classification applications
NASA Astrophysics Data System (ADS)
Zyarah, Abdullah M.; Ramesh, Abhishek; Merkel, Cory; Kudithipudi, Dhireesha
2016-05-01
Multilayer Perceptron Networks with random hidden layers are very efficient at automatic feature extraction and offer significant performance improvements in the training process. They essentially employ a large collection of fixed, random features, and are expedient for form-factor constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, with a customized building block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point operations for area efficiency. The design is validated for classification on two different datasets. An accuracy of ~90% was observed for the MNIST dataset and 75% for gender classification on the LFW dataset. The hardware achieves a 299× speed-up over the corresponding software realization.
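The efficiency of MLPs with random hidden layers comes from training only the output layer: the hidden weights stay fixed and random, so the readout reduces to a least-squares solve. A minimal software sketch of this idea (extreme-learning-machine style; this is not the paper's CORDIC hardware, and all sizes and seeds here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_hidden_mlp(X, Y, n_hidden=50):
    """Train only the output layer; hidden weights stay random."""
    W_h = rng.normal(size=(X.shape[1], n_hidden))
    b_h = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_h + b_h)                    # fixed random features
    W_o, *_ = np.linalg.lstsq(H, Y, rcond=None)   # closed-form readout
    return W_h, b_h, W_o

def predict(model, X):
    W_h, b_h, W_o = model
    return np.tanh(X @ W_h + b_h) @ W_o
```

Because only a linear system is solved, there is no backpropagation at all, which is what makes fixed-point hardware realizations attractive.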
[Construction of biopharmaceutics classification system of Chinese materia medica].
Liu, Yang; Wei, Li; Dong, Ling; Zhu, Mei-Ling; Tang, Ming-Min; Zhang, Lei
2014-12-01
Based on the multicomponent characteristics of traditional Chinese medicine, and drawing lessons from the concepts, methods and techniques of the biopharmaceutics classification system (BCS) in the chemical drug field, this study proposes the scientific framework of a biopharmaceutics classification system of Chinese materia medica (CMMBCS). Using comparison at the multicomponent level together with a CMMBCS method for the traditional Chinese medicine as a whole, the study constructs the methodological process while setting forth the underlying academic ideas and analyzing the theory. The basic role of this system is to reveal the interactions and related absorption mechanisms of the multiple components of traditional Chinese medicine. It also provides new ideas and methods for improving the quality of Chinese materia medica and for new drug research and development.
Instruction-matrix-based genetic programming.
Li, Gang; Wang, Jin Feng; Lee, Kin Hong; Leung, Kwong-Sak
2008-08-01
In genetic programming (GP), evolving tree nodes separately would reduce the huge solution space. However, tree nodes are highly interdependent with respect to their fitness. In this paper, we propose a new GP framework, namely, instruction-matrix (IM)-based GP (IMGP), to handle their interactions. IMGP maintains an IM to evolve tree nodes and subtrees separately. IMGP extracts program trees from an IM and updates the IM with the information of the extracted program trees. As the IM actually keeps most of the information of the schemata of GP and evolves the schemata directly, IMGP is effective and efficient. Our experimental results on benchmark problems verify that IMGP is not only better than canonical GP in terms of the quality of the solutions and the number of program evaluations, but also better than some related GP algorithms. IMGP can also be used to evolve programs for classification problems. The classifiers obtained have higher classification accuracies than four other GP classification algorithms on four benchmark classification problems. The testing errors are also comparable to or better than those obtained with well-known classifiers. Furthermore, an extended version, called condition matrix for rule learning, has been used successfully to handle multiclass classification problems.
Lesion classification using clinical and visual data fusion by multiple kernel learning
NASA Astrophysics Data System (ADS)
Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf
2014-03-01
To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), a lot of effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited since they rely on correct automatic lesion detection and localization, and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct the textual descriptor of patients by extracting relevant keywords from patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely, early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields a significant improvement in classification accuracy, as compared to a classifier based on image features only.
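The abstract does not give the MKL formulation. As a simplified stand-in, the intermediate-fusion idea can be illustrated by combining a "visual" and a "textual" kernel with a fixed weight and training a kernel ridge classifier on the sum; in real MKL the combination weight itself is learned, and all names and parameters below are this sketch's assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_combined(K_visual, K_textual, y, weight=0.5, lam=1e-3):
    """Kernel ridge on a fixed-weight kernel sum (MKL would learn `weight`)."""
    K = weight * K_visual + (1 - weight) * K_textual
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def decide(K_visual_test, K_textual_test, alpha, weight=0.5):
    K = weight * K_visual_test + (1 - weight) * K_textual_test
    return np.sign(K @ alpha)
```

A weighted sum of valid kernels is itself a valid kernel, which is why modality fusion can happen at the kernel level rather than by concatenating raw features.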
Al-Masni, Mohammed A; Al-Antari, Mugahed A; Park, Jeong-Min; Gi, Geon; Kim, Tae-Yeon; Rivera, Patricio; Valarezo, Edwin; Choi, Mun-Taek; Han, Seung-Moo; Kim, Tae-Seong
2018-04-01
Automatic detection and classification of the masses in mammograms are still a big challenge and play a crucial role in assisting radiologists for accurate diagnosis. In this paper, we propose a novel Computer-Aided Diagnosis (CAD) system based on one of the regional deep learning techniques, a ROI-based Convolutional Neural Network (CNN) called You Only Look Once (YOLO). Although most previous studies only deal with classification of masses, our proposed YOLO-based CAD system can handle detection and classification simultaneously in one framework. The proposed CAD system contains four main stages: preprocessing of mammograms, feature extraction utilizing deep convolutional networks, mass detection with confidence, and finally mass classification using Fully Connected Neural Networks (FC-NNs). In this study, we utilized 600 original mammograms from the Digital Database for Screening Mammography (DDSM) and 2,400 augmented mammograms, with the information of the masses and their types, in training and testing our CAD. The trained YOLO-based CAD system detects the masses and then classifies their types into benign or malignant. Our results with five-fold cross-validation tests show that the proposed CAD system detects the mass location with an overall accuracy of 99.7%. The system also distinguishes between benign and malignant lesions with an overall accuracy of 97%. Our proposed system even works on some challenging breast cancer cases where the masses exist over the pectoral muscles or dense regions. Copyright © 2018 Elsevier B.V. All rights reserved.
Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.
2014-01-01
Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high-level semantics may impair system performance. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled with an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images, which can be considered as a potential solution to the semantic gap problem. We validate this approach in the context of the retrieval and classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, a retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index, a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available.
Classification accuracy of more than 95% was obtained on a 77-image dataset. For comparison purposes, the use of the Earth Mover's Distance (EMD), an alternative distance metric that considers all the existing relations among the terms, led to a retrieval accuracy of 0.95 and a classification accuracy of 93%, with a higher computational cost. The results provided by the presented framework are competitive with the state of the art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification. PMID:24632078
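The NDCG index used above to score retrieval has a short standard definition: gains are discounted logarithmically by rank and normalized by the ideal ordering. A plain-Python sketch:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalize DCG by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list scores 1.0; placing relevant items lower in the ranking lowers the score.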
NASA Astrophysics Data System (ADS)
Thelen, Brian T.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand/quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have significantly flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate, maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for the larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. All of this framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies, and of utilizing the pseudo-likelihood methods.
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
Tixier, Eliott; Raphel, Fabien; Lombardi, Damiano; Gerbeau, Jean-Frédéric
2017-01-01
The Micro-Electrode Array (MEA) device enables high-throughput electrophysiology measurements that are less labor-intensive than patch-clamp-based techniques. Combined with human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs), it represents a new and promising paradigm for automated and accurate in vitro drug safety evaluation. In this article, the following question is addressed: which features of the MEA signals should be measured to better classify the effects of drugs? A framework for the classification of drugs using MEA measurements is proposed. The classification is based on the ion channel blockades induced by the drugs. It relies on an in silico electrophysiology model of the MEA, a feature selection algorithm and automatic classification tools. An in silico model of the MEA is developed and is used to generate synthetic measurements. An algorithm that extracts MEA measurement features designed to perform well in a classification context is described. These features are called composite biomarkers. A state-of-the-art machine learning program is used to carry out the classification of drugs using experimental MEA measurements. The experiments are carried out using five different drugs: mexiletine, flecainide, diltiazem, moxifloxacin, and dofetilide. We show that the composite biomarkers outperform the classical ones in different classification scenarios. We show that using both synthetic and experimental MEA measurements improves the robustness of the composite biomarkers and that the classification scores are increased.
Discriminative least squares regression for multiclass classification and feature selection.
Xiang, Shiming; Nie, Feiping; Meng, Gaofeng; Pan, Chunhong; Zhang, Changshui
2012-11-01
This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of the L2,1 norm of a matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
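The ε-dragging iteration can be sketched compactly: starting from ±1 target coding, a nonnegative drag matrix M is alternated with the closed-form ridge LSR solution. This is a simplified reading of the described method (no L2,1 feature-selection term), and the hyperparameters are chosen for illustration:

```python
import numpy as np

def discriminative_lsr(X, labels, n_classes, lam=0.1, iters=30):
    """Sketch of LSR with epsilon-dragging: regression targets of different
    classes are pushed apart by a learned nonnegative drag matrix M."""
    n, d = X.shape
    Y = -np.ones((n, n_classes))
    Y[np.arange(n), labels] = 1.0                             # +/-1 coding
    B = Y.copy()                                              # drag directions
    M = np.zeros_like(Y)                                      # nonnegative drags
    solver = np.linalg.inv(X.T @ X + lam * np.eye(d)) @ X.T   # ridge pre-factor
    W = solver @ Y
    for _ in range(iters):
        T = Y + B * M                                         # dragged targets
        W = solver @ T                                        # closed-form LSR update
        M = np.maximum(B * (X @ W - Y), 0)                    # keep drags nonnegative
    return W
```

Prediction is simply the argmax over `X @ W`; the drags only ever move each class's target further from the decision boundary, which is the margin-enlarging effect the paper describes.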
Liu, Xiao; Chen, Hsinchun
2015-12-01
Social media offer insights into patients' medical problems, such as drug side effects and treatment failures. Patient reports of adverse drug events from social media have great potential to improve current practice of pharmacovigilance. However, extracting patient adverse drug event reports from social media continues to be an important challenge for health informatics research. In this study, we develop a research framework with advanced natural language processing techniques for integrated and high-performance patient-reported adverse drug event extraction. The framework consists of medical entity extraction for recognizing patient discussions of drugs and events, adverse drug event extraction with a shortest dependency path kernel based statistical learning method and semantic filtering with information from medical knowledge bases, and report source classification to tease out noise. To evaluate the proposed framework, a series of experiments was conducted on a test bed encompassing postings from major diabetes and heart disease forums in the United States. The results reveal that each component of the framework significantly contributes to its overall effectiveness. Our framework significantly outperforms prior work. Published by Elsevier Inc.
A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.
Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan
2015-01-01
The Markov random field (MRF) model is an effective method for brain tissue classification, and has been applied in MR image segmentation for decades. However, it falls short of the expected classification performance in MR images with intensity inhomogeneity because the bias field is not considered in the formulation. In this paper, we propose an interleaved method joining a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimate is based on the k-means algorithm in view of prior information on MRI. The proposed method has a salient advantage of overcoming the misclassifications of the non-interleaved MRF classification for MR images with intensity inhomogeneity. In contrast to other baseline methods, experimental results have also demonstrated the effectiveness and advantages of our algorithm via its applications to real and synthetic MR images.
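The paper's interleaved bias-field estimation is not reproduced here, but the basic MRF labeling step it builds on can be illustrated with a toy segmenter: a k-means-style initialization followed by iterated conditional modes (ICM), one standard MRF energy minimizer and not necessarily the authors' choice. All parameters are illustrative:

```python
import numpy as np

def icm_segment(image, means, beta=1.0, iters=5):
    """Toy MRF labeling by iterated conditional modes (ICM):
    data term = squared distance to a class mean,
    smoothness term = Potts penalty against disagreeing 4-neighbors."""
    labels = np.abs(image[..., None] - means).argmin(-1)   # k-means-style init
    n_classes = len(means)
    H, W = image.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                energy = (image[i, j] - means) ** 2
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        energy += beta * (np.arange(n_classes) != labels[ni, nj])
                labels[i, j] = int(np.argmin(energy))
    return labels
```

In the interleaved scheme described above, each such labeling pass would alternate with a re-estimation of the bias field before the class means are reused.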
Business model framework applications in health care: A systematic review.
Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl
2017-11-01
It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and the BPEL execution engine is adopted to run it. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
ERIC Educational Resources Information Center
Bruneau, Beverly J.
1997-01-01
Describes the Literacy Pyramid (based on the United States Department of Agriculture food pyramid), a classification of eight instructional events, which is intended as a framework for teachers to think about the purpose of various instructional formats and about organizing time for language arts instruction. (SR)
Problem Classification in Counseling.
ERIC Educational Resources Information Center
Klimes, Rudolf E.
1992-01-01
This paper describes a framework for counselors that will help them classify personal and social problems of clients for base-line and end-line comparisons. Counseling's goal, as presented here, is to help individuals for a lifetime; therapy is not seen as the giving of advice or solutions, but as a teaching process through which clients become…
Westby, Carol; Washington, Karla N
2017-07-26
The aim of this tutorial is to support speech-language pathologists' (SLPs') application of the International Classification of Functioning, Disability and Health (ICF) in assessment and treatment practices with children with language impairment. This tutorial reviews the framework of the ICF, describes the implications of the ICF for SLPs, distinguishes between students' capacity to perform a skill in a structured context and the actual performance of that skill in naturalistic contexts, and provides a case study of an elementary school child to demonstrate how the principles of the ICF can guide assessment and intervention. The Scope of Practice and Preferred Practice documents for the American Speech-Language-Hearing Association identify the ICF as the framework for practice in speech-language pathology. This tutorial will facilitate clinicians' ability to identify personal and environmental factors that influence students' skill capacity and skill performance, assess students' capacity and performance, and develop impairment-based and socially based language goals linked to Common Core State Standards that build students' language capacity and their communicative performance in naturalistic contexts.
Supervised graph hashing for histopathology image retrieval and classification.
Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin
2017-12-01
In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
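Once cells (or whole images) are encoded into binary codes, retrieval reduces to Hamming-distance ranking, which is what makes the approach scalable. A minimal sketch of that final ranking step (the graph-based hashing model itself is not reproduced, and the function name is this sketch's own):

```python
import numpy as np

def hamming_retrieve(query_codes, db_codes, k=3):
    """Rank database items by Hamming distance between binary codes.

    query_codes: (n_queries, n_bits), db_codes: (n_db, n_bits), both 0/1.
    Returns the indices of the k nearest database items per query.
    """
    dists = (query_codes[:, None, :] != db_codes[None, :, :]).sum(axis=2)
    return np.argsort(dists, axis=1)[:, :k]
```

In practice the codes would be bit-packed and compared with XOR/popcount, which is why binary codes keep both memory use and query time low on large archives.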
Using reconstructed IVUS images for coronary plaque classification.
Caballero, Karla L; Barajas, Joel; Pujol, Oriol; Rodriguez, Oriol; Radeva, Petia
2007-01-01
Coronary plaque rupture is one of the principal causes of sudden death in Western societies. Reliable diagnosis of the different plaque types is of great interest to the medical community for predicting their evolution and applying an effective treatment. To achieve this, tissue classification must be performed. Intravascular Ultrasound (IVUS) is a technique to explore the vessel walls and observe their histological properties. In this paper, a method to reconstruct IVUS images from the raw Radio Frequency (RF) data coming from the ultrasound catheter is proposed. This framework offers a normalization scheme to accurately compare different patient studies. The automatic tissue classification is based on texture analysis and the Adaptive Boosting (AdaBoost) learning technique combined with Error-Correcting Output Codes (ECOC). In this study, 9 in-vivo cases are reconstructed with 7 different parameter sets. This method improves the image-based classification rate, yielding 91% of well-detected tissue using the best parameter set. It also reduces inter-patient variability compared with the analysis of DICOM images obtained from the commercial equipment.
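The ECOC scheme combines binary learners through a code matrix: each class is assigned a codeword, each column trains one binary classifier, and decoding assigns the class whose codeword is closest in Hamming distance to the vector of binary outputs. A toy sketch with a hypothetical 3-class code matrix (not the one used in the paper):

```python
import numpy as np

# Hypothetical 3-class code matrix: rows = classes, columns = binary dichotomies
CODE = np.array([[ 1,  1,  1],
                 [ 1, -1, -1],
                 [-1,  1, -1]])

def ecoc_decode(binary_outputs):
    """Assign the class whose codeword is closest in Hamming distance
    to the sign pattern of the binary classifiers' outputs."""
    signs = np.sign(np.asarray(binary_outputs))
    return int(np.argmin((CODE != signs).sum(axis=1)))
```

Because codewords differ in several bits, a single erroneous binary learner (here, one AdaBoost dichotomizer) can often be corrected at decoding time.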
Towards exaggerated emphysema stereotypes
NASA Astrophysics Data System (ADS)
Chen, C.; Sørensen, L.; Lauze, F.; Igel, C.; Loog, M.; Feragen, A.; de Bruijne, M.; Nielsen, M.
2012-03-01
Classification is widely used in the context of medical image analysis, and in order to illustrate the mechanism of a classifier, we introduce the notion of an exaggerated image stereotype based on training data and a trained classifier. The stereotype of some image class of interest should emphasize/exaggerate the characteristic patterns in that image class and visualize the information the employed classifier relies on. This is useful for gaining insight into the classification and serves for comparison with biological models of disease. In this work, we build exaggerated image stereotypes by optimizing an objective function which consists of a discriminative term based on the classification accuracy, and a generative term based on the class distributions. A gradient descent method based on iterated conditional modes (ICM) is employed for optimization. We use this idea with Fisher's linear discriminant rule and assume a multivariate normal distribution for samples within a class. The proposed framework is applied to computed tomography (CT) images of lung tissue with emphysema. The synthesized stereotypes illustrate the exaggerated patterns of lung tissue with emphysema, which is underpinned by three different quantitative evaluation methods.
Using greenhouse gas fluxes to define soil functional types
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrakis, Sandra; Barba, Josep; Bond-Lamberty, Ben
Soils provide key ecosystem services and directly control ecosystem functions; thus, there is a need to define the reference state of soil functionality. Most common functional classifications of ecosystems are vegetation-centered and neglect soil characteristics and processes. We propose Soil Functional Types (SFTs) as a conceptual approach to represent and describe the functionality of soils based on characteristics of their greenhouse gas (GHG) flux dynamics. We used automated measurements of CO2, CH4 and N2O in a forested area to define SFTs following a simple statistical framework. This study supports the hypothesis that SFTs provide additional insights on the spatial variability of soil functionality beyond information represented by commonly measured soil parameters (e.g., soil moisture, soil temperature, litter biomass). We discuss the implications of this framework at the plot scale and the potential of this approach at larger scales. This approach is a first step to provide a framework to define SFTs, but a community effort is necessary to harmonize any global classification for soil functionality. A global application of the proposed SFT framework will only be possible if there is a community-wide effort to share data and create a global database of GHG emissions from soils.
A novel application of deep learning for single-lead ECG classification.
Mathews, Sherin M; Kambhamettu, Chandra; Barner, Kenneth E
2018-06-04
Detecting and classifying cardiac arrhythmias is critical to the diagnosis of patients with cardiac abnormalities. In this paper, a novel approach based on deep learning methodology is proposed for the classification of single-lead electrocardiogram (ECG) signals. We demonstrate the application of the Restricted Boltzmann Machine (RBM) and deep belief networks (DBN) for ECG classification following detection of ventricular and supraventricular heartbeats using single-lead ECG. The effectiveness of the proposed algorithm is illustrated using real ECG signals from the widely used MIT-BIH database. Simulation results demonstrate that, with a suitable choice of parameters, RBM and DBN can achieve high average recognition accuracies for ventricular ectopic beats (93.63%) and supraventricular ectopic beats (95.57%) at a low sampling rate of 114 Hz. Experimental results indicate that classifiers built on this deep learning framework achieved state-of-the-art performance at lower sampling rates and with simpler features than traditional methods. Further, features extracted at a sampling rate of 114 Hz, when combined with deep learning, provided enough discriminatory power for the classification task. Thus, our proposed deep neural network algorithm demonstrates that deep learning-based methods offer accurate ECG classification and could potentially be extended to other physiological signal classifications, such as those in arterial blood pressure (ABP), electromyography (EMG), and heart rate variability (HRV) studies. Copyright © 2018. Published by Elsevier Ltd.
Brain tumor segmentation based on local independent projection-based classification.
Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin
2014-10-01
Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, improving them remains challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images that treats segmentation as a classification problem: the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important both in calculating the local independent projections and in the choice of local anchor embedding, which is more applicable than other coding methods for solving the linear projection weights. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting the complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to those of other state-of-the-art methods.
From Classification to Epilepsy Ontology and Informatics
Zhang, Guo-Qiang; Sahoo, Satya S; Lhatoo, Samden D
2012-01-01
The 2010 International League Against Epilepsy (ILAE) classification and terminology commission report proposed a much needed departure from previous classifications to incorporate advances in molecular biology, neuroimaging, and genetics. It proposed an interim classification and defined two key requirements that need to be satisfied. The first is the ability to classify epilepsy in dimensions according to a variety of purposes including clinical research, patient care, and drug discovery. The second is the ability of the classification system to evolve with new discoveries. Multi-dimensionality and flexibility are crucial to the success of any future classification. In addition, a successful classification system must play a central role in the rapidly growing field of epilepsy informatics. An epilepsy ontology, based on classification, will allow information systems to facilitate data-intensive studies and provide a proven route to meeting the two foregoing key requirements. The epilepsy ontology will be a structured terminology system that accommodates proposed and evolving ILAE classifications, the NIH/NINDS Common Data Elements, and the ICD systems, and explicitly specifies all known relationships between epilepsy concepts in a proper framework. This will aid evidence-based epilepsy diagnosis, investigation, treatment and research for a diverse community of clinicians and researchers. Benefits range from systematization of electronic patient records to multi-modal data repositories for research and training manuals for those involved in epilepsy care. Given the complexity, heterogeneity and pace of research advances in the epilepsy domain, such an ontology must be collaboratively developed by key stakeholders in the epilepsy community and experts in knowledge engineering and computer science. PMID:22765502
A definitional framework for the human/biometric sensor interaction model
NASA Astrophysics Data System (ADS)
Elliott, Stephen J.; Kukula, Eric P.
2010-04-01
Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human Biometric-Sensor Interaction (HBSI) model. This paper proposes six new definitions based around two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general biometric model developed by Mansfield and Wayman [1].
Gates, Timothy J; Noyce, David A
2016-11-01
This manuscript describes the development and evaluation of a conceptual framework for real-time operation of dynamic on-demand extension of the red clearance interval as a countermeasure for red-light-running. The framework includes a decision process for determining, based on the real-time status of vehicles arriving at the intersection, when extension of the red clearance interval should occur and the duration of each extension. A zonal classification scheme was devised to assess whether an approaching vehicle requires additional time to safely clear the intersection based on the remaining phase time, type of vehicle, current speed, and current distance from the intersection. Expected performance of the conceptual framework was evaluated through modeling of replicated field operations using vehicular event data collected as part of this research. The results showed highly accurate classification of red-light-running vehicles needing additional clearance time and relatively few false extension calls from stopping vehicles, thereby minimizing the expected impacts to signal and traffic operations. Based on the recommended parameters, extension calls were predicted to occur once every 26.5 cycles. Assuming a 90 s cycle, 1.5 extensions per hour were expected per approach, with an estimated extension time of 2.30 s/h. Although field implementation was not performed, it is anticipated that long-term reductions in targeted red-light-running conflicts and crashes will likely occur if red clearance interval extension systems are implemented at locations where start-up delay on the conflicting approach is generally minimal, such as intersections with lag left-turn phasing. Copyright © 2015 Elsevier Ltd. All rights reserved.
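The decision process described above (predict whether an arriving vehicle can clear the intersection in the remaining red clearance time, and if not, compute the extension) can be sketched as a small function. The vehicle-length default, the extension cap, and the simple kinematic rule are hypothetical placeholders, not the paper's calibrated parameters:

```python
# A minimal sketch of the zonal decision logic: a vehicle that cannot stop
# needs (distance + vehicle length) / speed seconds to clear the intersection.

def clearance_extension(distance_m, speed_mps, remaining_red_s,
                        vehicle_length_m=5.0, max_extension_s=3.0):
    """Return the extra red-clearance seconds an approaching vehicle needs."""
    if speed_mps <= 0.0:
        return 0.0                       # stopped vehicle: no extension call
    time_to_clear = (distance_m + vehicle_length_m) / speed_mps
    extra = time_to_clear - remaining_red_s
    # Never extend by a negative amount, and cap the extension duration.
    return min(max(extra, 0.0), max_extension_s)

# A vehicle 20 m out at 15 m/s with 1 s of red clearance left needs ~0.67 s more:
print(clearance_extension(20.0, 15.0, 1.0))
```

A deployed system would additionally weight the decision by vehicle type (e.g. a longer clearing time for trucks) and by the probability that the vehicle actually runs the red, as the zonal scheme in the paper does.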
ERIC Educational Resources Information Center
World Health Organization, Geneva (Switzerland).
The manual contains three classifications (impairments, disabilities, and handicaps), each relating to a different plane of experience consequent upon disease. Section 1 attempts to clarify the nature of health-related experiences by addressing response to acute and chronic illness; the unifying framework for classification (principal events in the…
Updated United Nations Framework Classification for reserves and resources of extractive industries
Ahlbrandt, T.S.; Blaise, J.R.; Blystad, P.; Kelter, D.; Gabrielyants, G.; Heiberg, S.; Martinez, A.; Ross, J.G.; Slavov, S.; Subelj, A.; Young, E.D.
2004-01-01
The United Nations has studied how the oil and gas resource classification developed jointly by the SPE, the World Petroleum Congress (WPC) and the American Association of Petroleum Geologists (AAPG) could be harmonized with the United Nations Framework Classification (UNFC) for Solid Fuel and Mineral Resources (1). The United Nations has continued to build on this and other works, with support from many relevant international organizations, with the objective of updating the UNFC to apply to the extractive industries. The result is the United Nations Framework Classification for Energy and Mineral Resources (2), which this paper presents. Reserves and resources are categorized with respect to three sets of criteria: economic and commercial viability; field project status and feasibility; and the level of geologic knowledge. The field project status criteria are readily recognized as the ones highlighted in the SPE/WPC/AAPG classification system of 2000. The geologic criteria absorb the rich traditions that form the primary basis for the Russian classification system, and the ones used to delimit, in part, proved reserves. Economic and commercial criteria facilitate the use of the classification in general, and reflect the commercial considerations used to delimit proved reserves in particular. The classification system will help to develop a common understanding of reserves and resources for all the extractive industries and will assist: international and national resources management to secure supplies; industries' management of business processes to achieve efficiency in exploration and production; and the documentation of the value of reserves and resources in financial statements.
Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang
2017-01-01
Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926
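The core minimum-spanning-tree step named in this abstract is straightforward to reproduce: convert a functional connectivity matrix into distances and keep the spanning backbone. The toy time series below are random stand-ins for real fMRI data:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)

# Toy resting-state time series for 6 "brain regions" (90 time points each).
ts = rng.standard_normal((6, 90))
conn = np.abs(np.corrcoef(ts))            # functional connectivity (|r|)

# The MST keeps the strongest backbone: convert similarity to distance first,
# so the minimum spanning tree retains the maximally correlated edges.
dist = 1.0 - conn
np.fill_diagonal(dist, 0.0)               # no self-edges
mst = minimum_spanning_tree(dist).toarray()

n_edges = np.count_nonzero(mst)
print(n_edges)                            # a tree over N nodes has N-1 edges
```

This unbiased sparsification is what preserves the network's "core framework" without the cluster-count and initialization choices that the abstract identifies as problematic.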
An estimation framework for building information modeling (BIM)-based demolition waste by type.
Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook
2017-12-01
Most existing studies on demolition waste (DW) quantification do not have an official standard to estimate the amount and type of DW. Therefore, there are limitations in the existing literature for estimating DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin from the building design stage. However, the lack of tools hinders an early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the input construction materials in the Korean construction classification system were matched with those in the BIM library. Based on this matching, the estimates of DW by type were calculated by applying the weight/unit-volume factors and the rates of DW volume change. To verify the framework, its operation was demonstrated by means of an actual BIM model and by comparing its results with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
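The estimation step itself (BIM quantity take-off times a weight/unit-volume factor and a volume-change rate per material) reduces to simple arithmetic. All factor values below are hypothetical placeholders, not the Korean standard's actual coefficients:

```python
# Toy sketch: convert BIM quantity take-offs into demolition-waste estimates.

# BIM take-off: material -> in-place volume (m^3); values are invented.
bim_takeoff = {"concrete": 120.0, "brick": 35.0, "timber": 12.0}

# Weight per unit volume (t/m^3) and volume-change (bulking) rate after
# demolition; both sets of factors are illustrative placeholders.
weight_factor = {"concrete": 2.4, "brick": 1.9, "timber": 0.6}
volume_change = {"concrete": 1.6, "brick": 1.5, "timber": 1.2}

def estimate_dw(takeoff):
    """Return per-material (mass_t, loose_volume_m3) waste estimates."""
    out = {}
    for material, vol in takeoff.items():
        mass_t = vol * weight_factor[material]      # tonnes of waste
        loose_m3 = vol * volume_change[material]    # hauled (bulked) volume
        out[material] = (mass_t, loose_m3)
    return out

print(estimate_dw(bim_takeoff)["concrete"])   # (288.0, 192.0)
```

The framework's real contribution is upstream of this arithmetic: matching BIM library elements to the national material classification so the take-off dictionary can be generated automatically from the model.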
A Novel Multi-Class Ensemble Model for Classifying Imbalanced Biomedical Datasets
NASA Astrophysics Data System (ADS)
Bikku, Thulasi; Sambasiva Rao, N., Dr; Rao, Akepogu Ananda, Dr
2017-08-01
This paper mainly focuses on developing a Hadoop-based framework for feature selection and classification models to classify high-dimensional data in heterogeneous biomedical databases. Extensive research has been performed in the fields of machine learning, big data, and data mining for identifying patterns. The main challenge is extracting useful features generated from diverse biological systems. The proposed model can be used for predicting diseases in various applications and identifying the features relevant to particular diseases. Given the exponential growth of biomedical repositories such as PubMed and Medline, an accurate predictive model is essential for knowledge discovery in a Hadoop environment. Extracting key features from unstructured documents often leads to uncertain results due to outliers and missing values. In this paper, we propose a two-phase map-reduce framework with a text preprocessor and a classification model. In the first phase, a mapper-based preprocessing method was designed to eliminate irrelevant features, missing values, and outliers from the biomedical data. In the second phase, a map-reduce-based multi-class ensemble decision tree model was designed and applied to the preprocessed mapper output to improve the true positive rate and computational time. The experimental results on complex biomedical datasets show that our proposed Hadoop-based multi-class ensemble model significantly outperforms state-of-the-art baselines.
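The two-phase shape of this design can be sketched in pure Python: a "mapper" that filters dirty records, followed by a reduce-style aggregation standing in for the classification phase. The record fields and the majority-count stand-in are illustrative assumptions, not the paper's actual pipeline:

```python
from collections import Counter
from functools import reduce

# Hypothetical biomedical records; one has a missing text field.
records = [
    {"doc": "gene expression tumor", "label": "cancer"},
    {"doc": "insulin glucose",       "label": "diabetes"},
    {"doc": None,                    "label": "cancer"},     # missing value
    {"doc": "tumor biopsy margin",   "label": "cancer"},
]

def mapper(record):
    """Phase 1: drop records with missing text, emit (label, 1) pairs."""
    if record["doc"] is None:
        return []                       # filtered out as missing/outlier
    return [(record["label"], 1)]

def reducer(acc, pair):
    """Phase 2 (simplified): aggregate counts per class label."""
    acc[pair[0]] += pair[1]
    return acc

mapped = [pair for r in records for pair in mapper(r)]
counts = reduce(reducer, mapped, Counter())
print(counts.most_common(1)[0])        # ('cancer', 2)
```

On Hadoop the same mapper/reducer pair would run distributed over shards of the corpus, with the ensemble decision tree replacing the count aggregation in phase 2.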
Dellinger, E Patchen; Forsmark, Christopher E; Layer, Peter; Lévy, Philippe; Maraví-Poma, Enrique; Petrov, Maxim S; Shimosegawa, Tooru; Siriwardena, Ajith K; Uomo, Generoso; Whitcomb, David C; Windsor, John A
2012-12-01
To develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric description of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than description of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in four categories of severity: mild, moderate, severe, and critical. This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America, Europe, Asia, Oceania, and Africa.
It provides a set of concise up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world.
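The way the two determinants combine into four categories can be restated compactly as code. This is a sketch of the rule structure described in the abstract, for illustration only, not a clinical tool:

```python
# Inputs: necrosis is "none" | "sterile" | "infected";
#         organ_failure is "none" | "transient" | "persistent".

def pancreatitis_severity(necrosis, organ_failure):
    """Map the local and systemic determinants to a severity category."""
    if necrosis == "infected" and organ_failure == "persistent":
        return "critical"                 # both worst-case determinants present
    if necrosis == "infected" or organ_failure == "persistent":
        return "severe"
    if necrosis == "sterile" or organ_failure == "transient":
        return "moderate"
    return "mild"                         # no necrosis, no organ failure

print(pancreatitis_severity("none", "none"))            # mild
print(pancreatitis_severity("infected", "persistent"))  # critical
```

The ordering of the checks encodes the abstract's point that the joint presence of both determinants has a greater effect than either alone.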
A Learning-Based Approach for IP Geolocation
NASA Astrophysics Data System (ADS)
Eriksson, Brian; Barford, Paul; Sommers, Joel; Nowak, Robert
The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96% of the nodes identified in our data set, with accuracy on average 70 miles closer to the true geographic location than prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy.
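The classification step reduces to standard supervised learning, as the abstract notes. A sketch with Gaussian naive Bayes on per-monitor delay vectors; the monitor count, delay values, and region names are invented for illustration:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)

# Hypothetical training set: ping delays (ms) from 3 monitors to hosts whose
# region is known. Delay grows with distance from each monitor.
region_means = {"west": [10.0, 40.0, 80.0],
                "east": [80.0, 40.0, 10.0]}
X, y = [], []
for region, mean in region_means.items():
    X.append(rng.normal(mean, 5.0, size=(100, 3)))
    y += [region] * 100
X = np.vstack(X)

# Naive Bayes: P(region | delays) is proportional to P(delays | region)P(region),
# with per-monitor delays treated as conditionally independent Gaussians.
clf = GaussianNB().fit(X, y)
print(clf.predict([[12.0, 38.0, 75.0]])[0])   # "west"
```

The appeal of this framework, also highlighted in the abstract, is that extra environmental features (hop counts, population density) can simply be appended as additional columns without changing the classifier.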
Sertel, O.; Kong, J.; Shimada, H.; Catalyurek, U.V.; Saltz, J.H.; Gurcan, M.N.
2009-01-01
We are developing a computer-aided prognosis system for neuroblastoma (NB), a cancer of the nervous system and one of the most malignant tumors affecting children. Histopathological examination is an important stage for further treatment planning in routine clinical diagnosis of NB. According to the International Neuroblastoma Pathology Classification (the Shimada system), NB patients are classified into favorable and unfavorable histology based on the tissue morphology. In this study, we propose an image analysis system that operates on digitized H&E stained whole-slide NB tissue samples and classifies each slide as either stroma-rich or stroma-poor based on the degree of Schwannian stromal development. Our statistical framework performs the classification based on texture features extracted using co-occurrence statistics and local binary patterns. Due to the high resolution of digitized whole-slide images, we propose a multi-resolution approach that mimics the evaluation of a pathologist such that the image analysis starts from the lowest resolution and switches to higher resolutions when necessary. We employ an offline feature selection step, which determines the most discriminative features at each resolution level during the training step. A modified k-nearest neighbor classifier is used to determine the confidence level of the classification to make the decision at a particular resolution level. The proposed approach was independently tested on 43 whole-slide samples and provided an overall classification accuracy of 88.4%. PMID:20161324
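One of the texture descriptors named above, the local binary pattern, is easy to sketch. The synthetic "textures" and the plain k-NN (rather than the paper's modified, confidence-weighted k-NN) are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram (a common texture feature)."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                  img[2:,   0:-2], img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
    return np.bincount(codes.ravel(), minlength=256) / codes.size

# Two synthetic "tissue" textures: striped (structured) vs pure noise.
# These are illustrative stand-ins, not real histopathology patches.
def sample(striped):
    img = rng.normal(0.0, 1.0, (32, 32))
    if striped:
        img = 0.2 * img + 2.0 * np.sin(np.arange(32) / 2.0)[None, :]
    return img

X = np.array([lbp_histogram(sample(s)) for s in [0, 1] * 40])
y = np.array([0, 1] * 40)

knn = KNeighborsClassifier(n_neighbors=3).fit(X[:60], y[:60])
print(knn.score(X[60:], y[60:]))
```

In the paper's multi-resolution scheme, the classifier's confidence at a coarse resolution decides whether a finer resolution needs to be examined; `predict_proba` would be the natural hook for that decision here.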
Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes
NASA Astrophysics Data System (ADS)
Friedl, Barbara; Holbling, Daniel
2015-05-01
This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe
2016-03-01
Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth in the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the ecological integrity of the riparian zone but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes; 79.5% and 84.1% for site 1 and site 2).
The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach competes successfully with that of models developed using more expensive imagery, such as multi-spectral and hyperspectral airborne imagery. The high overall accuracy obtained in classifying the diseased alders opens the door to applications dedicated to monitoring the health condition of riparian forests. Our methodological framework will allow UAS users to manage the large metric datasets derived from such dense image time series.
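The variable-selection idea above (rank candidate UAS-derived variables by their usefulness for the species classification) can be sketched with a random forest's feature importances. The features, values, and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)

# Hypothetical per-object features for 200 OBIA segments: two band ratios and
# a canopy-height statistic; the "species" label depends mainly on ratio_1.
n = 200
ratio_1 = rng.normal(0.5, 0.1, n)          # e.g. red/green band ratio
ratio_2 = rng.normal(0.3, 0.1, n)          # e.g. NIR/red band ratio (noise here)
height = rng.normal(15.0, 3.0, n)          # crown height (m)
species = (ratio_1 > 0.5).astype(int)      # synthetic species signal

X = np.column_stack([ratio_1, ratio_2, height])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, species)

# Variable selection: rank features by importance, keep the most relevant.
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking[0])    # the informative band ratio (feature 0) ranks first
```

With real multi-temporal UAS data, each acquisition date contributes its own set of candidate variables, and the same ranking identifies both the best variables and, implicitly, the most informative dates.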
Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso
2017-01-01
Improving the effectiveness of spatial shape feature classification from 3D lidar data is highly relevant because such classification is widely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood computation. PMID:28294963
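The PCA-based shape features mentioned in this abstract are commonly derived from the sorted eigenvalues of a neighborhood's covariance matrix; a sketch on synthetic point sets standing in for one voxel's support region (the exact feature vectors in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(5)

def shape_features(points):
    """Eigenvalue-based shape features for one voxel's support region:
    the sorted covariance eigenvalues separate tubular, planar and
    scatter-like point distributions."""
    cov = np.cov(points.T)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = evals / evals.sum()
    return {"linearity": (l1 - l2) / l1,    # high for tubular (poles, trunks)
            "planarity": (l2 - l3) / l1,    # high for planar (ground, walls)
            "sphericity": l3 / l1}          # high for scatter (foliage)

# Synthetic support regions: a tube, a plane, and an isotropic scatter.
tube = np.column_stack([rng.normal(0, 2.0, 500),
                        rng.normal(0, 0.05, 500),
                        rng.normal(0, 0.05, 500)])
plane = np.column_stack([rng.normal(0, 2.0, 500),
                         rng.normal(0, 2.0, 500),
                         rng.normal(0, 0.05, 500)])
scatter = rng.normal(0, 2.0, (500, 3))

for name, pts in [("tubular", tube), ("planar", plane), ("scatter", scatter)]:
    feats = shape_features(pts)
    print(name, max(feats, key=feats.get))   # dominant feature matches the shape
```

In the voxel-based scheme, these features are computed once per occupied voxel and every point in that voxel inherits the voxel's predicted class, which is what makes the neighborhood computation cheap.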
A framework for farmland parcels extraction based on image classification
NASA Astrophysics Data System (ADS)
Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan
2018-03-01
It is very important for the government to build an accurate national basic cultivated land database, and farmland parcel extraction is one of the basic steps in doing so. However, in past years, analysts had to spend much time determining whether an area was a farmland parcel, since they had to rely on mere visual interpretation of remote sensing images. To overcome this problem, this study proposes a method to extract farmland parcels by means of image classification. In the proposed method, the farmland areas and ridge areas of the classification map are semantically processed independently, and the results are fused to form the final farmland parcels. Experiments on high-spatial-resolution remote sensing images have shown the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2014-03-01
As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.
Buckingham, C D; Adams, A
2000-10-01
This is the second of two linked papers exploring decision making in nursing. The first paper, 'Classifying clinical decision making: a unifying approach' investigated difficulties with applying a range of decision-making theories to nursing practice. This is due to the diversity of terminology and theoretical concepts used, which militate against nurses being able to compare the outcomes of decisions analysed within different frameworks. It is therefore problematic for nurses to assess how good their decisions are, and where improvements can be made. However, despite the range of nomenclature, it was argued that there are underlying similarities between all theories of decision processes and that these should be exposed through integration within a single explanatory framework. A proposed solution was to use a general model of psychological classification to clarify and compare terms, concepts and processes identified across the different theories. The unifying framework of classification was described and this paper operationalizes it to demonstrate how different approaches to clinical decision making can be re-interpreted as classification behaviour. Particular attention is focused on classification in nursing, and on re-evaluating heuristic reasoning, which has been particularly prone to theoretical and terminological confusion. Demonstrating similarities in how different disciplines make decisions should promote improved multidisciplinary collaboration and a weakening of clinical elitism, thereby enhancing organizational effectiveness in health care and nurses' professional status. This is particularly important as nurses' roles continue to expand to embrace elements of managerial, medical and therapeutic work. Analysing nurses' decisions as classification behaviour will also enhance clinical effectiveness, and assist in making nurses' expertise more visible. 
In addition, the classification framework explodes the myth that intuition, traditionally associated with nurses' decision making, is less rational and scientific than other approaches.
ERIC Educational Resources Information Center
McLaughlin, Margaret J.; Dyson, Alan; Nagle, Katherine; Thurlow, Martha; Rouse, Martyn; Hardman, Michael; Norwich, Brahm; Burke, Phillip J.; Perlin, Michael
2006-01-01
This article is the second in a 2-part synthesis of an international comparative seminar on the classification of children with disabilities. In this article, the authors discuss classification frameworks used in identifying children for the purpose of providing special education and related services. The authors summarize 7 papers that addressed…
A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI
NASA Astrophysics Data System (ADS)
Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina
2015-03-01
Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
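Of the five evaluation metrics listed, the Dice similarity coefficient is defined as 2|A∩B| / (|A| + |B|) for two binary masks. A minimal sketch (the empty-mask convention below is my choice):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = int(a.sum()) + int(b.sum())
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * int(np.logical_and(a, b).sum()) / denom
```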
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diane De Steven, Ph.D.; Maureen Tone, Ph.D.
1997-10-01
This report addresses four project objectives: (1) Gradient model of Carolina bay vegetation on the SRS--The authors use ordination analyses to identify environmental and landscape factors that are correlated with vegetation composition. Significant factors can provide a framework for site-based conservation of existing diversity, and they may also be useful site predictors for potential vegetation in bay restorations. (2) Regional analysis of Carolina bay vegetation diversity--They expand the ordination analyses to assess the degree to which SRS bays encompass the range of vegetation diversity found in the regional landscape of South Carolina's western Upper Coastal Plain. Such comparisons can indicate floristic status relative to regional potentials and identify missing species or community elements that might be re-introduced or restored. (3) Classification of vegetation communities in Upper Coastal Plain bays--They use cluster analysis to identify plant community-types at the regional scale, and explore how this classification may be functional with respect to significant environmental and landscape factors. An environmentally-based classification at the whole-bay level can provide a system of templates for managing bays as individual units and for restoring bays to desired plant communities. (4) Qualitative model for bay vegetation dynamics--They analyze present-day vegetation in relation to historic land uses and disturbances. The distinctive history of SRS bays provides the possibility of assessing pathways of post-disturbance succession. They attempt to develop a coarse-scale model of vegetation shifts in response to changing site factors; such qualitative models can provide a basis for suggesting management interventions that may be needed to maintain desired vegetation in protected or restored bays.
Elliott, Caroline M.; Jacobson, Robert B.
2006-01-01
A multiscale geomorphic classification was established for the 39-mile, 59-mile, and adjacent segments of the Missouri National Recreational River administered by the National Park Service in South Dakota and Nebraska. The objective of the classification was to define naturally occurring clusters of geomorphic characteristics that would be indicative of discrete sets of geomorphic processes, with the intent that such a classification would be useful in river-management and rehabilitation decisions. The statistical classification was based on geomorphic characteristics of the river collected from 1999 orthophotography, and the persistence of classified units was evaluated by comparison with similar datasets for 2003 and 2004 and by evaluating variation in bank erosion rates by geomorphic class. Changes in channel location and form were also explored using imagery and maps from 1993-2004, 1941, and 1894. The multivariate classification identified a hierarchy of naturally occurring clusters of reach-scale geomorphic characteristics. The simplest level of the hierarchy divides the river segments into discrete reaches characterized by single and multithread channels, and additional hierarchical levels established 4-part and 10-part classifications. The classification system presents a physical framework that can be applied to the prioritization and design of bank stabilization projects, the design of habitat rehabilitation projects, and the stratification of monitoring and assessment sampling programs.
New Metaphors for Organizing Data Could Change the Nature of Computers.
ERIC Educational Resources Information Center
Young, Jeffrey R.
1997-01-01
Based on the idea that the current framework for organizing electronic data does not take advantage of the mind's ability to make connections among disparate pieces of information, several projects at universities around the country are taking new approaches to classification and storage of vast amounts of computerized data. The new systems take…
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Denton, Carolyn A.; Fletcher, Jack M.
2015-01-01
Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. C/DM criteria were…
Wu, Hao-Yang; Wang, Yan-Hui; Xie, Qiang; Ke, Yun-Ling; Bu, Wen-Jun
2016-06-17
With the great development of sequencing technologies and systematic methods, our understanding of evolutionary relationships at deeper levels within the tree of life has greatly improved over the last decade. However, the current taxonomic methodology is insufficient to describe the growing levels of diversity in both a standardised and general way due to the limitations of using only morphological traits to describe clades. Herein, we propose the idea of a molecular classification based on hierarchical and discrete amino acid characters. Clades are classified based on the results of phylogenetic analyses and described using amino acids with group specificity in phylograms. Practices based on the recently published phylogenomic datasets of insects together with 15 de novo sequenced transcriptomes in this study demonstrate that such a methodology can accommodate various higher ranks of taxonomy. Such an approach has the advantage of describing organisms in a standard and discrete way within a phylogenetic framework, thereby facilitating the recognition of clades from the view of the whole lineage, as indicated by PhyloCode. By combining identification keys and phylogenies, the molecular classification based on hierarchical and discrete characters may greatly boost the progress of integrative taxonomy.
The Early Detection of the Emerald Ash Borer (EAB) Using Advanced Geospatial Technologies
NASA Astrophysics Data System (ADS)
Hu, B.; Li, J.; Wang, J.; Hall, B.
2014-11-01
The objectives of this study were to exploit Light Detection And Ranging (LiDAR) and very high spatial resolution (VHR) data and their synergy with hyperspectral imagery in the early detection of the EAB presence in trees within urban areas and to develop a framework to combine information extracted from multiple data sources. To achieve these, an object-oriented framework was developed to combine information derived from available data sets to characterize ash trees. Within this framework, individual trees were first extracted and then classified into different species based on their spectral information derived from hyperspectral imagery, spatial information from VHR imagery, and for each ash tree its health state and EAB infestation stage were determined based on hyperspectral imagery. The developed framework and methods were demonstrated to be effective according to the results obtained on two study sites in the city of Toronto, Ontario Canada. The individual tree delineation method provided satisfactory results with an overall accuracy of 78 % and 19 % commission and 23 % omission errors when used on the combined very high-spatial resolution imagery and LiDAR data. In terms of the identification of ash trees, given sufficient representative training data, our classification model was able to predict tree species with above 75 % overall accuracy, and mis-classification occurred mainly between ash and maple trees. The hypothesis that a strong correlation exists between general tree stress and EAB infestation was confirmed. Vegetation indices sensitive to leaf chlorophyll content derived from hyperspectral imagery can be used to predict the EAB infestation levels for each ash tree.
Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.
Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X
2018-01-05
Metabolomics holds promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown whether deep neural networks, a class of increasingly popular machine learning methods, are suitable for classifying metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 estrogen receptor positive (ER+) and 67 estrogen receptor negative (ER-), to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). The DL framework achieves the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value < 0.05) that cannot be discovered by the other machine learning methods. Among them, the protein digestion and absorption and ATP-binding cassette (ABC) transporter pathways are also confirmed in an integrated analysis between metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics-based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward-network-based deep learning methods in the metabolomics research community for classification.
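The AUC used above to compare classifiers equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (Mann-Whitney U formulation). A minimal, library-free sketch of that computation (function name is mine):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive is
    ranked higher; ties count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```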
Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias
2010-01-01
This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. Rapid-Miner, an open source data mining system has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied for the detection of salient objects in Obstructive Nephropathy microscopy images. Initial classification results are quite promising demonstrating the feasibility of automated characterization of kidney biopsy images.
The Performance of Short-Term Heart Rate Variability in the Detection of Congestive Heart Failure
Barros, Allan Kardec; Ohnishi, Noboru
2016-01-01
Congestive heart failure (CHF) is a cardiac disease associated with a decreasing capacity of the cardiac output. It has been shown that CHF is the main cause of cardiac death around the world. Some works have proposed to discriminate CHF subjects from healthy subjects using either the electrocardiogram (ECG) or heart rate variability (HRV) from long-term recordings. In this work, we propose an alternative framework to discriminate CHF from healthy subjects by using short-term HRV intervals based on 256 continuous RR samples. Our framework uses a matching pursuit algorithm based on Gabor functions. From the selected Gabor functions, we derive a set of features that are input into a hybrid framework which uses a genetic algorithm and a k-nearest neighbour classifier to select the subset of features with the best classification performance. The performance of the framework is analyzed using both the Fantasia and CHF databases from the Physionet archives, which are composed of 40 healthy volunteers and 29 subjects, respectively. From a set of 16 nonstandard features, the proposed framework reaches an overall accuracy of 100% with five features. Our results suggest that hybrid frameworks whose classifier algorithms are based on genetic algorithms outperform well-known classifier methods.
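The matching pursuit step can be sketched generically: greedily pick the dictionary atom most correlated with the current residual and subtract its contribution. The sketch below works for any unit-norm dictionary; the paper builds its dictionary from Gabor functions, and the names here are mine.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit: repeatedly pick the unit-norm dictionary
    column (atom) most correlated with the residual, subtract its
    contribution, and record (atom index, coefficient)."""
    D = np.asarray(dictionary, dtype=float)          # columns are atoms
    residual = np.asarray(signal, dtype=float).copy()
    selected = []
    for _ in range(n_atoms):
        corr = D.T @ residual                        # correlation with each atom
        k = int(np.argmax(np.abs(corr)))
        selected.append((k, float(corr[k])))
        residual = residual - corr[k] * D[:, k]      # remove the atom's part
    return selected, residual
```

The selected (index, coefficient) pairs would then be turned into features for the classifier stage.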
NASA Astrophysics Data System (ADS)
Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.
2017-12-01
POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications.
With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
Addressing location uncertainties in GPS-based activity monitoring: A methodological framework
Wan, Neng; Lin, Ge; Wilson, Gaines J.
2016-01-01
Location uncertainty has been a major barrier in information mining from location data. Although the development of electronic and telecommunication equipment has led to an increased amount and refined resolution of data about individuals’ spatio-temporal trajectories, the potential of such data, especially in the context of environmental health studies, has not been fully realized due to the lack of methodology that addresses location uncertainties. This article describes a methodological framework for deriving information about people’s continuous activities from individual-collected Global Positioning System (GPS) data, which is vital for a variety of environmental health studies. This framework is composed of two major methods that address critical issues at different stages of GPS data processing: (1) a fuzzy classification method for distinguishing activity patterns; and (2) a scale-adaptive method for refining activity locations and outdoor/indoor environments. Evaluation of this framework based on smartphone-collected GPS data indicates that it is robust to location errors and is able to generate useful information about individuals’ life trajectories.
Audio stream classification for multimedia database search
NASA Astrophysics Data System (ADS)
Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.
2013-03-01
Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down from generation to generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for non-expert human users to create the ground truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set. The remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been manually labeled into the three classes defined above by domain experts.
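The "simple threshold evaluation" at the core of CART splits can be illustrated with a one-level decision stump: scan candidate thresholds on one audio feature and keep the split that minimizes weighted Gini impurity. This toy sketch (names and feature are mine) shows the mechanism, not the paper's full tree:

```python
def best_stump(values, labels):
    """One-level CART-style split on a single feature: return the
    threshold minimizing weighted Gini impurity of the two sides."""
    def gini(group):
        n = len(group)
        if n == 0:
            return 0.0
        return 1.0 - sum((group.count(c) / n) ** 2 for c in set(group))
    best_t, best_score = None, float("inf")
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t
```

A full CART recursively applies such splits; at prediction time, each new audio entry is classified by a handful of cheap threshold comparisons.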
Layer, P; Dellinger, E P; Forsmark, C E; Lévy, P; Maraví-Poma, E; Shimosegawa, T; Siriwardena, A K; Uomo, G; Whitcomb, D C; Windsor, J A; Petrov, M S
2013-06-01
The aim of this study was to develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric descriptions of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensive medicine specialists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organised to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than descriptions of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure have a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in 4 categories of severity - mild, moderate, severe, and critical. 
This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America, Europe, Asia, Oceania, and Africa. It provides a set of concise up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world. © Georg Thieme Verlag KG Stuttgart · New York.
Software platform for managing the classification of error- related potentials of observers
NASA Astrophysics Data System (ADS)
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.
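Of the three classifier types the platform offers, k-nearest neighbour is the simplest to sketch: classify an EP feature vector by majority vote among its closest training examples. A minimal illustration (function and label names are mine, not the platform's API):

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Plain k-nearest-neighbour vote using Euclidean distance."""
    dists = sorted((math.dist(x, query), l) for x, l in zip(train, labels))
    votes = Counter(l for _, l in dists[:k])   # labels of the k closest
    return votes.most_common(1)[0][0]
```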
An embedded system for face classification in infrared video using sparse representation
NASA Astrophysics Data System (ADS)
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
2017-09-01
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
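The sparse-representation classification rule assigns a test image to the class whose training samples reconstruct it with the smallest residual. The sketch below uses a class-wise least-squares residual as a rough stand-in for the paper's global l1-minimization, purely to illustrate the decision rule; all names are mine.

```python
import numpy as np

def src_classify(test_vec, train_matrix, train_labels):
    """Sparse-representation-style decision rule (simplified): fit the
    test vector over each class's training columns by least squares and
    return the class with the smallest reconstruction residual.
    (A stand-in for true l1-minimization, for illustration only.)"""
    A = np.asarray(train_matrix, dtype=float)   # columns = training samples
    y = np.asarray(test_vec, dtype=float)
    labels = list(train_labels)
    best, best_res = None, float("inf")
    for c in sorted(set(labels)):
        cols = [i for i, l in enumerate(labels) if l == c]
        Ac = A[:, cols]
        x, *_ = np.linalg.lstsq(Ac, y, rcond=None)
        res = float(np.linalg.norm(y - Ac @ x))  # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best
```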
Delineation and geometric modeling of road networks
NASA Astrophysics Data System (ADS)
Poullis, Charalambos; You, Suya
In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters are applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them to their polygonal representations.
Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C
2005-10-01
In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM-RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm searches for an optimal parameter pair, as a better alternative to the common practice of simply picking the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, for hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
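The RFE loop itself is simple: fit a linear model, drop the feature with the smallest absolute weight, repeat. The sketch below uses a least-squares fit as a stand-in for the linear SVM weight vector, to show the elimination mechanics only; names are mine.

```python
import numpy as np

def rfe_rank(X, y, n_keep=2):
    """Recursive feature elimination sketch: repeatedly fit a linear
    model (least-squares stand-in for a linear SVM) and drop the
    feature with the smallest absolute weight until n_keep remain.
    Returns the indices of the surviving features."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        worst = int(np.argmin(np.abs(w)))   # least informative feature
        remaining.pop(worst)
    return remaining
```

In the paper's setting the model would be a Gaussian-kernel SVM and the ranking criterion its margin-based weight measure, with the kernel parameters themselves tuned by the genetic algorithm.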
Lauber, Chris; Gorbalenya, Alexander E
2012-04-01
Virus taxonomy has received little attention from the research community despite its broad relevance. In an accompanying paper (C. Lauber and A. E. Gorbalenya, J. Virol. 86:3890-3904, 2012), we have introduced a quantitative approach to hierarchically classify viruses of a family using pairwise evolutionary distances (PEDs) as a measure of genetic divergence. When applied to the six most conserved proteins of the Picornaviridae, it clustered 1,234 genome sequences in groups at three hierarchical levels (to which we refer as the "GENETIC classification"). In this study, we compare the GENETIC classification with the expert-based picornavirus taxonomy and outline differences in the underlying frameworks regarding the relation of virus groups and genetic diversity that represent, respectively, the structure and content of a classification. To facilitate the analysis, we introduce two novel diagrams. The first connects the genetic diversity of taxa to both the PED distribution and the phylogeny of picornaviruses. The second depicts a classification and the accommodated genetic diversity in a standardized manner. Generally, we found striking agreement between the two classifications on species and genus taxa. A few disagreements concern the species Human rhinovirus A and Human rhinovirus C and the genus Aphthovirus, which were split in the GENETIC classification. Furthermore, we propose a new supergenus level and universal, level-specific PED thresholds, not reached yet by many taxa. Since the species threshold is approached mostly by taxa with large sampling sizes and those infecting multiple hosts, it may represent an upper limit on divergence, beyond which homologous recombination in the six most conserved genes between two picornaviruses might not give viable progeny.
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with a relatively small number of training patterns, so that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered to improve the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different classifier structures. They are evaluated by comparison with baseline methods using sparse representation of features or no feature selection. Statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the mean Kappa values obtained on the test patterns for each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
Ganchev, Philip; Malehorn, David; Bigbee, William L.; Gopalakrishnan, Vanathi
2013-01-01
We present a novel framework for integrative biomarker discovery from related but separate data sets created in biomarker profiling studies. The framework takes prior knowledge in the form of interpretable, modular rules, and uses them during the learning of rules on a new data set. The framework consists of two methods of transfer of knowledge from source to target data: transfer of whole rules and transfer of rule structures. We evaluated the methods on three pairs of data sets: one genomic and two proteomic. We used standard measures of classification performance and three novel measures of amount of transfer. Preliminary evaluation shows that whole-rule transfer improves classification performance over using the target data alone, especially when there is more source data than target data. It also improves performance over using the union of the data sets. PMID:21571094
Classification of Birds and Bats Using Flight Tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.
Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.
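The jackknife (leave-one-out) evaluation of a discriminant model can be sketched as follows; the features here are synthetic stand-ins for the per-track measurements (e.g. speed, tortuosity), not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.datasets import make_classification

# Stand-in for 46 annotated flight tracks with 4 track-shape features.
X, y = make_classification(n_samples=46, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=1)

# Leave-one-out ("jackknife") accuracy of a linear discriminant model:
# each track is classified by a model trained on all the others.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
accuracy = scores.mean()
```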
The generalization ability of SVM classification based on Markov sampling.
Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang
2015-06-01
The previously known works studying the generalization ability of the support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also yields sparser classifiers when the dataset is large relative to the input dimension.
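A loose sketch of the sampling idea follows; this is an illustrative dependent chain, not the paper's exact u.e.M.c. construction: a proposed sample is always accepted when its label differs from the current one, and accepted with probability q otherwise, yielding a label-alternating, dependent training set:

```python
import numpy as np

def markov_sample(X, y, n_train, q=0.5, seed=0):
    """Draw a dependent (Markov-chain) training subset.  A proposed
    sample is always kept when its label differs from the current
    sample's label, and kept with probability q otherwise.
    Illustrative only; not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(y)))]
    while len(idx) < n_train:
        j = int(rng.integers(len(y)))          # uniform proposal
        if y[j] != y[idx[-1]] or rng.random() < q:
            idx.append(j)                      # accept into the chain
    return X[idx], y[idx]

X = np.arange(20).reshape(-1, 1)
y = np.tile([0, 1], 10)
Xs, ys = markov_sample(X, y, n_train=10)
```

The resulting subset can then be passed to any SVM trainer in place of an i.i.d. random subsample.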
Stubbington, Rachel; Chadd, Richard; Cid, Núria; Csabai, Zoltán; Miliša, Marko; Morais, Manuela; Munné, Antoni; Pařil, Petr; Pešić, Vladimir; Tziortzis, Iakovos; Verdonschot, Ralf C M; Datry, Thibault
2018-03-15
Intermittent rivers and ephemeral streams (IRES) are common across Europe and dominate some Mediterranean river networks. In all climate zones, IRES support high biodiversity and provide ecosystem services. As dynamic ecosystems that transition between flowing, pool, and dry states, IRES are typically poorly represented in biomonitoring programmes implemented to characterize EU Water Framework Directive ecological status. We report the results of a survey completed by representatives from 20 European countries to identify current challenges to IRES status assessment, examples of best practice, and priorities for future research. We identify five major barriers to effective ecological status classification in IRES: 1. the exclusion of IRES from Water Framework Directive biomonitoring based on their small catchment size; 2. the lack of river typologies that distinguish between contrasting IRES; 3. difficulties in defining the 'reference conditions' that represent unimpacted dynamic ecosystems; 4. classification of IRES ecological status based on lotic communities sampled using methods developed for perennial rivers; and 5. a reliance on taxonomic characterization of local communities. Despite these challenges, we recognize examples of innovative practice that can inform modification of current biomonitoring activity to promote effective IRES status classification. Priorities for future research include reconceptualization of the reference condition approach to accommodate spatiotemporal fluctuations in community composition, and modification of indices of ecosystem health to recognize both taxon-specific sensitivities to intermittence and dispersal abilities, within a landscape context. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Probabilistic grammatical model for helix-helix contact site classification
2013-01-01
Background Hidden Markov Models power many state-of-the-art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not directly convey information about medium- and long-range residue-residue interactions; this requires at least the expressive power of context-free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem-specific protein languages and apply it to the classification of transmembrane helix-helix pair configurations. The core of the model consists of a probabilistic context-free grammar, automatically inferred by a genetic algorithm from only a generic set of expert-based rules and positive training samples. The model was applied to produce sequence-based descriptors of four classes of transmembrane helix-helix contact site configurations. The highest performance of the classifiers reached an AUC-ROC of 0.70. The analysis of grammar parse trees revealed the ability to represent structural features of helix-helix contact sites. Conclusions We demonstrated that our probabilistic context-free framework for analysis of protein sequences outperforms the state of the art in the task of helix-helix contact site classification, without necessarily requiring the modeling of long-range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human-readable, so they could provide biologically meaningful information for molecular biologists. PMID:24350601
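Class-specific probabilistic context-free grammars score a sequence with the inside (probabilistic CYK) algorithm; classification then compares scores across grammars. A minimal sketch for a grammar in Chomsky normal form (the toy grammar below is illustrative, not a helix-contact grammar):

```python
from collections import defaultdict

def inside_prob(seq, unary, binary, start="S"):
    """Inside (probabilistic CYK) score of `seq` under a PCFG in
    Chomsky normal form.  `unary` maps (A, terminal) -> prob for
    rules A -> terminal; `binary` maps (A, B, C) -> prob for A -> B C."""
    n = len(seq)
    table = defaultdict(float)  # (i, j, symbol) -> inside probability
    for i, t in enumerate(seq):
        for (A, term), p in unary.items():
            if term == t:
                table[i, i + 1, A] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):           # split point
                for (A, B, C), p in binary.items():
                    table[i, j, A] += p * table[i, k, B] * table[k, j, C]
    return table[0, n, start]

# Toy grammar: S -> A A (1.0); A -> 'a' (0.7) | 'b' (0.3)
unary = {("A", "a"): 0.7, ("A", "b"): 0.3}
binary = {("S", "A", "A"): 1.0}
p = inside_prob(list("ab"), unary, binary)   # 1.0 * 0.7 * 0.3 = 0.21
```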
A Lightweight Hierarchical Activity Recognition Framework Using Smartphone Sensors
Han, Manhyung; Bang, Jae Hun; Nugent, Chris; McClean, Sally; Lee, Sungyoung
2014-01-01
Activity recognition for the purposes of recognizing a user's intentions using multimodal sensors is becoming a widely researched topic largely based on the prevalence of the smartphone. Previous studies have reported the difficulty in recognizing life-logs by only using a smartphone due to the challenges with activity modeling and real-time recognition. In addition, recognizing life-logs is difficult due to the absence of an established framework which enables the use of different sources of sensor data. In this paper, we propose a smartphone-based Hierarchical Activity Recognition Framework which extends the Naïve Bayes approach for the processing of activity modeling and real-time activity recognition. The proposed algorithm demonstrates higher accuracy than the Naïve Bayes approach and also enables the recognition of a user's activities within a mobile environment. The proposed algorithm has the ability to classify fifteen activities with an average classification accuracy of 92.96%. PMID:25184486
A Bayesian Approach to Genome/Linguistic Relationships in Native South Americans
Amorim, Carlos Eduardo Guerra; Bisso-Machado, Rafael; Ramallo, Virginia; Bortolini, Maria Cátira; Bonatto, Sandro Luis; Salzano, Francisco Mauro; Hünemeier, Tábita
2013-01-01
The relationship between the evolution of genes and languages has been studied for over three decades. These studies rely on the assumption that languages, as many other cultural traits, evolve in a gene-like manner, accumulating heritable diversity through time and being subjected to evolutionary mechanisms of change. In the present work we used genetic data to evaluate South American linguistic classifications. We compared discordant models of language classifications to the current Native American genome-wide variation using realistic demographic models analyzed under an Approximate Bayesian Computation (ABC) framework. Data on 381 STRs spread along the autosomes were gathered from the literature for populations representing the five main South Amerindian linguistic groups: Andean, Arawakan, Chibchan-Paezan, Macro-Jê, and Tupí. The results indicated a higher posterior probability for the classification proposed by J.H. Greenberg in 1987, although L. Campbell's 1997 classification cannot be ruled out. Based on Greenberg's classification, it was possible to date the time of Tupí-Arawakan divergence (2.8 kya), and the time of emergence of the structure between present day major language groups in South America (3.1 kya). PMID:23696865
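Rejection-sampling ABC for model choice can be sketched in a few lines; the simulators and tolerance below are toy stand-ins for the demographic models and genetic summary statistics used in such studies:

```python
import numpy as np

def abc_model_choice(observed_stat, simulators, n_sims=2000, eps=0.1, seed=0):
    """Rejection-ABC model comparison: simulate from each candidate
    model (priors folded into the simulator), keep simulations whose
    summary statistic falls within eps of the observed value, and use
    acceptance fractions as approximate posterior model probabilities."""
    rng = np.random.default_rng(seed)
    accepted = []
    for simulate in simulators:
        hits = sum(abs(simulate(rng) - observed_stat) < eps
                   for _ in range(n_sims))
        accepted.append(hits)
    total = sum(accepted)
    return [a / total for a in accepted] if total else None

# Toy example: observed mean 0.0; model A centred near 0, model B near 2.
post = abc_model_choice(
    0.0,
    [lambda rng: rng.normal(0.0, 1.0), lambda rng: rng.normal(2.0, 1.0)],
)
```

Here model A should receive most of the posterior mass, mirroring how competing linguistic classifications are ranked by their fit to the genetic data.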
Automatic liver volume segmentation and fibrosis classification
NASA Astrophysics Data System (ADS)
Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit
2018-02-01
In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed-tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction, and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis, and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
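The reported segmentation metric, the Dice index, compares a predicted mask with ground truth:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0]])
score = dice(pred, truth)   # 2*2 / (3 + 2) = 0.8
```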
Aljunied, Mariam; Frederickson, Norah
2014-01-01
Despite embracing a bio-psycho-social perspective, the World Health Organization’s International Classification of Functioning, Disability and Health (ICF) assessment framework has had limited application to date with children who have special educational needs (SEN). This study examines its utility for educational psychologists’ work with children who have Autism Spectrum Disorders (ASD). Mothers of 40 children with ASD aged eight to 12 years were interviewed using a structured protocol based on the ICF framework. The Diagnostic Interview for Social and Communication Disorder (DISCO) was completed with a subset of 19 mothers. Internal consistency and inter-rater reliability of the interview assessments were found to be acceptable and there was evidence for concurrent and discriminant validity. Despite some limitations, initial support for the utility of the ICF model suggests its potential value across educational, health and care fields. Further consideration of its relevance to educational psychologists in new areas of multi-agency working is warranted. PMID:26157197
The International Classification of Functioning, Disability and Health (ICF) and nursing.
Kearney, Penelope M; Pryor, Julie
2004-04-01
Nursing conceptualizes disability from largely medical and individual perspectives that do not consider its social dimensions. Disabled people are critical of this paradigm and its impact on their health care. The aims of this paper are to review the International Classification of Functioning, Disability and Health (ICF), including its history and the theoretical models upon which it is based and to discuss its relevance as a conceptual framework for nursing. The paper presents a critical overview of concepts of disability and their implications for nursing and argues that a broader view is necessary. It examines ICF and its relationship to changing paradigms of disability and presents some applications for nursing. The ICF, with its acknowledgement of the interaction between people and their environments in health and disability, is a useful conceptual framework for nursing education, practice and research. It has the potential to expand nurses' thinking and practice by increasing awareness of the social, political and cultural dimensions of disability.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into the algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
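The core trick, replacing the dense eigenproblem with a regularized regression onto class-indicator responses, can be sketched as follows (a simplified reading of the spectral-regression idea, on synthetic data):

```python
import numpy as np
from sklearn.linear_model import Ridge

def srda_fit(X, y, alpha=1.0):
    """Spectral-regression flavoured discriminant embedding (sketch):
    regress centred class-indicator responses on X with a ridge
    penalty, avoiding eigen-decomposition of dense scatter matrices."""
    classes = np.unique(y)
    # Class-indicator responses, centred so each column sums to zero.
    Y = np.stack([(y == c).astype(float) for c in classes], axis=1)
    Y -= Y.mean(axis=0)
    reg = Ridge(alpha=alpha, fit_intercept=True).fit(X, Y)
    return reg  # reg.predict(X) gives the low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.array([0] * 30 + [1] * 30)
X[y == 1] += 2.0                       # separate the two classes
emb = srda_fit(X, y).predict(X)
```

The ridge penalty plays the role of the pluggable regularizer the abstract mentions; swapping in a lasso or graph penalty changes only the regression step.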
A Framework for Real-Time Collection, Analysis, and Classification of Ubiquitous Infrasound Data
NASA Astrophysics Data System (ADS)
Christe, A.; Garces, M. A.; Magana-Zook, S. A.; Schnurr, J. M.
2015-12-01
Traditional infrasound arrays are generally expensive to install and maintain. There are ~10^3 infrasound channels on Earth today. The amount of data currently provided by legacy architectures can be processed on a modest server. However, the growing availability of low-cost, ubiquitous, and dense infrasonic sensor networks presents a substantial increase in the volume, velocity, and variety of data flow. Initial data from a prototype ubiquitous global infrasound network is already pushing the boundaries of traditional research server and communication systems, in particular when serving data products over heterogeneous, international network topologies. We present a scalable, cloud-based approach for capturing and analyzing large amounts of dense infrasonic data (>10^6 channels). We utilize Akka actors with WebSockets to maintain data connections with infrasound sensors. Apache Spark provides streaming, batch, machine learning, and graph processing libraries which will permit signature classification, cross-correlation, and other analytics in near real time. This new framework and approach provide significant advantages in scalability and cost.
Vehicle detection in aerial surveillance using dynamic Bayesian networks.
Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying
2012-04-01
We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy for detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
Tongue Images Classification Based on Constrained High Dispersal Network.
Meng, Dan; Cao, Guitao; Duan, Ye; Zhu, Minghua; Tu, Liping; Xu, Dong; Xu, Jiatuo
2017-01-01
Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.
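Local response normalization, one of the operations CHDNet introduces against redundancy, divides each activation by a power of the summed squares of neighbouring channels; an AlexNet-style sketch (the parameter values are conventional defaults, not the paper's):

```python
import numpy as np

def local_response_norm(x, size=5, k=2.0, alpha=1e-4, beta=0.75):
    """Local response normalization across channels: each activation
    is divided by a term that grows with the summed squares of `size`
    neighbouring channels.  x has shape (C, H, W)."""
    c = x.shape[0]
    half = size // 2
    sq = x ** 2
    out = np.empty_like(x)
    for i in range(c):
        lo, hi = max(0, i - half), min(c, i + half + 1)
        out[i] = x[i] / (k + alpha * sq[lo:hi].sum(axis=0)) ** beta
    return out

x = np.ones((3, 2, 2))
y = local_response_norm(x)
```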
Creating a Taxonomy of Local Boards of Health Based on Local Health Departments’ Perspectives
Shah, Gulzar H.; Sotnikov, Sergey; Leep, Carolyn J.; Ye, Jiali; Van Wave, Timothy W.
2017-01-01
Objectives To develop a local board of health (LBoH) classification scheme and empirical definitions to provide a coherent framework for describing variation in the LBoHs. Methods This study is based on data from the 2015 Local Board of Health Survey, conducted among a nationally representative sample of local health department administrators, with 394 responses. The classification development consisted of the following steps: (1) theoretically guided initial domain development, (2) mapping of the survey variables to the proposed domains, (3) data reduction using principal component analysis and group consensus, and (4) scale development and testing for internal consistency. Results The final classification scheme included 60 items across 6 governance function domains and an additional domain—LBoH characteristics and strengths, such as meeting frequency, composition, and diversity of information sources. Application of this classification strongly supports the premise that LBoHs differ in their performance of governance functions and in other characteristics. Conclusions The LBoH taxonomy provides an empirically tested standardized tool for classifying LBoHs from the viewpoint of local health department administrators. Future studies can use this taxonomy to better characterize the impact of LBoHs. PMID:27854524
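Internal-consistency testing of the resulting scales typically uses Cronbach's alpha, which can be computed directly (a generic formula, not tied to the survey's actual items):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency of a scale.
    `items` is an (n_respondents, n_items) array of scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])   # perfectly consistent -> 1.0
```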
Classification of kidney and liver tissue using ultrasound backscatter data
NASA Astrophysics Data System (ADS)
Aalamifar, Fereshteh; Rivaz, Hassan; Cerrolaza, Juan J.; Jago, James; Safdar, Nabile; Boctor, Emad M.; Linguraru, Marius G.
2015-03-01
Ultrasound (US) tissue characterization provides valuable information for the initialization of automatic segmentation algorithms, and can further provide complementary information for the diagnosis of pathologies. US tissue characterization is challenging due to the presence of various types of image artifacts and its dependence on the sonographer's skills. One way of overcoming this challenge is by characterizing images based on the distribution of the backscatter data derived from the interaction between US waves and tissue. The goal of this work is to classify liver versus kidney tissue in 3D volumetric US data using the distribution of backscatter US data recovered from the end-user-displayed B-mode images available in clinical systems. To this end, we first propose the computation of a large set of features based on the homodyned-K distribution of the speckle as well as the correlation coefficients between small patches in 3D images. We then utilize the random forests framework to select the most important features for classification. Experiments on in-vivo 3D US data from nine pediatric patients with hydronephrosis showed an average accuracy of 94% for the classification of liver and kidney tissues, showing the good potential of this work to assist in the classification and segmentation of abdominal soft tissue.
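Random-forest feature ranking over a large candidate set can be sketched with impurity-based importances; the synthetic features below stand in for the homodyned-K and patch-correlation statistics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Stand-in for speckle-statistics features of liver/kidney patches;
# with shuffle=False, the 3 informative features are the first columns.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Impurity-based importances; keep the top-k features for classification.
top = np.argsort(forest.feature_importances_)[::-1][:3]
```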
Toward a Reasoned Classification of Diseases Using Physico-Chemical Based Phenotypes
Schwartz, Laurent; Lafitte, Olivier; da Veiga Moreira, Jorgelindo
2018-01-01
Background: Diseases and health conditions have been classified according to anatomical site, etiological, and clinical criteria. Physico-chemical mechanisms underlying the biology of diseases, such as the flow of energy through cells and tissues, have often been overlooked in classification systems. Objective: We propose a conceptual framework toward the development of an energy-oriented classification of diseases, based on the principles of physical chemistry. Methods: The literature on the physical chemistry of biological interactions in a number of diseases is reviewed from the point of view of fluid and solid mechanics, electricity, and chemistry. Results: We found consistent evidence in the literature of decreased and/or increased physical and chemical forces intertwined with the biological processes of numerous diseases, which allowed the identification of mechanical, electric, and chemical phenotypes of diseases. Discussion: Biological mechanisms of diseases need to be evaluated and integrated into more comprehensive theories that account for the principles of physics and chemistry. A hypothetical model is proposed relating the natural history of diseases to changes in mechanical stress, electric field, and chemical equilibria (ATP). The present perspective toward an innovative disease classification may improve drug-repurposing strategies in the future. PMID:29541031
NASA Astrophysics Data System (ADS)
Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang
2017-01-01
Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
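A standard kernel that is well defined for non-negative inputs is the exponentiated chi-squared kernel; the sketch below uses it only as a stand-in for the paper's purpose-built kernels, on synthetic non-negative features:

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

# Synthetic non-negative features (e.g. regional uptake/volume measures).
rng = np.random.default_rng(0)
X = rng.random((40, 6))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# The exponentiated chi-squared kernel is defined for non-negative inputs;
# precomputing it lets the SVM operate directly in that kernel space.
K = chi2_kernel(X, gamma=1.0)
clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
```

Kernel-space feature selection would then score each feature by its effect on this kernel matrix rather than on a linear model in the input space.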
NASA Astrophysics Data System (ADS)
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2016-03-01
We explore the combination of text metadata, such as patients' age and gender, with image-based features for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network, shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which compares the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with the best results using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and to one another, a characteristic not available in a traditional 1-D ranking.
NASA Astrophysics Data System (ADS)
Ghikas, Demetris P. K.; Oikonomou, Fotios D.
2018-04-01
Using generalized entropies that depend on two parameters, we propose a set of quantitative characteristics derived from the information geometry based on these entropies. Our aim, at this stage, is to construct some fundamental geometric objects which will be used in the development of our geometrical framework. We first establish the existence of a two-parameter family of probability distributions. Then, using this family, we derive the associated metric and state a generalized Cramer-Rao inequality. This gives a first two-parameter classification of complex systems. Finally, computing the scalar curvature of the information manifold, we obtain a further discrimination of the corresponding classes. Our analysis is based on the two-parameter family of generalized entropies of Hanel and Thurner (2011).
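As a concrete example of a two-parameter entropy family (the paper works with the Hanel-Thurner entropies; the Sharma-Mittal family is used here purely for illustration), one can compute:

```python
import numpy as np

def sharma_mittal(p, q, r):
    """Sharma-Mittal two-parameter entropy, an illustrative
    two-parameter family (not the Hanel-Thurner entropies used in
    the paper).  Recovers Tsallis (r = q), Renyi (r -> 1) and
    Shannon (q, r -> 1) entropies as limiting cases."""
    p = np.asarray(p, float)
    s = np.sum(p ** q)
    return (s ** ((1 - r) / (1 - q)) - 1) / (1 - r)

h = sharma_mittal([0.5, 0.5], q=2.0, r=2.0)   # Tsallis q=2 case: 0.5
```

Sweeping (q, r) over a grid and recording such entropy values is one way to obtain the kind of two-parameter characterization of a system's statistics that the abstract describes.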
Improved Online Support Vector Machines Spam Filtering Using String Kernels
NASA Astrophysics Data System (ADS)
Amayri, Ola; Bouguila, Nizar
A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of kernel. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios, we propose an online active framework for spam filtering.
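A string kernel operates on raw text rather than on a bag-of-words vector. As a rough sketch of the idea (not the paper's specific kernel), the p-spectrum kernel below counts shared length-p substrings between two emails and feeds the resulting Gram matrix to an SVM via scikit-learn's precomputed-kernel interface; the toy emails are invented for illustration.

```python
from collections import Counter
import numpy as np
from sklearn.svm import SVC

def spectrum_features(s, p=3):
    # p-spectrum representation: counts of every length-p substring.
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))

def spectrum_kernel(a, b, p=3):
    # Inner product of the two p-spectrum count vectors.
    fa, fb = spectrum_features(a, p), spectrum_features(b, p)
    return sum(fa[k] * fb[k] for k in fa)

spam = ["win cash now now now", "free cash win win", "cash cash free offer"]
ham = ["meeting at noon today", "see notes from meeting", "call me after noon"]
texts = spam + ham
y = [1, 1, 1, 0, 0, 0]

# Gram matrix of the string kernel; SVC accepts it with kernel='precomputed'.
K = np.array([[spectrum_kernel(a, b) for b in texts] for a in texts], dtype=float)
clf = SVC(kernel="precomputed").fit(K, y)

# A new email is classified from its kernel row against the training set.
k_new = np.array([[spectrum_kernel("free cash offer now", t) for t in texts]],
                 dtype=float)
pred = clf.predict(k_new)
```

An online active variant, as proposed in the paper, would additionally query labels for the emails the classifier is least certain about instead of training once on a fixed batch.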
High-rise architecture in Ufa, Russia, based on crystallography canons
NASA Astrophysics Data System (ADS)
Narimanovich Sabitov, Ildar; Radikovna Kudasheva, Dilara; Yaroslavovich Vdovin, Denis
2018-03-01
The article considers the fundamental steps in the formation of stylistic tendencies in high-rise architecture, based on the studies of C. Willis and M. A. Korotich. Crystallographic shaping is identified as a direction on the basis of M. A. Korotich's classification. This direction is examined in detail, and the main aspects of high-rise architectural form-making based on the forming principles of natural polycrystals are identified. The article describes the transformation of crystal forms into an architectural composition, analyzes constructive systems within the framework of the CTBUH (Council on Tall Buildings and Urban Habitat) classification, and singles out one of its types as the most suitable for use in crystal-like buildings. The last stage of the research is the approbation of the theoretical principles in an experimental project for a high-rise building in Ufa, with a description of its contextual siting.
Towards an International Classification for Patient Safety: a Delphi survey.
Thomson, Richard; Lewalle, Pierre; Sherman, Heather; Hibbert, Peter; Runciman, William; Castro, Gerard
2009-02-01
Interpretation and comparison of patient safety information have been compromised by the lack of a common understanding of the concepts involved. The World Alliance set out to develop an International Classification for Patient Safety (ICPS) to address this, and to test the relevance and acceptability of the draft ICPS and progressively refine it prior to field testing. Two-stage Delphi survey. Quantitative and qualitative analyses informed the review of the ICPS. International web-based survey of expert opinion. Experts in the fields of patient safety, health policy, reporting systems, safety and quality control, classification theory and development, health informatics, consumer advocacy, law and medicine; 253 responded to the first round survey, 30% of whom responded to the second round. In the first round, 14% felt that the conceptual framework was missing at least one class, although it was apparent that most respondents were actually referring to concepts they felt should be included within the classes rather than the classes themselves. There was a need for clarification of several components of the classification, particularly its purpose, structure and depth. After revision and feedback, round 2 results were more positive, but further significant changes were made to the conceptual framework and to the major classes in response to concerns about terminology and relationships between classes. The Delphi approach proved invaluable, as both a consensus-building exercise and consultation process, in engaging stakeholders to support completion of the final draft version of the ICPS. Further refinement will occur.
Towards an International Classification for Patient Safety: a Delphi survey
Thomson, Richard; Lewalle, Pierre; Sherman, Heather; Hibbert, Peter; Runciman, William; Castro, Gerard
2009-01-01
Objective Interpretation and comparison of patient safety information have been compromised by the lack of a common understanding of the concepts involved. The World Alliance set out to develop an International Classification for Patient Safety (ICPS) to address this, and to test the relevance and acceptability of the draft ICPS and progressively refine it prior to field testing. Design Two-stage Delphi survey. Quantitative and qualitative analyses informed the review of the ICPS. Setting International web-based survey of expert opinion. Participants Experts in the fields of patient safety, health policy, reporting systems, safety and quality control, classification theory and development, health informatics, consumer advocacy, law and medicine; 253 responded to the first round survey, 30% of whom responded to the second round. Results In the first round, 14% felt that the conceptual framework was missing at least one class, although it was apparent that most respondents were actually referring to concepts they felt should be included within the classes rather than the classes themselves. There was a need for clarification of several components of the classification, particularly its purpose, structure and depth. After revision and feedback, round 2 results were more positive, but further significant changes were made to the conceptual framework and to the major classes in response to concerns about terminology and relationships between classes. Conclusions The Delphi approach proved invaluable, as both a consensus-building exercise and consultation process, in engaging stakeholders to support completion of the final draft version of the ICPS. Further refinement will occur. PMID:19147596
NASA Astrophysics Data System (ADS)
Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd
2017-10-01
Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers to analyze data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from computational complexity and require longer computational time for classification. The main objective of this research is to provide an overview of the feature selection techniques that have been used to identify the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in future study.
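The filter-then-wrapper combination described above can be illustrated in a few lines: a fast univariate statistic prunes the feature set, then a wrapper that repeatedly refits a learner refines the selection. The synthetic "student" data and the choice of ANOVA F-score plus RFE over logistic regression are illustrative assumptions, not the paper's exact design.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a students' dataset: 200 records, 20 attributes,
# only 5 of which actually carry predictive signal.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

# Filter step: rank features with a cheap univariate score (ANOVA F-test)
# and keep the top 10, without ever training a model.
filt = SelectKBest(f_classif, k=10).fit(X, y)
X_filtered = filt.transform(X)

# Wrapper step: recursive feature elimination drives the selection with a
# learner's own coefficients, refitting as features are dropped.
wrapper = RFE(LogisticRegression(max_iter=1000),
              n_features_to_select=5).fit(X_filtered, y)
X_selected = wrapper.transform(X_filtered)
```

Running the cheap filter first keeps the expensive wrapper tractable, which is exactly the computational-time concern the abstract raises.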
The Carnegie Classification of Institutions of Higher Education. 2000 Edition. A Technical Report.
ERIC Educational Resources Information Center
Carnegie Foundation for the Advancement of Teaching, Menlo Park, CA.
The Carnegie Classification of Institutions of Higher Education is the framework in which institutional diversity in United States higher education is commonly described. Developed in 1971, the Classification was designed to support research in higher education by identifying categories of colleges and universities that would be homogeneous with…
Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT
ERIC Educational Resources Information Center
Lathrop, Quinn N.
2015-01-01
There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…
ERIC Educational Resources Information Center
Plante, Jarrad D.; Cox, Thomas D.
2016-01-01
Service-learning has a longstanding history in higher education and includes three main tenets: academic learning, meaningful community service, and civic learning. The Carnegie Foundation for the Advancement of Teaching created an elective classification system called the Carnegie Community Engagement Classification for higher education…
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. Experiments on two standard data sets demonstrate the better classification performance offered by this framework.
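The decision-fusion step, two classifiers trained on different sources whose class probabilities are merged, can be sketched generically. Here two sklearn models stand in for the CNN-on-HSI and BSIF-on-VIS branches, and equal-weight probability averaging stands in for whatever fusion rule the paper uses; all of these substitutions are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for the two sources: columns 0-9 mimic spectral (HSI) features,
# columns 10-19 mimic spatial/statistical (VIS) features of the same pixels.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One classifier per source, each seeing only its own feature subset.
spectral = RandomForestClassifier(random_state=0).fit(X_tr[:, :10], y_tr)
spatial = LogisticRegression(max_iter=1000).fit(X_tr[:, 10:], y_tr)

# Decision-level fusion: average the per-class probabilities of both models,
# then take the argmax as the fused label.
p_fused = (spectral.predict_proba(X_te[:, :10])
           + spatial.predict_proba(X_te[:, 10:])) / 2
y_pred = p_fused.argmax(axis=1)
accuracy = (y_pred == y_te).mean()
```

Because fusion happens at the decision level, each branch can use whatever feature dimensionality suits its sensor, which is how the framework sidesteps concatenating high-dimensional spectral and spatial features into one vector.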
The United Nations Framework Classification for World Petroleum Resources
Ahlbrandt, T.S.; Blystad, P.; Young, E.D.; Slavov, S.; Heiberg, S.
2003-01-01
The United Nations has developed an international framework classification for solid fuels and minerals (UNFC). This is now being extended to petroleum by building on the joint classification of the Society of Petroleum Engineers (SPE), the World Petroleum Congresses (WPC) and the American Association of Petroleum Geologists (AAPG). The UNFC is a 3-dimensional classification, which is necessary in order to migrate accounts of resource quantities developed on one or two of the axes to a common basis, and which provides for more precise reporting and analysis; this is particularly useful in analyses of contingent resources. The characteristics of the SPE/WPC/AAPG classification have been preserved and enhanced to facilitate improved international and national petroleum resource management, corporate business process management and financial reporting. A UN intergovernmental committee responsible for extending the UNFC to extractive energy resources (coal, petroleum and uranium) will meet in Geneva on October 30th and 31st to review experiences gained and comments received during 2003. A recommended classification will then be delivered for consideration to the United Nations through the Committee on Sustainable Energy of the Economic Commission for Europe (UN ECE).
Deep learning decision fusion for the classification of urban remote sensing data
NASA Astrophysics Data System (ADS)
Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter
2018-01-01
Multisensor data fusion is one of the most common and popular remote sensing data classification topics, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest by the joint use of sensors. Finally, context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments is conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
Alecu, C S; Jitaru, E; Moisil, I
2000-01-01
This paper presents some tools designed and implemented for learning-related purposes; these tools can be downloaded or run on the TeleNurse web site. Among other facilities, the TeleNurse web site now hosts version 1.2 of SysTerN (a terminology system for nursing), which can be downloaded on request, as well as the "Evaluation of Translation" form, which was designed to improve the Romanian translation of the ICNP (the International Classification of Nursing Practice). SysTerN was developed within the framework of the TeleNurse ID--ENTITY Telematics for Health EU project. This version uses the beta version of the ICNP, containing the Phenomena and Actions classifications. This classification is intended to facilitate the documentation of nursing practice by providing a terminology, or vocabulary, for use in the description of the nursing process. The TeleNurse site is bilingual, Romanian-English, in order to enlarge the discussion forum with members from other CEE (or non-CEE) countries.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
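The classification half of the iteration, fitting a mixture of Gaussians to the current optical-parameter estimate and assigning each pixel to a class, can be sketched with scikit-learn's EM implementation. The pixel values and the two-class setup (background vs. inclusion) are invented for illustration; in the actual algorithm the fitted class means and variances would parameterize the Tikhonov prior of the next reconstruction step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical "current estimate" of an optical parameter over 500 pixels:
# background around 0.01, an inclusion around 0.05 (arbitrary units).
pixels = np.concatenate([rng.normal(0.01, 0.002, 400),
                         rng.normal(0.05, 0.004, 100)]).reshape(-1, 1)

# EM step: fit a two-component Gaussian mixture and classify each pixel.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)
class_means = gmm.means_.ravel()

# In the full algorithm, the per-class means/variances found here would form
# the new prior (variable mean and variance Tikhonov term) regularizing the
# next DOT reconstruction iteration.
```

Alternating this mixture fit with a regularized reconstruction step is what lets the method sharpen contrast: pixels pulled toward their class mean reinforce the segmentation, and vice versa.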
Mikhno, Arthur; Nuevo, Pablo Martinez; Devanand, Davangere P.; Parsey, Ramin V.; Laine, Andrew F.
2013-01-01
Multimodality classification of Alzheimer’s disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), is of interest to the medical community. We improve on prior classification frameworks by incorporating multiple features from MRI and PET data obtained with multiple radioligands, fluorodeoxyglucose (FDG) and Pittsburgh compound B (PIB). We also introduce a new MRI feature, invariant shape descriptors based on 3D Zernike moments applied to the hippocampus region. Classification performance is evaluated on data from 17 healthy controls (CTR), 22 MCI, and 17 AD subjects. Zernike significantly outperforms volume, accuracy (Zernike to volume): CTR/AD (90.7% to 71.6%), CTR/MCI (76.2% to 60.0%), MCI/AD (84.3% to 65.5%). Zernike also provides comparable and complementary performance to PET. Optimal accuracy is achieved when Zernike and PET features are combined (accuracy, specificity, sensitivity), CTR/AD (98.8%, 99.5%, 98.1%), CTR/MCI (84.3%, 82.9%, 85.9%) and MCI/AD (93.3%, 93.6%, 93.3%). PMID:24576927
Mikhno, Arthur; Nuevo, Pablo Martinez; Devanand, Davangere P; Parsey, Ramin V; Laine, Andrew F
2012-01-01
Multimodality classification of Alzheimer's disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), is of interest to the medical community. We improve on prior classification frameworks by incorporating multiple features from MRI and PET data obtained with multiple radioligands, fluorodeoxyglucose (FDG) and Pittsburgh compound B (PIB). We also introduce a new MRI feature, invariant shape descriptors based on 3D Zernike moments applied to the hippocampus region. Classification performance is evaluated on data from 17 healthy controls (CTR), 22 MCI, and 17 AD subjects. Zernike significantly outperforms volume, accuracy (Zernike to volume): CTR/AD (90.7% to 71.6%), CTR/MCI (76.2% to 60.0%), MCI/AD (84.3% to 65.5%). Zernike also provides comparable and complementary performance to PET. Optimal accuracy is achieved when Zernike and PET features are combined (accuracy, specificity, sensitivity), CTR/AD (98.8%, 99.5%, 98.1%), CTR/MCI (84.3%, 82.9%, 85.9%) and MCI/AD (93.3%, 93.6%, 93.3%).
Pipeline Processing With an Iterative, Context-Based Detection Model
2015-04-19
Keywords: pattern detectors, correlation detectors, subspace detectors, matched field detectors, nuclear explosion monitoring. Figures include: three days of SPAO-BHZ data dominated by signals from nearby icequakes; 94 detections produced by detector 92532 and 148 detections from detector 92541 during the first run of the framework.
ERIC Educational Resources Information Center
Snoek, Marco; Swennen, Anja; van der Klink, Marcel
2011-01-01
This study examines how the contemporary European policy debate addresses the further development of the quality of teacher educators. A classification framework based on the literature on professionalism was used to compare European and Member State policy actions and measures on the quality of teacher educators through an analysis of seven…
ERIC Educational Resources Information Center
McCormack, Jane; McLeod, Sharynne; Harrison, Linda J.; McAllister, Lindy
2010-01-01
Purpose: To explore the application of the Activities and Participation component of the International Classification of Functioning, Disability and Health - Children and Youth (ICF-CY, World Health Organization, 2007) as a framework for investigating the perceived impact of speech impairment in childhood. Method: A 32-item questionnaire based on…
Age and gender estimation using Region-SIFT and multi-layered SVM
NASA Astrophysics Data System (ADS)
Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun
2018-04-01
In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce a region-SIFT feature extraction method based on facial landmarks: we first define sub-regions of the face and then extract SIFT features from each sub-region. To reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out with the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
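The layered idea, predict gender first, then route the sample to an age classifier trained only on that gender, can be sketched as below. The random features standing in for PCA/LDA-reduced region-SIFT descriptors and the coarse three-way age grouping are assumptions made for a compact illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for PCA/LDA-reduced region-SIFT descriptors, with a
# gender label (0/1) per sample and a coarse age group (0/1/2).
X, gender = make_classification(n_samples=300, n_features=30, random_state=0)
age_group = rng.integers(0, 3, size=300)

# Layer 1: a gender classifier trained on all samples.
gender_clf = SVC().fit(X, gender)

# Layer 2: one age classifier per gender, trained on that subgroup only.
age_clfs = {g: SVC().fit(X[gender == g], age_group[gender == g])
            for g in (0, 1)}

def estimate(x):
    """Route a sample through the layered SVMs: gender first, then age."""
    x = x.reshape(1, -1)
    g = int(gender_clf.predict(x)[0])
    return g, int(age_clfs[g].predict(x)[0])

g, a = estimate(X[0])
```

The payoff of the layering is that each age model only has to separate age patterns within one gender, where appearance cues are more homogeneous, instead of across both at once.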
A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.
Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping
2017-01-30
Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State-of-the-art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations as samples. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach we call unambiguous components. We use this to estimate the image component with constrained variability that is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both the robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as the sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions in imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
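The comparison against sparsity-based baselines rests on looking at which features each sparse model selects and how much the selections overlap. A minimal sketch of that bookkeeping with LASSO and elastic net on synthetic data is below; the voxel-style regression setup and the regularization strengths are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso

# Synthetic stand-in for voxel-wise fMRI features predicting a clinical score:
# more features than samples, only a small subset informative.
X, y = make_regression(n_samples=120, n_features=200, n_informative=20,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
enet = ElasticNet(alpha=1.0, l1_ratio=0.5).fit(X, y)

# "Selected regions" = features with nonzero coefficients in each model.
sel_lasso = set(np.flatnonzero(lasso.coef_))
sel_enet = set(np.flatnonzero(enet.coef_))

# Jaccard overlap of the two selections, the kind of agreement statistic
# behind the paper's "roughly two thirds of the same regions" comparison.
overlap = len(sel_lasso & sel_enet) / len(sel_lasso | sel_enet)
```

Re-running this on resampled subsets and checking how stable `sel_lasso` and `sel_enet` are is the simplest way to see the repeatability problem the abstract describes.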
Systematic evaluation of deep learning based detection frameworks for aerial imagery
NASA Astrophysics Data System (ADS)
Sommer, Lars; Steinmann, Lucas; Schumann, Arne; Beyerer, Jürgen
2018-04-01
Object detection in aerial imagery is crucial for many applications in the civil and military domains. In recent years, deep learning based object detection frameworks have significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which differ considerably from aerial imagery, especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks, including Faster R-CNN, R-FCN, and the Single Shot MultiBox Detector (SSD), to aerial imagery. We discuss in detail the adaptations that improve the detection accuracy of all frameworks. As the output of deeper convolutional layers comprises more semantic information, these layers are generally used in detection frameworks as the feature map to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location, using so-called anchor or default boxes as references. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Furthermore, we evaluate the impact of the performed adaptations on two publicly available datasets to account for various ground sampling distances and differing backgrounds. The presented adaptations can be used as a guideline for further datasets or detection frameworks.
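One common way to choose dataset-appropriate anchor sizes, used for example in YOLOv2-style pipelines and a reasonable stand-in for the tuning the abstract describes, is to cluster the ground-truth box dimensions and take the cluster centers as anchors. The box statistics below are invented to mimic an aerial dataset dominated by small objects.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical ground-truth box sizes (width, height in pixels) for an aerial
# dataset: many small vehicle-like objects plus a few large building-like ones.
small = rng.normal(loc=(20, 12), scale=(4, 3), size=(300, 2))
large = rng.normal(loc=(90, 70), scale=(15, 12), size=(50, 2))
boxes = np.abs(np.vstack([small, large]))

# Cluster the dimensions; the centers become the anchor/default box sizes,
# replacing the COCO/VOC defaults the frameworks ship with.
anchors = KMeans(n_clusters=4, n_init=10, random_state=0).fit(boxes).cluster_centers_
anchors = anchors[np.argsort(anchors[:, 0])]  # sort by width, small to large
```

With anchors matched to the actual size distribution, the regression head only has to predict small offsets, which is why this choice matters so much for the tiny objects typical of aerial ground sampling distances.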
Dasgupta, Nilanjan; Carin, Lawrence
2005-04-01
Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters are then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI are presented along with the feature extraction and target classification via the RVM.
Lyons, Ronan A; Finch, Caroline F; McClure, Rod; van Beeck, Ed; Macey, Steven
2010-09-01
Over recent years, there has been increasing recognition that the burden of injuries and violence includes more than just the direct and indirect monetary costs associated with their medical outcomes. However, quantification of the total burden has been seriously hampered by lack of a framework for considering the range of outcomes which comprise the burden, poor identification of the outcomes and their imprecise measurement. This article proposes a new conceptual framework, the List of All Deficits (or LOAD) Framework, that has been developed from extensive expert discussion and consensus meetings to facilitate the measurement of the full burden of injuries and violence. The LOAD Framework recognises the multidimensional nature of injury burden across individual, family and societal domains. This classification of potential consequences of injury was built on the International Classification of Functioning concept of disability. Examples of empirical support for each consequence were obtained from the scientific literature. Determining the multidimensional injury burden requires the assessment and combination of 20 domains of potential consequences. The resulting LOAD Framework classification and concept diagram describes 12 groups of injury consequences for individuals, three for family and close friends and five for wider society. Understanding the extent of the negative implications (or deficits) of injury, through application of the LOAD Framework, is needed to put existing burden of injury studies into context and to highlight the inter-relationship between the direct and indirect burden of injury relative to other conditions.
Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W
2012-02-20
Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. 
Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.
High-order distance-based multiview stochastic learning in image classification.
Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng
2014-12-01
How do we find all images in a larger set that have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications in content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to utilize them efficiently is a challenge. The traditional concatenation scheme, which links the features of different views into one long vector, is inappropriate because each view has its own specific statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace the pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and to obtain the view coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.
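The core contrast, keeping views separate and combining per-view distances with weights rather than concatenating features, can be shown in miniature. This sketch uses ordinary pairwise distances and fixed weights; HD-MSL itself learns the coefficients and replaces pairwise distances with hypergraph-based high-order ones, so everything here is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical views of the same 6 images with different dimensionalities
# (e.g., a color histogram and a texture descriptor).
view1 = rng.normal(size=(6, 8))
view2 = rng.normal(size=(6, 5))

def pairwise_dists(V):
    # Euclidean distance matrix within one view.
    return np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)

# Instead of concatenating views into one long vector, keep a distance matrix
# per view and blend them with per-view coefficients (fixed here; learned and
# generalized to high-order hypergraph distances in HD-MSL).
alpha = np.array([0.6, 0.4])
D = alpha[0] * pairwise_dists(view1) + alpha[1] * pairwise_dists(view2)
```

Because the blend happens in distance space, the two views never need compatible scales or dimensions, which is exactly the objection raised against concatenation.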
[The informational support of statistical observation related to children disability].
Son, I M; Polikarpov, A V; Ogrizko, E V; Golubeva, T Yu
2016-01-01
Within the framework of the Convention on the Rights of Persons with Disabilities, a revision is specified concerning the criteria for identifying disability in children and reforming the system of medical social expertise according to international standards of health indices and health-related indices. In this connection, it is important to consider the relationship between alterations in the forms of Federal statistical monitoring concerning the registration of disabled children in the Russian Federation and the classifications of health indices and health-related indices applied in identifying disability. The article presents an analysis of the relationship between alterations in the forms of Federal statistical monitoring concerning the registration of disabled children in the Russian Federation and the applied classifications used in identifying disability: the International Classification of Impairments, Disabilities and Handicaps (ICIDH), the International Classification of Functioning, Disability and Health (ICF), and the International Classification of Functioning, Disability and Health, version for children and youth (ICF-CY). Intersectoral interaction is considered within the framework of statistics on children's disability.
Development of a definition, classification system, and model for cultural geology
NASA Astrophysics Data System (ADS)
Mitchell, Lloyd W., III
The concept for this study is based upon a personal interest by the author, an American Indian, in promoting cultural perspectives in undergraduate college teaching and learning environments. Most academicians recognize that merged fields can enhance undergraduate curricula. However, conflict may occur when instructors attempt to merge social science fields such as history or philosophy with geoscience fields such as mining and geomorphology. For example, ideologies of Earth structures derived from scientific methodologies may conflict with historical and spiritual understandings of Earth structures held by American Indians. Specifically, this study addresses the problem of how to combine cultural studies with the geosciences into a new merged academic discipline called cultural geology. This study further attempts to develop the merged field of cultural geology using an approach consisting of three research foci: a definition, a classification system, and a model. Literature reviews were conducted for all three foci. Additionally, to better understand merged fields, a literature review was conducted specifically for academic fields that merged social and physical sciences. Methodologies concentrated on the three research foci: definition, classification system, and model. The definition was derived via a two-step process: developing keyword hierarchical ranking structures, followed by creating and analyzing semantic word meaning lists. The classification system was developed by reviewing 102 classification systems and incorporating selected components into a system framework. The cultural geology model was also created via a two-step process: a literature review of scientific models was conducted, and then the definition and classification system were incorporated into a model intended to reflect the realm of cultural geology. A course syllabus was then developed that incorporated the resulting definition, classification system, and model.
This study concludes that cultural geology can be introduced as a merged discipline by using a three-foci framework consisting of a definition, classification system, and model. Additionally, this study reveals that cultural beliefs, attitudes, and behaviors, can be incorporated into a geology course during the curriculum development process, using an approach known as 'learner-centered'. This study further concludes that cultural beliefs, derived from class members, are an important source of curriculum materials.
Matrix and Tensor Completion on a Human Activity Recognition Framework.
Savvaki, Sofia; Tsagkatakis, Grigorios; Panousopoulou, Athanasia; Tsakalides, Panagiotis
2017-11-01
Sensor-based activity recognition is encountered in innumerable applications in the arena of pervasive healthcare and plays a crucial role in biomedical research. Nonetheless, the frequent situation of unobserved measurements impairs the ability of machine learning algorithms to efficiently extract context from raw streams of data. In this paper, we study the problem of accurate estimation of missing multimodal inertial data, and we propose a classification framework that considers the reconstruction of subsampled data during the test phase. We introduce the concept of forming the available data streams into low-rank two-dimensional (2-D) and 3-D Hankel structures, and we exploit data redundancies using sophisticated imputation techniques, namely matrix and tensor completion. Moreover, we examine the impact of reconstruction on the classification performance by experimenting with several state-of-the-art classifiers. The system is evaluated with respect to different data structuring scenarios, the volume of data available for reconstruction, and various levels of missing values per device. Finally, the tradeoff between subsampling accuracy and energy conservation in wearable platforms is examined. Our analysis relies on two public datasets containing inertial data, which extend to numerous activities, multiple sensing parameters, and body locations. The results highlight that robust classification accuracy can be achieved through recovery, even for extremely subsampled data streams.
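The Hankel structuring plus completion idea can be sketched as follows; this is a simplified hard-thresholded SVD iteration with synthetic data, standing in for the more sophisticated matrix and tensor completion solvers the paper evaluates:

```python
import numpy as np

rng = np.random.default_rng(0)

def hankel(stream, rows):
    # Arrange a 1-D sensor stream into a 2-D Hankel structure: each row
    # is a shifted window, so the matrix is low-rank for smooth signals
    # and its redundancy can be exploited by completion methods.
    cols = len(stream) - rows + 1
    return np.array([stream[i:i + cols] for i in range(rows)])

def lowrank_impute(M, mask, rank, iters=100):
    # Iterative hard-thresholded SVD: fill missing entries with the
    # rank-truncated reconstruction while keeping observed entries fixed.
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0
        L = (U * s) @ Vt
        X = np.where(mask, M, L)   # observed entries stay exact
    return X

stream = np.sin(np.linspace(0, 4 * np.pi, 40))   # synthetic inertial trace
H = hankel(stream, rows=8)                        # rank-2 for a sinusoid
mask = rng.random(H.shape) > 0.3                  # ~30% entries missing
R = lowrank_impute(H, mask, rank=2)
```

A uniformly sampled sinusoid yields an exactly rank-2 Hankel matrix, which is why the truncation rank of 2 suffices here.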
Candida albicans Pathogenesis: Fitting within the Host-Microbe Damage Response Framework
Kong, Eric F.; Tsui, Christina; Nguyen, M. Hong; Clancy, Cornelius J.; Fidel, Paul L.; Noverr, Mairi
2016-01-01
Historically, the nature and extent of host damage by a microbe were considered highly dependent on virulence attributes of the microbe. However, it has become clear that disease is a complex outcome which can arise because of pathogen-mediated damage, host-mediated damage, or both, with active participation from the host microbiota. This awareness led to the formulation of the damage response framework (DRF), a revolutionary concept that defined microbial virulence as a function of host immunity. The DRF outlines six classifications of host damage outcomes based on the microbe and the strength of the immune response. In this review, we revisit this concept from the perspective of Candida albicans, a microbial pathogen uniquely adapted to its human host. This fungus commonly colonizes various anatomical sites without causing notable damage. However, depending on environmental conditions, a diverse array of diseases may occur, ranging from mucosal to invasive systemic infections resulting in microbe-mediated and/or host-mediated damage. Remarkably, C. albicans infections can fit into all six DRF classifications, depending on the anatomical site and associated host immune response. Here, we highlight some of these diverse and site-specific diseases and how they fit the DRF classifications, and we describe the animal models available to uncover pathogenic mechanisms and related host immune responses. PMID:27430274
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
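Of the three combined ingredients, the k-nearest-neighbor vote is the simplest to sketch; the toy points and labels below are hypothetical, and the Bayesian-regression and pairwise-comparison parts of the hybrid are omitted:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    # Plain k-nearest-neighbor majority vote: one ingredient of the
    # hybrid pairwise/Bayesian/k-NN scheme, not the full method.
    dists = sorted((math.dist(p, x), lab) for p, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors for two diagnostic classes
train = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["normal", "normal", "abnormal", "abnormal"]
pred = knn_predict(train, labels, (5.5, 4.8), k=3)
```

In the relevance-feedback setting described above, expert-corrected labels would be folded back into `train`/`labels` between iterations.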
Using self-organizing maps to develop ambient air quality classifications: a time series example
2014-01-01
Background: Development of exposure metrics that capture features of the multipollutant environment is needed to investigate health effects of pollutant mixtures. This is a complex problem that requires development of new methodologies. Objective: Present a self-organizing map (SOM) framework for creating ambient air quality classifications that group days with similar multipollutant profiles. Methods: Eight years of day-level data from Atlanta, GA, for ten ambient air pollutants collected at a central monitor location were classified using SOM into a set of day types based on their day-level multipollutant profiles. We present strategies for using SOM to develop a multipollutant metric of air quality and compare results with more traditional techniques. Results: Our analysis found that 16 types of days reasonably describe the day-level multipollutant combinations that appear most frequently in our data. Multipollutant day types ranged from conditions when all pollutants measured low to days exhibiting relatively high concentrations for either primary or secondary pollutants or both. The temporal nature of class assignments indicated substantial heterogeneity in day type frequency distributions (~1%-14%), relatively short-term durations (<2 day persistence), and long-term and seasonal trends. Meteorological summaries revealed strong day type weather dependencies, and pollutant concentration summaries provided interesting scenarios for further investigation. Comparison with traditional methods found SOM produced similar classifications with added insight regarding between-class relationships. Conclusion: We find SOM to be an attractive framework for developing ambient air quality classifications because the approach eases interpretation of results by allowing users to visualize classifications on an organized map.
The presented approach provides an appealing tool for developing multipollutant metrics of air quality that can be used to support multipollutant health studies. PMID:24990361
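A minimal self-organizing map along these lines, mapping day-level multipollutant vectors to a small grid of prototype "day types", might look like the sketch below (the random data and the 4x4 grid are assumptions; the study used ten pollutants and found 16 day types, which a 4x4 map matches):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), iters=500, lr=0.5, sigma=1.5):
    # Minimal 2-D self-organizing map: each grid node holds a prototype
    # multipollutant profile, pulled toward presented samples with a
    # neighborhood kernel so nearby nodes learn similar profiles.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    W = rng.random((grid[0] * grid[1], data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distance^2
        h = np.exp(-d2 / (2 * sigma ** 2))              # neighborhood kernel
        W += lr * (1 - t / iters) * h[:, None] * (x - W)
    return W

def classify(W, x):
    # Assign a day to the node with the closest prototype ("day type")
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))

days = rng.random((200, 10))   # synthetic: 200 days x 10 pollutant levels
W = train_som(days)
day_type = classify(W, days[0])
```

The grid layout is what gives SOM its interpretability advantage noted in the conclusion: neighboring nodes represent similar multipollutant conditions, so classifications can be read off an organized map.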
Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG
Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert
2015-01-01
Goal: We present and evaluate a wearable high-density dry-electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods: The system integrates a 64-channel dry EEG form factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results: Simulations yielded high accuracy (AUC = 0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance, with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion: We demonstrated the feasibility of real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance: This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149
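The AUC figures reported above are rank-based; a minimal sketch of that statistic, with hypothetical classifier scores, is:

```python
def auc(pos_scores, neg_scores):
    # Rank-based AUC: the probability that a randomly chosen positive
    # case scores above a randomly chosen negative one (ties count half).
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores for error trials (positives) vs. correct trials
score = auc([0.9, 0.7, 0.8], [0.2, 0.4, 0.75])
```

An AUC of 0.5 corresponds to chance-level classification, which is the baseline the reported connectivity-based results are compared against.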
Belmar, Oscar; Velasco, Josefa; Martinez-Capel, Francisco
2011-05-01
Hydrological classification constitutes the first step of a new holistic framework for developing regional environmental flow criteria: the "Ecological Limits of Hydrologic Alteration" (ELOHA). The aim of this study was to develop a classification for 390 stream sections of the Segura River Basin based on 73 hydrological indices that characterize their natural flow regimes. The hydrological indices were calculated from 25 years of natural monthly flows (1980/81-2005/06) derived from a rainfall-runoff model developed by the Spanish Ministry of Environment and Public Works. These indices included, on a monthly or annual basis, measures of duration of droughts and of central tendency and dispersion of flow magnitude (average, low and high flow conditions). Principal Component Analysis (PCA) indicated high redundancy among most hydrological indices, as well as two gradients: flow magnitude for mainstream rivers and temporal variability for tributary streams. A classification with eight flow-regime classes was chosen as the most easily interpretable in the Segura River Basin, which was supported by ANOSIM analyses. These classes can be simplified into four broader groups with different seasonal discharge patterns: large rivers, perennial stable streams, perennial seasonal streams, and intermittent and ephemeral streams. They showed a high degree of spatial cohesion, following a gradient associated with climatic aridity from NW to SE, and were well defined in terms of the fundamental variables in Mediterranean streams: magnitude and temporal variability of flows. Therefore, this classification is a fundamental tool to support water management and planning in the Segura River Basin. Future research will allow us to study the flow alteration-ecological response relationship for each river type, and set the basis for designing scientifically credible environmental flows following the ELOHA framework.
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD. Copyright © 2017 Elsevier B.V. All rights reserved.
Ecosystem services classification: A systems ecology perspective of the cascade framework.
La Notte, Alessandra; D'Amato, Dalia; Mäkinen, Hanna; Paracchini, Maria Luisa; Liquete, Camino; Egoh, Benis; Geneletti, Davide; Crossman, Neville D
2017-03-01
Ecosystem services research faces several challenges stemming from the plurality of interpretations of classifications and terminologies. In this paper we identify two main challenges with current ecosystem services classification systems: i) the inconsistency across concepts, terminology and definitions; and ii) the mixing up of processes and end-state benefits, or flows and assets. Although different ecosystem service definitions and interpretations can be valuable for enriching the research landscape, it is necessary to address the existing ambiguity to improve comparability among ecosystem-service-based approaches. Using the cascade framework as a reference, and Systems Ecology as a theoretical underpinning, we aim to address the ambiguity across typologies. The cascade framework links ecological processes with elements of human well-being following a pattern similar to a production chain. Systems Ecology is a long-established discipline which provides insight into complex relationships between people and the environment. We present a refreshed conceptualization of ecosystem services which can support ecosystem service assessment techniques and measurement. We combine the notions of biomass, information and interaction from Systems Ecology with the ecosystem services conceptualization to improve definitions and clarify terminology. We argue that ecosystem services should be defined as the interactions (i.e. processes) of the ecosystem that produce a change in human well-being, while ecosystem components or goods, i.e. countable as biomass units, are only proxies in the assessment of such changes. Furthermore, Systems Ecology can support a re-interpretation of the ecosystem services conceptualization and related applied research, where more emphasis is needed on the underpinning complexity of the ecological system.
Adding Pluggable and Personalized Natural Control Capabilities to Existing Applications
Lamberti, Fabrizio; Sanna, Andrea; Carlevaris, Gilles; Demartini, Claudio
2015-01-01
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities. PMID:25635410
Classification of US hydropower dams by their modes of operation
McManamay, Ryan A.; Oigbokie, II, Clement O.; Kao, Shih -Chieh; ...
2016-02-19
A key challenge to understanding ecohydrologic responses to dam regulation is the absence of a universally transferable classification framework for how dams operate. In the present paper, we develop a classification system to organize the modes of operation (MOPs) for U.S. hydropower dams and powerplants. To determine the full diversity of MOPs, we mined federal documents, open-access data repositories, and internet sources. We then used CART classification trees to predict MOPs based on physical characteristics, regulation, and project generation. Finally, we evaluated how much variation MOPs explained in sub-daily discharge patterns for stream gages downstream of hydropower dams. After reviewing information for 721 dams and 597 power plants, we developed a 2-tier hierarchical classification based on 1) the storage and control of flows to powerplants, and 2) the presence of a diversion around the natural stream bed. This resulted in nine tier-1 MOPs, representing a continuum of operations from strictly peaking, to reregulating, to run-of-river, and two tier-2 MOPs, representing diversion and integral dam-powerhouse configurations. Although MOPs differed in physical characteristics and energy production, classification trees had low accuracies (<62%), which suggested accurate evaluations of MOPs may require individual attention. MOPs and dam storage explained 20% of the variation in downstream subdaily flow characteristics and showed consistent alterations in subdaily flow patterns relative to reference streams. Lastly, this standardized classification scheme is important for future research, including estimating reservoir operations for large-scale hydrologic models and evaluating project economics, environmental impacts, and mitigation.
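CART grows its trees by greedily choosing splits that minimize Gini impurity; a single-feature sketch of that split search, with hypothetical storage values and MOP labels, is:

```python
from collections import Counter

def gini(labels):
    # Gini impurity used by CART to score the purity of a node
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    # Scan thresholds on one predictor (e.g., dam storage) and return
    # the cut minimizing the weighted Gini impurity of the two children.
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

storage = [1, 2, 3, 50, 60, 80]               # hypothetical storage values
mop = ["run-of-river"] * 3 + ["peaking"] * 3  # hypothetical MOP labels
threshold, impurity = best_split(storage, mop)
```

A full CART tree repeats this search recursively over all predictors; the low accuracies reported above suggest no such combination of physical characteristics cleanly separates the MOP classes.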
Das, D K; Maiti, A K; Chakraborty, C
2015-03-01
In this paper, we propose a comprehensive image characterization cum classification framework for malaria-infected stage detection using microscopic images of thin blood smears. The methodology mainly includes microscopic imaging of Leishman-stained blood slides, noise reduction and illumination correction, erythrocyte segmentation, and feature selection followed by machine classification. Amongst three image segmentation algorithms (namely, rule-based, Chan-Vese-based and marker-controlled watershed methods), the marker-controlled watershed technique provides better boundary detection of erythrocytes, especially in overlapping situations. Microscopic features at intensity, texture and morphology levels are extracted to discriminate infected and noninfected erythrocytes. In order to achieve a subgroup of potential features, feature selection techniques, namely F-statistic and information gain criteria, are considered here for ranking. Finally, five different classifiers, namely Naive Bayes, multilayer perceptron neural network, logistic regression, classification and regression tree (CART), and RBF neural network, have been trained and tested on 888 erythrocytes (infected and noninfected) for each features' subset. Performance evaluation of the proposed methodology shows that the multilayer perceptron network provides higher accuracy for malaria-infected erythrocyte recognition and infected stage classification. Results show that the top 90 features ranked by F-statistic (specificity: 98.64%, sensitivity: 100%, PPV: 99.73% and overall accuracy: 96.84%) and the top 60 features ranked by information gain (specificity: 97.29%, sensitivity: 100%, PPV: 99.46% and overall accuracy: 96.73%) provide the best results for malaria-infected stage classification. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
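The F-statistic ranking criterion can be sketched for a single feature over two erythrocyte groups; the feature values below are invented:

```python
def f_statistic(infected, normal):
    # One-way ANOVA F ratio for one feature over two groups: the ratio
    # of between-group to within-group variance. Larger values mean the
    # feature separates the classes better, which drives the ranking.
    def mean(xs):
        return sum(xs) / len(xs)
    grand = mean(infected + normal)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in (infected, normal))
    within = sum((x - mean(g)) ** 2 for g in (infected, normal) for x in g)
    k, n = 2, len(infected) + len(normal)
    return (between / (k - 1)) / (within / (n - k))

# Hypothetical texture-feature values per erythrocyte group
f = f_statistic([0.9, 1.0, 1.1], [0.1, 0.2, 0.3])
```

Ranking simply computes this ratio per feature and keeps the top-scoring subset (the top 90 in the study above).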
Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.
2011-01-01
Pattern classification of brain imaging data can enable the automatic detection of differences in cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information related to the regions of the brain that are most relevant to detect these differences by means of feature selection procedures, which are also well-suited to deal with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data was processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95% estimated with a leave-two-out cross-validation procedure was achieved by doing multi-source data classification. In addition, it is shown that the classification accuracy rate obtained by using multi-source data surpasses that reached by using single-source data, hence showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948
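The recursive elimination loop itself is generic; the sketch below scores features by absolute class-mean difference as a stand-in for the paper's composite-kernel relevance estimates, with toy data:

```python
def mean_diff(column, labels):
    # Stand-in relevance score: absolute difference of class means
    a = [v for v, l in zip(column, labels) if l == 0]
    b = [v for v, l in zip(column, labels) if l == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def rfe(X, labels, keep=2):
    # Recursive feature elimination: repeatedly drop the least relevant
    # feature until only `keep` remain. X is a list of samples, each a
    # list of feature values (regions, in the paper's setting).
    remaining = list(range(len(X[0])))
    while len(remaining) > keep:
        scores = {j: mean_diff([row[j] for row in X], labels)
                  for j in remaining}
        remaining.remove(min(scores, key=scores.get))
    return remaining

# Toy data: feature 1 is uninformative (same mean in both classes)
X = [[0.0, 5.0, 1.0], [0.1, 5.1, 0.9], [1.0, 5.0, 0.2], [0.9, 5.1, 0.1]]
y = [0, 0, 1, 1]
selected = rfe(X, y, keep=2)
```

In the actual framework the relevance score comes from the composite-kernel classifier itself, so elimination accounts for nonlinear interactions that a mean-difference score cannot capture.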
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
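In a vector space model over concepts, relevance-feedback expansion can be illustrated with a textbook Rocchio update (a stand-in for the paper's co-occurrence and thesaurus-based expansion); the concept weights are hypothetical:

```python
import math

def cosine(u, v):
    # Cosine similarity between two concept-weight vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand(query, relevant, beta=0.5):
    # Rocchio-style expansion: pull the query's concept-weight vector
    # toward the centroid of relevant feedback images.
    n = len(relevant)
    centroid = [sum(doc[i] for doc in relevant) / n
                for i in range(len(query))]
    return [q + beta * c for q, c in zip(query, centroid)]

query = [1.0, 0.0, 0.0]                       # weights over 3 "concepts"
relevant = [[0.8, 0.6, 0.0], [0.6, 0.8, 0.0]]  # feedback images
q2 = expand(query, relevant)
```

After expansion the query gains weight on the second concept, so images matching that correlated concept rank higher under cosine similarity.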
Audigé, Laurent; Cornelius, Carl-Peter; Ieva, Antonio Di; Prein, Joachim
2014-01-01
Validated trauma classification systems are the sole means to provide the basis for reliable documentation and evaluation of patient care, which will open the gateway to evidence-based procedures and healthcare in the coming years. With the support of AO Investigation and Documentation, a classification group was established to develop and evaluate a comprehensive classification system for craniomaxillofacial (CMF) fractures. Blueprints for fracture classification in the major constituents of the human skull were drafted and then evaluated by a multispecialty group of experienced CMF surgeons and a radiologist in a structured process during iterative agreement sessions. At each session, surgeons independently classified the radiological imaging of up to 150 consecutive cases with CMF fractures. During subsequent review meetings, all discrepancies in the classification outcome were critically appraised for clarification and improvement until consensus was reached. The resulting CMF classification system is structured in a hierarchical fashion with three levels of increasing complexity. The most elementary level 1 simply distinguishes four fracture locations within the skull: mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). Levels 2 and 3 focus on further defining the fracture locations and for fracture morphology, achieving an almost individual mapping of the fracture pattern. This introductory article describes the rationale for the comprehensive AO CMF classification system, discusses the methodological framework, and provides insight into the experiences and interactions during the evaluation process within the core groups. The details of this system in terms of anatomy and levels are presented in a series of focused tutorials illustrated with case examples in this special issue of the Journal. PMID:25489387
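The four level-1 location codes quoted above lend themselves to a trivial lookup; this sketch is illustrative only and reproduces just what the abstract states (levels 2 and 3 are omitted):

```python
# Level-1 fracture locations and codes quoted from the AO CMF system
CMF_LEVEL1 = {
    91: "mandible",
    92: "midface",
    93: "skull base",
    94: "cranial vault",
}

def level1_location(code):
    # Levels 2 and 3 refine location and morphology; only the four
    # level-1 codes given in the abstract are reproduced here.
    return CMF_LEVEL1.get(code, "unclassified")

loc = level1_location(92)
```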
NASA Astrophysics Data System (ADS)
Cheng, Guanhui; Huang, Guohe; Dong, Cong; Zhu, Jinxin; Zhou, Xiong; Yao, Y.
2017-03-01
An evaluation-classification-downscaling-based climate projection (ECDoCP) framework is developed to fill a methodological gap in general circulation model (GCM)-driven, statistical-downscaling-based climate projections. ECDoCP includes four interconnected modules: GCM evaluation, climate classification, statistical downscaling, and climate projection. Monthly averages of daily minimum (Tmin) and maximum (Tmax) temperature and daily cumulative precipitation (Prec) over the Athabasca River Basin (ARB) at a 10 km resolution in the 21st century under four Representative Concentration Pathways (RCPs) are projected through ECDoCP. At the octodecadal scale, temperature and precipitation would increase; after bias correction, temperature would increase with a decreased increment, while precipitation would increase only under RCP 8.5. Interannual variability of climate anomalies would increase from RCPs 4.5, 2.6, 6.0 to 8.5 for temperature and from RCPs 2.6, 4.5, 6.0 to 8.5 for precipitation. Bidecadal averaged climate anomalies would decrease from December-January-February (DJF), March-April-May (MAM), September-October-November (SON) to June-July-August (JJA) for Tmin, from DJF, SON, MAM to JJA for Tmax, and from JJA, MAM, SON to DJF for Prec. Climate projection uncertainties would decrease in May to September for temperature and in November to April for precipitation. Spatial climatic variability would not obviously change with RCPs; climatic anomalies are highly correlated with climate-variable magnitudes. Climate anomalies would decrease from upstream to downstream for temperature, and precipitation would follow an opposite pattern. The north end and the other zones would have colder and warmer days, respectively; precipitation would decrease in the upstream and increase in the remaining region. Climate change might lead to issues, e.g., accelerated glacier/snow melting, that deserve the attention of researchers and the public.
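The bias correction step mentioned above is not specified in detail in the abstract. As an illustration only, a simple additive delta-change correction (a common choice in downscaling pipelines, not necessarily the one used here) can be sketched in a few lines; all numbers are invented:

```python
# Hedged sketch: additive delta-change bias correction for temperature.
# The paper's actual correction method is not specified; this illustrates
# one common approach. All numbers are invented.
def bias_correct_additive(projected, model_hist_mean, obs_hist_mean):
    """Shift projected values by the model's historical bias."""
    bias = model_hist_mean - obs_hist_mean
    return [x - bias for x in projected]

# Model historically ran 0.3 degrees warm, so projections shift down by 0.3.
corrected = bias_correct_additive([1.0, 2.0, 3.0],
                                  model_hist_mean=0.5, obs_hist_mean=0.2)
print(corrected)
```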
Na, Tong; Xie, Jianyang; Zhao, Yitian; Zhao, Yifan; Liu, Yue; Wang, Yongtian; Liu, Jiang
2018-05-09
Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and arteries/veins classification, are of great assistance to the ophthalmologist in terms of diagnosis and treatment of a wide spectrum of diseases. We propose a new framework for precisely segmenting retinal vasculatures, constructing retinal vascular network topology, and separating the arteries and veins. A nonlocal total variation inspired Retinex model is employed to remove the image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), and it has superior performance when compared with unsupervised segmentation methods, with accuracy of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracy of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of arteries/veins classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE and VICAVR) are 0.909, 0.910, and 0.907, respectively. The experimental results show that the proposed framework has effectively addressed the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of arteries/veins classification. © 2018 American Association of Physicists in Medicine.
Pettine, Maurizio; Casentini, Barbara; Fazi, Stefano; Giovanardi, Franco; Pagnotta, Romano
2007-09-01
The trophic status classification of coastal waters at the European scale requires the availability of harmonised indicators and procedures. The composite trophic status index (TRIX) provides useful metrics for the assessment of the trophic status of coastal waters. It was originally developed for Italian coastal waters and then applied in many European seas (Adriatic, Tyrrhenian, Baltic, Black and Northern seas). The TRIX index does not fulfil the classification procedure suggested by the WFD for two reasons: (a) it is based on an absolute trophic scale without any normalization to type-specific reference conditions; (b) it makes an ex ante aggregation of biological (Chl-a) and physico-chemical (oxygen, nutrients) quality elements, instead of an ex post integration of separate evaluations of biological and subsequent chemical quality elements. A revisitation of the TRIX index in the light of the European Water Framework Directive (WFD, 2000/60/EC) and new TRIX derived tools are presented in this paper. A number of Italian coastal sites were grouped into different types based on a thorough analysis of their hydro-morphological conditions, and type-specific reference sites were selected. Unscaled TRIX values (UNTRIX) for reference and impacted sites have been calculated and two alternative UNTRIX-based classification procedures are discussed. The proposed procedures, to be validated on a broader scale, provide users with simple tools that give an integrated view of nutrient enrichment and its effects on algal biomass (Chl-a) and on oxygen levels. This trophic evaluation along with phytoplankton indicator species and algal blooms contribute to the comprehensive assessment of phytoplankton, one of the biological quality elements in coastal waters.
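For orientation, the TRIX formulation discussed above can be sketched as follows. The scaling coefficients are taken from the commonly cited Vollenweider-style definition and should be treated as an assumption here; UNTRIX is taken, as the abstract describes, to be the unscaled log-product. Input values are illustrative only:

```python
import math

# Hedged sketch of the TRIX index (Vollenweider-style scaling; coefficients
# assumed, not confirmed by the abstract). UNTRIX denotes the unscaled
# log-product of the four trophic components. Inputs are toy values.
def untrix(chla, abs_dev_oxy_sat, din, tp):
    """Unscaled trophic index: log10 of the product of the four components."""
    return math.log10(chla * abs_dev_oxy_sat * din * tp)

def trix(chla, abs_dev_oxy_sat, din, tp):
    """Scaled TRIX on an absolute trophic scale."""
    return (untrix(chla, abs_dev_oxy_sat, din, tp) + 1.5) / 1.2

print(round(trix(1.0, 10.0, 10.0, 10.0), 2))  # 3.75
```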
Zhou, Xi-Yin; Lei, Kun; Meng, Wei
2017-09-01
Coastal zones are regions of high population density and economic intensity all over the world, and coastal habitat supports the sustainable development of human society. Accurate assessment of coastal habitat degradation is an essential prerequisite for coastal zone protection. In this study, an integrated framework for coastal habitat degradation assessment, including land use classification, habitat classification and zoning, an evaluation criterion of coastal habitat degradation, and a coastal habitat degradation index, has been established for better regional coastal habitat assessment. Through the establishment of a detailed three-class land use classification, fine landscape change is revealed; the evaluation criterion of coastal habitat degradation, through internal comparison based on the results of habitat classification and zoning, can indicate the levels of habitat degradation and distinguish the intensity of human disturbances in different habitat subareas under the same habitat classification. Finally, the results of coastal habitat degradation assessment are expressed through a coastal habitat degradation index (CHI). A case study of the framework is carried out in the Circum-Bohai-Sea-Coast, China, and the main results are as follows: (1) The accuracies of all land use classes are above 90%, which indicates a satisfactory classification map. (2) The Circum-Bohai-Sea-Coast is divided into 3 kinds of habitats and 5 subareas. (3) The levels of coastal habitat degradation differ significantly among the five subareas of the Circum-Bohai-Sea-Coast. The whole Circum-Bohai-Sea-Coast is generally in a degraded state according to area weighting of each habitat subarea. This assessment framework of coastal habitat degradation can characterize the land use change trend, enable better coastal habitat degradation assessment, reveal the habitat conservation tendency and distinguish the intensity of human disturbances.
Furthermore, it would support accurate coastal zone protection measures for specific coastal areas. Copyright © 2017 Elsevier B.V. All rights reserved.
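The area weighting used above to judge the state of the whole coast can be sketched as a simple area-weighted mean of subarea CHI values. Subarea areas and index values below are invented:

```python
# Hedged sketch: area-weighted aggregation of a degradation index across
# habitat subareas, as described for the whole Circum-Bohai-Sea-Coast.
# Subarea areas (km^2) and CHI values are invented.
def area_weighted_index(subareas):
    """subareas: list of (area_km2, chi) pairs -> area-weighted mean CHI."""
    total_area = sum(a for a, _ in subareas)
    return sum(a * chi for a, chi in subareas) / total_area

# (100*0.75 + 300*0.25) / 400 = 0.375
chi = area_weighted_index([(100.0, 0.75), (300.0, 0.25)])
print(chi)  # 0.375
```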
Tanantong, Tanatorn; Nantajeewarawat, Ekawit; Thiemjarus, Surapa
2015-02-09
False alarms in cardiac monitoring affect the quality of medical care, impacting on both patients and healthcare providers. In continuous cardiac monitoring using wireless Body Sensor Networks (BSNs), the quality of ECG signals can deteriorate owing to several factors, e.g., noise, low battery power, and network transmission problems, often resulting in high false alarm rates. In addition, body movements arising from activities of daily living (ADLs) can also create false alarms. This paper presents a two-phase framework for false arrhythmia alarm reduction in continuous cardiac monitoring, using signals from an ECG sensor and a 3D accelerometer. In the first phase, classification models constructed using machine learning algorithms are used for labeling input signals. ECG signals are labeled with heartbeat types and signal quality levels, while 3D acceleration signals are labeled with ADL types. In the second phase, a rule-based expert system is used for combining classification results in order to determine whether arrhythmia alarms should be accepted or suppressed. The proposed framework was validated on datasets acquired using BSNs and the MIT-BIH arrhythmia database. For the BSN dataset, acceleration and ECG signals were collected from 10 young and 10 elderly subjects while they were performing ADLs. The framework reduced the false alarm rate from 9.58% to 1.43% in our experimental study, showing that it can potentially assist physicians in interpreting the vast amounts of data acquired from wireless sensors and enhance the performance of continuous cardiac monitoring.
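The second, rule-based phase can be sketched as a function that combines the three classifier outputs to accept or suppress an alarm. The rules and label names below are illustrative stand-ins, not the paper's actual rule set:

```python
# Hedged sketch of the second phase: rule-based combination of heartbeat
# label, signal quality, and ADL type. Rules and labels are invented.
def decide_alarm(beat_label, signal_quality, activity):
    # Suppress when the ECG signal itself is unreliable.
    if signal_quality == "poor":
        return "suppress"
    # Suppress when vigorous movement plausibly explains an artifact.
    if activity in {"running", "stair_climbing"} and beat_label == "noisy":
        return "suppress"
    # Otherwise accept alarms for genuinely abnormal beats.
    return "accept" if beat_label != "normal" else "no_alarm"

print(decide_alarm("pvc", "good", "sitting"))    # accept
print(decide_alarm("noisy", "good", "running"))  # suppress
```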
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based task-specific motion decoding model. Task-specific models manage to outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can be readily applied to a series of EMG-based interfaces that must operate in real time, providing data-driven capabilities for multiclass problems that occur in complex everyday-life environments.
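The classify-then-decode scheme can be sketched as a classifier whose output gates which task-specific model is applied. The trivial linear "decoders" below are placeholders for the paper's EMG decoding models, and the feature/label names are invented:

```python
# Hedged sketch: a classifier output selects a task-specific decoder,
# mirroring the classify-then-regress scheme. Decoders and labels invented.
class GatedDecoder:
    def __init__(self, classify, decoders):
        self.classify = classify  # features -> task label
        self.decoders = decoders  # task label -> regressor

    def decode(self, features):
        task = self.classify(features)
        return task, self.decoders[task](features)

gd = GatedDecoder(
    classify=lambda f: "reach_high" if f[0] > 0 else "reach_low",
    decoders={"reach_high": lambda f: 2.0 * f[1],
              "reach_low": lambda f: 0.5 * f[1]},
)
print(gd.decode([1.0, 3.0]))  # ('reach_high', 6.0)
```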
Sun, Xiyang; Miao, Jiacheng; Wang, You; Luo, Zhiyuan; Li, Guang
2017-01-01
An estimate of the reliability of prediction in applications of the electronic nose is essential, but has not received enough attention. An algorithm framework called conformal prediction is introduced in this work for discriminating different kinds of ginsengs with a home-made electronic nose instrument. A nonconformity measure based on k-nearest neighbors (KNN) is implemented separately as the underlying algorithm of conformal prediction. In offline mode, the conformal predictor achieves a classification rate of 84.44% based on 1NN and 80.63% based on 3NN, which is better than that of simple KNN. In addition, it provides an estimate of reliability for each prediction. In online mode, the validity of predictions is guaranteed, which means that the error rate of region predictions never exceeds the significance level set by the user. The potential of this framework for detecting borderline examples and outliers in the application of E-nose is also investigated. The results show that conformal prediction is a promising framework for the application of the electronic nose to make predictions with reliability and validity. PMID:28805721
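A minimal sketch of conformal prediction with a 1NN-style nonconformity score (distance to the nearest same-class point divided by distance to the nearest other-class point) follows, in split-conformal form with 1-D toy data rather than E-nose measurements; the paper's exact nonconformity definition may differ:

```python
# Hedged sketch: split conformal prediction with a 1NN nonconformity score.
# A label enters the prediction region when its p-value exceeds the
# significance level. Data are 1-D toy values, not E-nose signals.
def nn_score(x, label, train):
    same = min(abs(x - t) for t, l in train if l == label)
    diff = min(abs(x - t) for t, l in train if l != label)
    return same / diff

def conformal_region(x, train, cal, significance):
    region = set()
    for y in {l for _, l in train}:
        s = nn_score(x, y, train)
        cal_scores = [nn_score(cx, cy, train) for cx, cy in cal]
        p = (sum(c >= s for c in cal_scores) + 1) / (len(cal_scores) + 1)
        if p > significance:
            region.add(y)
    return region

train = [(0.0, "A"), (0.1, "A"), (1.0, "B"), (1.1, "B")]
cal = [(0.05, "A"), (1.05, "B")]
print(conformal_region(0.02, train, cal, significance=0.4))  # {'A'}
```

With more calibration points the p-values become finer-grained; here the tiny calibration set limits how small a p-value can get.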
Influence of leaching conditions for ecotoxicological classification of ash.
Stiernström, S; Enell, A; Wik, O; Hemström, K; Breitholtz, M
2014-02-01
The Waste Framework Directive (WFD; 2008/98/EC) states that classification of hazardous ecotoxicological properties of wastes (i.e. criteria H-14) should be based on the Community legislation on chemicals (i.e. CLP Regulation 1272/2008). However, harmonizing the waste and chemical classification may involve drastic changes related to choice of leaching tests as compared to e.g. the current European standard for ecotoxic characterization of waste (CEN 14735). The primary aim of the present study was therefore to evaluate the influence of leaching conditions, i.e. pH (inherent pH (∼10) and pH 7), liquid to solid (L/S) ratio (10 and 1000 L/kg) and particle size (<4 mm, <1 mm, and <0.125 mm), for subsequent chemical analysis and ecotoxicity testing in relation to classification of municipal waste incineration bottom ash. The hazard potential, based on either comparisons between element levels in leachate and literature toxicity data or ecotoxicity testing of the leachates, was overall significantly higher at low particle size (<0.125 mm) as compared to particle fractions <1 mm and <4 mm, at pH 10 as compared to pH 7, and at L/S 10 as compared to L/S 1000. These results show that the choice of leaching conditions is crucial for H-14 classification of ash and must be carefully considered in deciding on future guidance procedures in Europe. Copyright © 2013 Elsevier Ltd. All rights reserved.
Epileptic seizure detection in EEG signal using machine learning techniques.
Jaiswal, Abeg Kumar; Banka, Haider
2018-03-01
Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross validation. The classification results of the proposed approaches have been compared with those of some existing techniques proposed in the literature to establish the claim.
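The subpattern step that distinguishes SpPCA from whole-pattern PCA can be sketched as follows. Only the partitioning is shown; PCA would then be applied to each subpattern position across epochs. Epoch values are toy numbers, not EEG samples:

```python
# Hedged sketch of the subpattern step in SpPCA: each signal epoch is split
# into equal-length contiguous subpatterns; PCA would then run on each
# subpattern position across epochs. Values are invented.
def to_subpatterns(epoch, n_sub):
    """Split one epoch (a flat list) into n_sub contiguous subpatterns."""
    size = len(epoch) // n_sub
    return [epoch[i * size:(i + 1) * size] for i in range(n_sub)]

epoch = [1, 2, 3, 4, 5, 6]
print(to_subpatterns(epoch, 3))  # [[1, 2], [3, 4], [5, 6]]
```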
Cluster categorization of urban roads to optimize their noise monitoring.
Zambon, G; Benocci, R; Brambilla, G
2016-01-01
Road traffic in urban areas is recognized to be associated with urban mobility and public health, and it is often the main source of noise pollution. Lately, noise maps have been considered a powerful tool to estimate population exposure to environmental noise, but they need to be validated by measured noise data. The project Dynamic Acoustic Mapping (DYNAMAP), co-funded in the framework of the LIFE 2013 program, aims to develop a statistically based method to optimize the choice and the number of monitoring sites and to automate the noise mapping update using the data retrieved from a low-cost monitoring network. The first objective should improve on spatial sampling based on the legislative road classification, as this classification reflects mainly the geometrical characteristics of the road rather than its noise emission. The present paper describes the statistical approach of the methodology under development and the results of its preliminary application to a limited sample of roads in the city of Milan. The resulting categorization of roads, based on clustering the 24-h hourly LAeq,h values, looks promising for optimizing the spatial sampling of noise monitoring toward a more efficient description of the noise pollution due to complex urban road networks than that based on the legislative road classification.
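The clustering of hourly noise profiles can be sketched with a plain Lloyd's k-means; the abstract does not state which clustering algorithm is used, so this is illustrative only, and the two-hour "profiles" below are invented stand-ins for real 24-dimensional LAeq,h data:

```python
# Hedged sketch: Lloyd's k-means on hourly noise-level profiles, standing in
# for the clustering of 24-h LAeq,h patterns. Profiles and centers invented.
def kmeans(profiles, centers, iters=20):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in profiles:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)  # assign to nearest center
        centers = [
            [sum(col) / len(col) for col in zip(*g)] if g else c
            for g, c in zip(groups, centers)   # recompute centroids
        ]
    return centers, groups

profiles = [[60, 55], [61, 54], [70, 68], [71, 69]]
centers, groups = kmeans(profiles, centers=[[60, 55], [70, 68]])
print(centers)  # quiet-road vs busy-road centroids
```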
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
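The two mappings described above can be sketched as two dictionary lookups: call number to shelf identifier, then shelf identifier to current network location. All call numbers, shelf identifiers, and URLs below are invented:

```python
# Hedged sketch of the two-step resolution: a location-independent call
# number maps to a virtual-shelf identifier, which a location directory
# then maps to a network address. All identifiers and URLs are invented.
CALL_TO_SHELF = {"QS 504 L735": "shelf:anatomy"}
DIRECTORY = {"shelf:anatomy": "http://host-a.example.org/anatomy"}

def resolve(call_number):
    shelf = CALL_TO_SHELF[call_number]  # mapping 1: call number -> shelf
    return DIRECTORY[shelf]             # mapping 2: shelf -> location

print(resolve("QS 504 L735"))
# Relocating a server only requires updating DIRECTORY, not call numbers.
DIRECTORY["shelf:anatomy"] = "http://host-b.example.org/anatomy"
print(resolve("QS 504 L735"))
```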
Lauber, Chris
2012-01-01
Virus taxonomy has received little attention from the research community despite its broad relevance. In an accompanying paper (C. Lauber and A. E. Gorbalenya, J. Virol. 86:3890–3904, 2012), we have introduced a quantitative approach to hierarchically classify viruses of a family using pairwise evolutionary distances (PEDs) as a measure of genetic divergence. When applied to the six most conserved proteins of the Picornaviridae, it clustered 1,234 genome sequences in groups at three hierarchical levels (to which we refer as the “GENETIC classification”). In this study, we compare the GENETIC classification with the expert-based picornavirus taxonomy and outline differences in the underlying frameworks regarding the relation of virus groups and genetic diversity that represent, respectively, the structure and content of a classification. To facilitate the analysis, we introduce two novel diagrams. The first connects the genetic diversity of taxa to both the PED distribution and the phylogeny of picornaviruses. The second depicts a classification and the accommodated genetic diversity in a standardized manner. Generally, we found striking agreement between the two classifications on species and genus taxa. A few disagreements concern the species Human rhinovirus A and Human rhinovirus C and the genus Aphthovirus, which were split in the GENETIC classification. Furthermore, we propose a new supergenus level and universal, level-specific PED thresholds, not reached yet by many taxa. Since the species threshold is approached mostly by taxa with large sampling sizes and those infecting multiple hosts, it may represent an upper limit on divergence, beyond which homologous recombination in the six most conserved genes between two picornaviruses might not give viable progeny. PMID:22278238
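Grouping sequences whose pairwise evolutionary distance (PED) falls below a level-specific threshold, as in the GENETIC classification above, amounts to forming connected components under a distance cutoff. A sketch with invented distances and virus names:

```python
# Hedged sketch: single-linkage grouping under a PED threshold, a simplified
# stand-in for the hierarchical GENETIC classification. Names and distances
# are invented toy values.
def cluster_by_threshold(names, dist, threshold):
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if dist[(a, b)] < threshold:  # link pairs below the cutoff
                parent[find(a)] = find(b)
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return sorted(groups.values())

names = ["v1", "v2", "v3"]
dist = {("v1", "v2"): 0.1, ("v1", "v3"): 0.8, ("v2", "v3"): 0.7}
print(cluster_by_threshold(names, dist, threshold=0.3))  # [['v1', 'v2'], ['v3']]
```

Raising the threshold merges groups, which is how level-specific thresholds yield nested species/genus/supergenus ranks.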
Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.
2011-01-01
The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of both synthetic multi-like text samples and real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures. PMID:22164106
On Developing a Taxonomy for Multidisciplinary Design Optimization: A Decision-Based Perspective
NASA Technical Reports Server (NTRS)
Lewis, Kemper; Mistree, Farrokh
1995-01-01
In this paper, we approach MDO from a Decision-Based Design (DBD) perspective and explore classification schemes for designing complex systems and processes. Specifically, we focus on decisions, which are only a small portion of the Decision Support Problem (DSP) Technique, our implementation of DBD. We map coupled nonhierarchical and hierarchical representations from the DSP Technique into the Balling-Sobieski (B-S) framework (Balling and Sobieszczanski-Sobieski, 1994), and integrate domain-independent linguistic terms to complete our taxonomy. Applications of DSPs to the design of complex, multidisciplinary systems include passenger aircraft, ships, damage tolerant structural and mechanical systems, and thermal energy systems. In this paper we show that the Balling-Sobieski framework is consistent with that of the Decision Support Problem Technique through the use of linguistic entities to describe the same types of formulations. We show that the underlying linguistics of the solution approaches are the same and can be coalesced into a homogeneous framework on which to base MDO research, application, and technology. We introduce, in the Balling-Sobieski framework, examples of multidisciplinary design, namely, aircraft, damage tolerant structural and mechanical systems, and thermal energy systems.
Jiang, Guoqian; Wang, Chen; Zhu, Qian; Chute, Christopher G
2013-01-01
Knowledge-driven text mining is becoming an important research area for identifying pharmacogenomics target genes. However, few such studies have focused on the pharmacogenomics targets of adverse drug events (ADEs). The objective of the present study is to build a framework of knowledge integration and discovery that aims to support pharmacogenomics target prediction of ADEs. We integrate a semantically annotated literature corpus, Semantic MEDLINE, with a semantically coded ADE knowledge base known as ADEpedia using a semantic web based framework. We developed a knowledge discovery approach combining network analysis of a protein-protein interaction (PPI) network and a gene functional classification approach. We performed a case study of drug-induced long QT syndrome to demonstrate the usefulness of the framework in predicting potential pharmacogenomics targets of ADEs.
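One simple network-analysis ingredient of such a framework, degree-based ranking of candidates in a PPI network, can be sketched as follows. The gene symbols are real long-QT-associated genes, but the edge list is invented for illustration; the paper's actual network measures may differ:

```python
# Hedged sketch: ranking candidate genes by degree in a PPI network, one
# elementary network-analysis step of the kind combined with functional
# classification above. Edges are invented.
def degree_ranking(edges):
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(degree, key=lambda g: -degree[g])

edges = [("KCNH2", "KCNE1"), ("KCNH2", "SCN5A"), ("KCNE1", "SCN5A"),
         ("KCNH2", "KCNQ1")]
print(degree_ranking(edges))  # KCNH2 ranks first (degree 3)
```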
AI User Support System for SAP ERP
NASA Astrophysics Data System (ADS)
Vlasov, Vladimir; Chebotareva, Victoria; Rakhimov, Marat; Kruglikov, Sergey
2017-10-01
An intelligent system for SAP ERP user support is proposed in this paper. It enables automatic replies to users' requests for support, saving time for problem analysis and resolution and improving responsiveness for end users. The system is based on an ensemble of machine learning algorithms for multiclass text classification, providing efficient question understanding, and a special framework for evidence retrieval, providing the best answer derivation.
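The ensemble of multiclass text classifiers can be sketched as majority voting over member predictions; the keyword "classifiers" below are trivial placeholders for trained models, and the category names are invented:

```python
# Hedged sketch: majority voting over several text classifiers, one common
# way to build a multiclass ensemble. The keyword rules stand in for real
# trained models; labels are invented.
from collections import Counter

def ensemble_predict(classifiers, text):
    votes = Counter(clf(text) for clf in classifiers)
    return votes.most_common(1)[0][0]  # most frequent predicted label

classifiers = [
    lambda t: "login_issue" if "password" in t else "other",
    lambda t: "login_issue" if "login" in t else "other",
    lambda t: "printing" if "printer" in t else "other",
]
print(ensemble_predict(classifiers, "cannot login, password rejected"))
```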
Mete, Mutlu; Sakoglu, Unal; Spence, Jeffrey S; Devous, Michael D; Harris, Thomas S; Adinoff, Bryon
2016-10-06
Neuroimaging studies have yielded significant advances in the understanding of neural processes relevant to the development and persistence of addiction. However, these advances have not been explored extensively for diagnostic accuracy in human subjects. The aim of this study was to develop a statistical approach, using a machine learning framework, to correctly classify brain images of cocaine-dependent participants and healthy controls. In this study, a framework suitable for educing potential brain regions that differed between the two groups was developed and implemented. Single Photon Emission Computerized Tomography (SPECT) images obtained during rest or a saline infusion in three cohorts of 2-4 week abstinent cocaine-dependent participants (n = 93) and healthy controls (n = 69) were used to develop a classification model. An information theoretic-based feature selection algorithm was first conducted to reduce the number of voxels. A density-based clustering algorithm was then used to form spatially connected voxel clouds in three-dimensional space. A statistical classifier, the Support Vector Machine (SVM), was then used for participant classification. Statistically insignificant voxels of spatially connected brain regions were removed iteratively and classification accuracy was reported through the iterations. The voxel-based analysis identified 1,500 spatially connected voxels in 30 distinct clusters after a grid search in SVM parameters. Participants were successfully classified with 0.88 and 0.89 F-measure accuracies in 10-fold cross validation (10xCV) and leave-one-out (LOO) approaches, respectively. Sensitivity and specificity were 0.90 and 0.89 for LOO; 0.83 and 0.83 for 10xCV. Many of the 30 selected clusters are highly relevant to the addictive process, including regions relevant to cognitive control, default mode network related self-referential thought, behavioral inhibition, and contextual memories.
Relative hyperactivity and hypoactivity of regional cerebral blood flow in brain regions in cocaine-dependent participants are presented with corresponding level of significance. The SVM-based approach successfully classified cocaine-dependent and healthy control participants using voxels selected with information theoretic-based and statistical methods from participants' SPECT data. The regions found in this study align with brain regions reported in the literature. These findings support the future use of brain imaging and SVM-based classifier in the diagnosis of substance use disorders and furthering an understanding of their underlying pathology.
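The information theoretic-based feature selection step can be sketched as ranking features by mutual information with the class label, shown here on binarized toy data rather than SPECT voxel values:

```python
# Hedged sketch: mutual information between a (binarized) feature and the
# class label, the information-theoretic feature-scoring idea described
# above. Data are toy values, not SPECT intensities.
import math

def mutual_info(feature, labels):
    n = len(labels)
    mi = 0.0
    for f in set(feature):
        for y in set(labels):
            p_xy = sum(a == f and b == y for a, b in zip(feature, labels)) / n
            p_x = feature.count(f) / n
            p_y = labels.count(y) / n
            if p_xy > 0:
                mi += p_xy * math.log2(p_xy / (p_x * p_y))
    return mi

labels = [1, 1, 0, 0]
informative = [1, 1, 0, 0]  # perfectly tracks the label -> MI = 1 bit
noisy = [1, 0, 1, 0]        # independent of the label  -> MI = 0
print(mutual_info(informative, labels), mutual_info(noisy, labels))  # 1.0 0.0
```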
LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques
NASA Technical Reports Server (NTRS)
Thompson, David E.; Thirumalainambi, Rajkumar
2006-01-01
This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is toward comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA testbeds, synthetic data systems, and simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.
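The plug-and-play component architecture can be sketched as a run-time registry in which analysis methods register under a name and can be swapped or removed. Component names and behavior below are invented:

```python
# Hedged sketch of a run-time component registry of the kind the Framework
# is described as using. Component names and behavior are invented.
class ComponentRegistry:
    def __init__(self):
        self._components = {}

    def register(self, name, component):
        self._components[name] = component

    def deregister(self, name):
        self._components.pop(name, None)

    def run(self, name, *args):
        return self._components[name](*args)

registry = ComponentRegistry()
registry.register("extract", lambda signal: max(signal))
print(registry.run("extract", [0.1, 0.9, 0.4]))  # 0.9
registry.deregister("extract")  # components can be removed at run time
```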
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information from hyperspectral images is one of the most important methods for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which correctly extracts spectral-spatial information from hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNN has shown its robustness to distortion, it cannot extract features at different scales through the traditional pooling layer, which has only a single pooling window size. Hence, spatial pyramid pooling (SPP) is introduced into three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
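The key property of spatial pyramid pooling, a fixed-length output from variable-sized inputs, can be sketched in 2-D with a single channel (the paper applies the idea within 3-D convolutions; this is a simplified illustration):

```python
# Hedged sketch: max-pooling over an n x n grid per pyramid level yields a
# fixed-length vector regardless of input size. 2-D single-channel toy case.
def spp_max(feature_map, levels=(1, 2)):
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:  # one n x n pooling grid per pyramid level
        for i in range(n):
            for j in range(n):
                rows = range(i * h // n, max((i + 1) * h // n, i * h // n + 1))
                cols = range(j * w // n, max((j + 1) * w // n, j * w // n + 1))
                out.append(max(feature_map[r][c] for r in rows for c in cols))
    return out

small = [[1, 2], [3, 4]]
large = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(len(spp_max(small)), len(spp_max(large)))  # prints: 5 5 (= 1*1 + 2*2)
```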
Convex formulation of multiple instance learning from positive and unlabeled bags.
Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi
2018-05-24
Multiple instance learning (MIL) is a variation of traditional supervised learning in which the data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve the MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
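The PU setting above rests on the standard unbiased PU risk estimator: given the class prior, a classifier's risk can be estimated from positive and unlabeled samples alone by treating unlabeled data as negative and then correcting for the positives hidden inside it. The sketch below is our own toy illustration with a 1-D threshold score and squared loss; it is not the paper's convex MIL objective, and all names are assumptions.

```python
def pu_risk(score, pos, unl, prior, loss):
    """Unbiased PU risk estimate: treat unlabeled data as negative,
    then correct for hidden positives via the class prior."""
    r_p_pos = sum(loss(score(x), +1) for x in pos) / len(pos)
    r_p_neg = sum(loss(score(x), -1) for x in pos) / len(pos)
    r_u_neg = sum(loss(score(x), -1) for x in unl) / len(unl)
    return prior * r_p_pos + r_u_neg - prior * r_p_neg

# Squared loss and a 1-D threshold score as a toy example.
loss = lambda z, y: (1 - y * z) ** 2
score = lambda x: 1.0 if x > 0 else -1.0
pos = [0.5, 1.2, 0.8]          # labeled positives
unl = [-1.0, -0.3, 0.9, -0.7]  # unlabeled mix of both classes
risk = pu_risk(score, pos, unl, prior=0.4, loss=loss)
```

Note that the correction term can drive the estimate negative on finite samples, which is one practical issue convex and non-negative PU formulations address.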
Schwaibold, M; Schöchlin, J; Bolz, A
2002-01-01
For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.
Self-adjoint realisations of the Dirac-Coulomb Hamiltonian for heavy nuclei
NASA Astrophysics Data System (ADS)
Gallone, Matteo; Michelangeli, Alessandro
2018-02-01
We derive a classification of the self-adjoint extensions of the three-dimensional Dirac-Coulomb operator in the critical regime of the Coulomb coupling. Our approach is based solely upon the Kreĭn-Višik-Birman extension scheme, or equivalently on Grubb's universal classification theory, as opposed to previous works within the standard von Neumann framework. This lets the boundary condition of self-adjointness emerge, neatly and intrinsically, as a multiplicative constraint between the regular and singular parts of the functions in the domain of the extension, with the multiplicative constant also giving immediate information on the invertibility property and on the resolvent and spectral gap of the extension.
Representation learning via Dual-Autoencoder for recommendation.
Zhuang, Fuzhen; Zhang, Zhiqiang; Qian, Mingda; Shi, Chuan; Xie, Xing; He, Qing
2017-06-01
Recommendation has attracted a vast amount of attention and research in recent decades. Most previous works employ matrix factorization techniques to learn the latent factors of users and items, and many subsequent works incorporate external information, e.g., users' social relationships and items' attributes, to improve recommendation performance under the matrix factorization framework. However, matrix factorization methods may not make full use of the limited information in rating or check-in matrices and can achieve unsatisfactory results. Recently, deep learning has proven able to learn good representations in natural language processing, image classification, and so on. Along this line, we propose a new representation learning framework called Recommendation via Dual-Autoencoder (ReDa). In this framework, we simultaneously learn new hidden representations of users and items using autoencoders, and minimize the deviations of the training data from the learnt representations of users and items. Based on this framework, we develop a gradient descent method to learn the hidden representations. Extensive experiments conducted on several real-world data sets demonstrate the effectiveness of our proposed method compared with state-of-the-art matrix factorization based methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
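The two spectral indices named above have standard band-ratio definitions; a minimal sketch with illustrative reflectance values follows (the paper's thresholds, band calibration, and fragmentation correction are not reproduced, and the sample values are ours):

```python
def ndwi(green, nir):
    """Normalized difference water index (McFeeters): high over water,
    since water reflects green light but strongly absorbs near-infrared."""
    return (green - nir) / (green + nir)

def evi(nir, red, blue):
    """Enhanced vegetation index: high over dense vegetation; the blue
    band and constants correct for aerosol and canopy background effects."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Toy surface reflectances: water absorbs NIR, vegetation reflects it.
water = ndwi(green=0.30, nir=0.05)        # positive -> likely water
veg = evi(nir=0.45, red=0.08, blue=0.04)  # high -> likely vegetation
```

In practice these indices are computed per pixel over whole bands and then thresholded to produce the object masks that change detection compares.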
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
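The first fusion model above, per-pixel voting with an "unclassified" fall-back where no decision can be reached, can be sketched as follows; the Markov random field regularization and inpainting steps are omitted, and the function and class names are ours:

```python
from collections import Counter

def vote_fuse(label_maps, unknown=None):
    """Fuse co-registered classification maps by per-pixel majority vote.
    Ties yield `unknown`, marking pixels deferred to MRF-based inpainting."""
    fused = []
    for pixel_labels in zip(*label_maps):
        counts = Counter(pixel_labels).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            fused.append(unknown)        # no majority: defer the decision
        else:
            fused.append(counts[0][0])
    return fused

# Three overlapping side-scan passes over the same four pixels.
maps = [["sand", "rock", "sand", "mud"],
        ["sand", "rock", "mud",  "rock"],
        ["sand", "mud",  "sand", "mud"]]
print(vote_fuse(maps))  # ['sand', 'rock', 'sand', 'mud']
```

The paper's second model replaces this unweighted vote with source-reliability terms inside a probabilistic model, so more trustworthy passes carry more weight.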
A robust probabilistic collaborative representation based classification for multimodal biometrics
NASA Astrophysics Data System (ADS)
Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli
2018-04-01
Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, enhancing the performance of unimodal biometric systems based on a single feature is difficult. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities, and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves the recognition accuracy compared to unimodal biometrics.
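The local binary pattern (LBP) descriptor underlying the feature extraction above compares each pixel with its eight neighbours and packs the comparisons into one byte; in the block variant, histograms of these codes are computed per block and concatenated. A minimal per-pixel sketch (pure Python, bit ordering and names are our choices):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code of pixel (r, c): each
    neighbour >= centre contributes one bit, clockwise from top-left."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[9, 9, 1],
       [1, 5, 1],
       [1, 1, 9]]
# Neighbours 9,9,1,1,9,1,1,1 -> bits 1,1,0,0,1,0,0,0 -> 1 + 2 + 16 = 19
print(lbp_code(img, 1, 1))  # 19
```

Because each code depends only on relative intensities, the descriptor is robust to monotonic illumination changes, which is why it is popular for face and fingerprint texture.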
A SVM framework for fault detection of the braking system in a high speed train
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Yan-Fu; Zio, Enrico
2017-03-01
In April 2015, the number of operating High Speed Trains (HSTs) in the world has reached 3603. An efficient, effective and very reliable braking system is evidently very critical for trains running at a speed around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least square SVM, in which a higher cost is assigned to the error of classification of faulty conditions than the error of classification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied for the fault detection of braking systems in HST: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
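The cost-sensitive idea at the heart of the framework above, that misclassifying a rare faulty condition must cost more than misclassifying a normal one, can be illustrated with a simple asymmetric error count. This is a generic sketch of cost-sensitive weighting, not the paper's modified least square SVM objective; the cost values and names are ours:

```python
def weighted_error(y_true, y_pred, fault_cost=10.0, normal_cost=1.0):
    """Cost-sensitive error: missing a rare fault (y=1) costs far more
    than wrongly flagging a normal sample (y=0)."""
    cost = 0.0
    for yt, yp in zip(y_true, y_pred):
        if yt == yp:
            continue
        cost += fault_cost if yt == 1 else normal_cost
    return cost

# One missed fault outweighs several false alarms on normal data.
y_true = [0, 0, 0, 0, 1]
miss_fault = weighted_error(y_true, [0, 0, 0, 0, 0])    # 10.0
false_alarms = weighted_error(y_true, [1, 1, 1, 0, 1])  # 3.0
assert miss_fault > false_alarms
```

Optimizing a decision boundary against such a cost pushes the classifier to favour fault recall even when faults are a tiny minority of the data.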
Designing a robust activity recognition framework for health and exergaming using wearable sensors.
Alshurafa, Nabil; Xu, Wenyao; Liu, Jason J; Huang, Ming-Chun; Mortazavi, Bobak; Roberts, Christian K; Sarrafzadeh, Majid
2014-09-01
Detecting human activity independent of intensity is essential in many applications, primarily in calculating metabolic equivalent rates and extracting human context awareness. Many classifiers that train on an activity at a subset of intensity levels fail to recognize the same activity at other intensity levels. This demonstrates a weakness in the underlying classification method. Training a classifier for an activity at every intensity level is also not practical. In this paper, we tackle a novel intensity-independent activity recognition problem where the class labels exhibit large variability, the data are of high dimensionality, and clustering algorithms are necessary. We propose a new robust stochastic approximation framework for enhanced classification of such data. Experiments are reported using two clustering techniques, K-Means and Gaussian Mixture Models. The stochastic approximation algorithm consistently outperforms other well-known classification schemes, validating the use of our proposed clustered data representation. We verify the motivation of our framework in two applications that benefit from intensity-independent activity recognition. The first application shows how our framework can be used to enhance energy expenditure calculations. The second application is a novel exergaming environment aimed at using games to reward physical activity performed throughout the day, to encourage a healthy lifestyle.
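One of the two clustering techniques used above, K-Means, can be illustrated with a bare-bones 1-D version: assign each point to its nearest centroid, then move each centroid to the mean of its members, and repeat. This is a generic sketch of the algorithm, not the paper's clustered data representation; the 1-D simplification and names are ours.

```python
def kmeans_1d(points, centroids, iters=20):
    """Plain K-Means on scalars: alternate nearest-centroid assignment
    and centroid-to-cluster-mean updates."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda k: abs(p - centroids[k]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[k]
                     for k, c in enumerate(clusters)]
    return centroids, clusters

# Two intensity levels of the same activity collapse into two clusters.
pts = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
cents, clusters = kmeans_1d(pts, centroids=[0.0, 6.0])
```

Grouping intensity levels into clusters like this is what lets a downstream classifier treat "the same activity at a different intensity" as one class.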
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determines the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first stage. C-learning is a direct optimization method that targets decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patient characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy for positive samples and the poor overall classification caused by the unbalanced sample data of MicroRNA (miRNA) targets, we propose in this paper a support vector machine (SVM)-integration of under-sampling and weight (IUSM) algorithm, an under-sampling-based ensemble learning algorithm. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming to reduce the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the prediction of the miRNA target integrated classifier is achieved by combining multiple weak classifiers through a voting mechanism. The experiments revealed that SVM-IUSM, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy for positive targets and the overall classification effect, but also enhance the generalization ability of the miRNA target classifier.
CHANGING OUR DIAGNOSTIC PARADIGM: MOVEMENT SYSTEM DIAGNOSTIC CLASSIFICATION
Kamonseki, Danilo H.; Staker, Justin L.; Lawrence, Rebekah L.; Braman, Jonathan P.
2017-01-01
Proper diagnosis is a first step in applying best available treatments, and prognosticating outcomes for clients. Currently, the majority of musculoskeletal diagnoses are classified according to pathoanatomy. However, the majority of physical therapy treatments are applied toward movement system impairments or pain. While advocated within the physical therapy profession for over thirty years, diagnostic classification within a movement system framework has not been uniformly developed or adopted. We propose a basic framework and rationale for application of a movement system diagnostic classification for atraumatic shoulder pain conditions, as a case for the broader development of movement system diagnostic labels. Shifting our diagnostic paradigm has potential to enhance communication, improve educational efficiency, facilitate research, directly link to function, improve clinical care, and accelerate preventive interventions. PMID:29158950
Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.
Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann
2017-04-01
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. 
Experimental results demonstrate the significant performance gains of the proposed framework, which ranked first in classification and second in segmentation among 25 and 28 participating teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Classification of BRCA1 missense variants of unknown clinical significance
Phelan, C; Dapic, V; Tice, B; Favis, R; Kwan, E; Barany, F; Manoukian, S; Radice, P; van der Luijt, R B; van Nesselrooij, B P M; Chenevix-Trench, G; kConFab; Caldes, T; de La Hoya, M; Lindquist, S; Tavtigian, S; Goldgar, D; Borg, A; Narod, S; Monteiro, A
2005-01-01
Background: BRCA1 is a tumour suppressor with pleiotropic actions. Germline mutations in BRCA1 are responsible for a large proportion of breast–ovarian cancer families. Several missense variants have been identified throughout the gene but because of lack of information about their impact on the function of BRCA1, predictive testing is not always informative. Classification of missense variants into deleterious/high risk or neutral/low clinical significance is essential to identify individuals at risk. Objective: To investigate a panel of missense variants. Methods and results: The panel was investigated in a comprehensive framework that included (1) a functional assay based on transcription activation; (2) segregation analysis and a method of using incomplete pedigree data to calculate the odds of causality; (3) a method based on interspecific sequence variation. It was shown that the transcriptional activation assay could be used as a test to characterise mutations in the carboxy-terminus region of BRCA1 encompassing residues 1396–1863. Thirteen missense variants (H1402Y, L1407P, H1421Y, S1512I, M1628T, M1628V, T1685I, G1706A, T1720A, A1752P, G1788V, V1809F, and W1837R) were specifically investigated. Conclusions: While individual classification schemes for BRCA1 alleles still present limitations, a combination of several methods provides a more powerful way of identifying variants that are causally linked to a high risk of breast and ovarian cancer. The framework presented here brings these variants nearer to clinical applicability. PMID:15689452
Mehrang, Saeed; Pietilä, Julia; Korhonen, Ilkka
2018-02-22
Wrist-worn sensors have better compliance for activity monitoring compared to hip, waist, ankle or chest positions. However, wrist-worn activity monitoring is challenging due to the wide degree of freedom for the hand movements, as well as similarity of hand movements in different activities such as varying intensities of cycling. To strengthen the ability of wrist-worn sensors in detecting human activities more accurately, motion signals can be complemented by physiological signals such as optical heart rate (HR) based on photoplethysmography. In this paper, an activity monitoring framework using an optical HR sensor and a triaxial wrist-worn accelerometer is presented. We investigated a range of daily life activities including sitting, standing, household activities and stationary cycling with two intensities. A random forest (RF) classifier was exploited to detect these activities based on the wrist motions and optical HR. The highest overall accuracy of 89.6 ± 3.9% was achieved with a forest of a size of 64 trees and 13-s signal segments with 90% overlap. Removing the HR-derived features decreased the classification accuracy of high-intensity cycling by almost 7%, but did not affect the classification accuracies of other activities. A feature reduction utilizing the feature importance scores of RF was also carried out and resulted in a shrunken feature set of only 21 features. The overall accuracy of the classification utilizing the shrunken feature set was 89.4 ± 4.2%, which is almost equivalent to the above-mentioned peak overall accuracy.
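Segmenting the sensor streams into 13-s windows with 90% overlap, as above, means advancing the window by 10% of its length each step; a minimal sketch (the sampling rate and names are ours, and the paper's feature extraction per window is not reproduced):

```python
def sliding_windows(signal, fs, win_s=13.0, overlap=0.9):
    """Split a sampled signal into fixed-length, overlapping segments."""
    win = int(win_s * fs)                    # samples per window
    step = max(1, int(win * (1 - overlap)))  # 90% overlap -> 10% step
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# 60 s of 1 Hz samples -> 13-sample windows advancing 1 sample at a time.
sig = list(range(60))
wins = sliding_windows(sig, fs=1)
print(len(wins), len(wins[0]))  # 48 13
```

Heavy overlap multiplies the number of training segments from the same recording, at the cost of correlated windows, which is why overlap is a tuning choice alongside window length.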
Massar, Melody L; Bhagavatula, Ramamurthy; Ozolek, John A; Castro, Carlos A; Fickus, Matthew; Kovačević, Jelena
2011-10-19
We present the current state of our work on a mathematical framework for identification and delineation of histopathology images: local histograms and occlusion models. Local histograms are histograms computed over defined spatial neighborhoods whose purpose is to characterize an image locally. This unit of description is augmented by our occlusion models, which describe a methodology for image formation. In the context of this image formation model, the power of local histograms with respect to appropriate families of images is shown through various proved statements about expected performance. We conclude by presenting a preliminary study demonstrating the power of the framework in histopathology image classification tasks that, while differing greatly in application, originate from what is considered an appropriate class of images for this framework.
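A local histogram in the sense above is simply a histogram restricted to a spatial neighbourhood of a pixel; a minimal sketch on a grayscale grid (the window shape, bin count, and names are our choices, not the paper's):

```python
def local_histogram(img, r, c, radius=1, bins=4, max_val=255):
    """Histogram of pixel values in the (2*radius+1)^2 neighbourhood
    of (r, c), characterising the image locally rather than globally."""
    hist = [0] * bins
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                b = min(bins - 1, img[rr][cc] * bins // (max_val + 1))
                hist[b] += 1
    return hist

img = [[10, 20, 200],
       [15, 30, 210],
       [12, 25, 220]]
# Six dark pixels fall in bin 0, three bright ones in bin 3.
print(local_histogram(img, 1, 1))  # [6, 0, 0, 3]
```

Sliding such a window across an image yields a per-pixel descriptor of local intensity structure, which is the unit of description the occlusion models build on.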
Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.
Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko
2016-01-01
The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, using deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To the best of our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering the images used to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% on separate class tests, and 96.3% on average.
Fish Ontology framework for taxonomy-based fish recognition
Ali, Najib M.; Khan, Haris A.; Then, Amy Y-Hui; Ving Ching, Chong; Gaur, Manas
2017-01-01
Life science ontologies play an important role in Semantic Web. Given the diversity in fish species and the associated wealth of information, it is imperative to develop an ontology capable of linking and integrating this information in an automated fashion. As such, we introduce the Fish Ontology (FO), an automated classification architecture of existing fish taxa which provides taxonomic information on unknown fish based on metadata restrictions. It is designed to support knowledge discovery, provide semantic annotation of fish and fisheries resources, data integration, and information retrieval. Automated classification for unknown specimens is a unique feature that currently does not appear to exist in other known ontologies. Examples of automated classification for major groups of fish are demonstrated, showing the inferred information by introducing several restrictions at the species or specimen level. The current version of FO has 1,830 classes, includes widely used fisheries terminology, and models major aspects of fish taxonomy, grouping, and character. With more than 30,000 known fish species globally, the FO will be an indispensable tool for fish scientists and other interested users. PMID:28929028
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. Existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a two-step process: dimensionality reduction based on an extension of the t-distributed stochastic neighbor embedding approach is performed first, and then a semantic segmentation technique is applied to the embedded results using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
Postacute rehabilitation quality of care: toward a shared conceptual framework.
Jesus, Tiago Silva; Hoenig, Helen
2015-05-01
There is substantial interest in mechanisms for measuring, reporting, and improving the quality of health care, including postacute care (PAC) and rehabilitation. Unfortunately, current activities generally are either too narrow or too poorly specified to reflect PAC rehabilitation quality of care. In part, this is caused by the lack of a shared conceptual understanding of what constitutes quality of care in PAC rehabilitation. This article presents the PAC-rehab quality framework: an evidence-based conceptual framework articulating elements specifically pertaining to PAC rehabilitation quality of care. The widely recognized Donabedian structure, process, and outcomes (SPO) model furnished the underlying structure for the PAC-rehab quality framework, and the International Classification of Functioning, Disability and Health (ICF) framed the functional outcomes. A comprehensive literature review provided the evidence base to specify elements within the SPO model and ICF-derived framework. A set of macrolevel outcomes (functional performance, quality of life of patient and caregivers, consumers' experience, place of discharge, health care utilization) were defined for PAC rehabilitation and then related to their (1) immediate and intermediate outcomes, (2) underpinning care processes, (3) supportive team functioning and improvement processes, and (4) underlying care structures. The role of environmental factors and the centrality of patients in the framework are explicated as well. Finally, we discuss why outcomes may best measure and reflect the quality of PAC rehabilitation. The PAC-rehab quality framework provides a conceptually sound, evidence-based framework appropriate for quality of care activities across the PAC rehabilitation continuum. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan
2013-02-01
The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. To this end, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and, finally, classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape, size, and texture of erythrocytes are extracted for parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating the six classes. A feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while the SVM achieves its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared for malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien
2012-12-01
With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
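One simple way to obtain the calibrated class probabilities this abstract emphasizes is Platt scaling: fit a sigmoid to held-out classifier scores so that the mapped outputs behave like probabilities. The sketch below is illustrative only (the actual MACC pipeline uses a more elaborate multi-class calibration); the gradient-descent fit and parameter names are ours.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.01, iters=5000):
    """Fit P(y=1|s) = 1 / (1 + exp(A*s + B)) to held-out classifier
    scores and binary labels by gradient descent on the log loss
    (Platt scaling). Returns a callable that maps scores to
    calibrated probabilities."""
    A, B = -1.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        grad = p - labels            # dLogLoss/dz for z = -(A*s + B)
        A -= lr * np.mean(grad * -scores)
        B -= lr * np.mean(grad * -1.0)
    return lambda s: 1.0 / (1.0 + np.exp(A * s + B))
```

A reliability diagram (binned predicted probability vs. empirical frequency) on a separate validation set is the usual way to check that the calibration actually holds.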
Maraví-Poma, E; Patchen Dellinger, E; Forsmark, C E; Layer, P; Lévy, P; Shimosegawa, T; Siriwardena, A K; Uomo, G; Whitcomb, D C; Windsor, J A; Petrov, M S
2014-05-01
To develop a new classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of the published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of specialists in pancreatic diseases, but are suboptimal because these definitions are based on the empiric description of events not associated with severity. A personal invitation to contribute to the development of a new classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists and radiologists currently active in the field of clinical acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global web-based survey was conducted, and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new classification of severity is based on the actual local and systemic determinants of severity, rather than on the description of events that are non-causally associated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, whereby the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater impact upon severity than either determinant alone. The derivation of a classification based on the above principles results in four categories of severity: mild, moderate, severe, and critical.
This classification is the result of a consultative process among specialists in pancreatic diseases from 49 countries spanning North America, South America, Europe, Asia, Oceania and Africa. It provides a set of concise, up-to-date definitions of all the main entities pertinent to classifying the severity of acute pancreatitis in clinical practice and research. This ensures that the determinant-based classification can be used in a uniform manner throughout the world. Copyright © 2013 Elsevier España, S.L. and SEMICYUC. All rights reserved.
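The two-determinant logic described above maps naturally to a small decision function. The category mapping below follows the way the determinant-based classification is commonly stated (mild = neither determinant; moderate = sterile necrosis and/or transient organ failure; severe = infected necrosis or persistent organ failure; critical = both); treat it as a sketch for illustration, not clinical guidance.

```python
def pancreatitis_severity(necrosis, organ_failure):
    """Determinant-based severity category from the two determinants.

    necrosis: 'none', 'sterile' or 'infected'  ((peri)pancreatic necrosis)
    organ_failure: 'none', 'transient' or 'persistent'
    """
    infected = necrosis == "infected"
    persistent = organ_failure == "persistent"
    if infected and persistent:
        # Both worst-case determinants together: greater impact than either alone
        return "critical"
    if infected or persistent:
        return "severe"
    if necrosis == "sterile" or organ_failure == "transient":
        return "moderate"
    return "mild"
```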
Integration of Network Topological and Connectivity Properties for Neuroimaging Classification
Jie, Biao; Gao, Wei; Wang, Qian; Wee, Chong-Yaw
2014-01-01
Rapid advances in neuroimaging techniques have provided an efficient and noninvasive way for exploring the structural and functional connectivity of the human brain. Quantitative measurement of abnormality of brain connectivity in patients with neurodegenerative diseases, such as mild cognitive impairment (MCI) and Alzheimer’s disease (AD), has also been widely reported, especially at a group level. Recently, machine learning techniques have been applied to the study of AD and MCI, i.e., to identify the individuals with AD/MCI from the healthy controls (HCs). However, most existing methods focus on using only a single property of a connectivity network, although multiple network properties, such as local connectivity and global topological properties, can potentially be used. In this paper, by employing a multikernel-based approach, we propose a novel connectivity-based framework to integrate multiple properties of the connectivity network for improving the classification performance. Specifically, two different types of kernels (i.e., vector-based kernel and graph kernel) are used to quantify two different yet complementary properties of the network, i.e., local connectivity and global topological properties. Then, the multikernel learning (MKL) technique is adopted to fuse these heterogeneous kernels for neuroimaging classification. We test the performance of our proposed method on two different data sets. First, we test it on the functional connectivity networks of 12 MCI and 25 HC subjects. The results show that our method achieves significant performance improvement over those using only one type of network property. Specifically, our method achieves a classification accuracy of 91.9%, which is 10.8% better than those by single network-property-based methods. Then, we test our method for gender classification on a large set of functional connectivity networks with 133 infants scanned at birth, 1 year, and 2 years, also demonstrating very promising results. PMID:24108708
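The core of any multikernel approach is fusing heterogeneous precomputed kernel matrices before classification. The sketch below shows the simplest convex combination of a "local connectivity" kernel and a "graph" kernel, with a kernel nearest-class-mean rule standing in for the SVM used in the paper; all function names are illustrative, and the stand-in classifier is our simplification, not the authors' method.

```python
import numpy as np

def combine_kernels(K_local, K_topo, beta):
    """Convex combination of two precomputed kernel matrices, the
    simplest form of multikernel learning (MKL): beta in [0, 1]
    weights the local-connectivity view against the topological view.
    In full MKL, beta itself would be learned jointly with the classifier."""
    return beta * K_local + (1.0 - beta) * K_topo

def kernel_nearest_mean_predict(K_train_test, y_train):
    """Classify each test point by its mean kernel similarity to each
    training class (a stand-in for a kernel SVM).
    K_train_test: (n_train, n_test) kernel between train and test sets."""
    classes = np.unique(y_train)
    sims = np.stack([K_train_test[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmax(sims, axis=0)]
```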
Classifying clinical decision making: a unifying approach.
Buckingham, C D; Adams, A
2000-10-01
This is the first of two linked papers exploring decision making in nursing which integrate research evidence from different clinical and academic disciplines. Currently there are many decision-making theories, each with their own distinctive concepts and terminology, and there is a tendency for separate disciplines to view their own decision-making processes as unique. Identifying good nursing decisions and where improvements can be made is therefore problematic, and this can undermine clinical and organizational effectiveness, as well as nurses' professional status. Within the unifying framework of psychological classification, the overall aim of the two papers is to clarify and compare terms, concepts and processes identified in a diversity of decision-making theories, and to demonstrate their underlying similarities. It is argued that the range of explanations used across disciplines can usefully be re-conceptualized as classification behaviour. This paper explores problems arising from multiple theories of decision making being applied to separate clinical disciplines. Attention is given to detrimental effects on nursing practice within the context of multidisciplinary health-care organizations and the changing role of nurses. The different theories are outlined and difficulties in applying them to nursing decisions highlighted. An alternative approach based on a general model of classification is then presented in detail to introduce its terminology and the unifying framework for interpreting all types of decisions. The classification model is used to provide the context for relating alternative philosophical approaches and to define decision-making activities common to all clinical domains. This may benefit nurses by improving multidisciplinary collaboration and weakening clinical elitism.
Classification of Sporting Activities Using Smartphone Accelerometers
Mitchell, Edmond; Monaghan, David; O'Connor, Noel E.
2013-01-01
In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today's society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset, comprised of soccer and field-hockey activities. An average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach. PMID:23604031
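The DWT feature extraction step can be illustrated with the Haar wavelet, the simplest mother wavelet, which keeps the sketch dependency-free (the paper compares several mother wavelets; in practice a library such as PyWavelets would be used). Detail-coefficient energies per decomposition level are one common discriminative feature for accelerometer windows; the function names here are ours.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(signal, dtype=float)
    x = x[: len(x) // 2 * 2]                     # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(window, levels=3):
    """Energy of the detail coefficients at each decomposition level
    of an accelerometer window: a simple DWT-based feature vector."""
    feats = []
    approx = np.asarray(window, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(float(np.sum(detail ** 2)))
    return feats
```

Feature vectors like these, computed per axis and per sliding window, would then feed the SVM, optimized model, or classifier fusion compared in the paper.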
Oregon Hydrologic Landscapes: A Classification Framework
There is a growing need for hydrologic classification systems that can provide a basis for broad-scale assessments of the hydrologic functions of landscapes and watersheds and their responses to stressors such as climate change. We developed a hydrologic landscape (HL) classifica...
A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines
Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert
2012-01-01
We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845
Ghasemzadeh, Hassan; Loseu, Vitali; Jafari, Roozbeh
2010-03-01
Mobile sensor-based systems are emerging as promising platforms for healthcare monitoring. An important goal of these systems is to extract physiological information about the subject wearing the network. Such information can be used for life logging, quality of life measures, fall detection, extraction of contextual information, and many other applications. The volume of data collected by these sensor nodes is overwhelming, and hence an efficient data processing technique is essential. In this paper, we present a system using inexpensive, off-the-shelf inertial sensor nodes that constructs motion transcripts from biomedical signals and identifies movements by taking collaboration between the nodes into consideration. Transcripts are built of motion primitives and aim to reduce the complexity of the original data. We then label each primitive with a unique symbol and generate a sequence of symbols, known as a motion template, representing a particular action. This model leads to a distributed algorithm for action recognition using edit distance with respect to motion templates. The algorithm reduces the number of active nodes during every classification decision. We present our results using data collected from five normal subjects performing transitional movements. The results clearly illustrate the effectiveness of our framework. In particular, we obtain a classification accuracy of 84.13% with only one sensor node involved in the classification process.
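Once movements are encoded as symbol sequences, the template-matching step reduces to a nearest-template search under edit distance. A minimal sketch (the symbol alphabets and template names below are made up for illustration; the paper's distributed, multi-node version is more involved):

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences, via the
    standard dynamic-programming table."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]

def classify_action(observed, templates):
    """Assign the observed symbol sequence to the action whose motion
    template is closest in edit distance."""
    return min(templates, key=lambda action: edit_distance(observed, templates[action]))
```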
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. 
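The key idea of supervised cross-validation, holding out entire known subtypes so the test set contains only classes of sequences unseen during training, can be sketched in a few lines (the tuple layout and function name are ours, for illustration):

```python
def supervised_split(samples, holdout_subtypes):
    """Supervised cross-validation split: every sample whose subtype is
    in holdout_subtypes goes to the test set, so the classifier is
    evaluated on subtypes it has never seen. Random k-fold, by contrast,
    leaks near-identical sequences into both train and test sets.

    samples: iterable of (sample_id, class_label, subtype) tuples.
    Returns (train, test) lists of (sample_id, class_label) pairs."""
    train, test = [], []
    for sample_id, label, subtype in samples:
        (test if subtype in holdout_subtypes else train).append((sample_id, label))
    return train, test
```

Rotating which subtypes are held out, at different levels of the classification hierarchy, yields the family of benchmark tasks the abstract describes.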
Duraisamy, Baskar; Shanmugam, Jayanthi Venkatraman; Annamalai, Jayanthi
2018-02-19
An early intervention of Alzheimer's disease (AD) is highly essential due to the fact that this neuro degenerative disease generates major life-threatening issues, especially memory loss among patients in society. Moreover, categorizing NC (Normal Control), MCI (Mild Cognitive Impairment) and AD early in course allows the patients to experience benefits from new treatments. Therefore, it is important to construct a reliable classification technique to discriminate the patients with or without AD from the bio medical imaging modality. Hence, we developed a novel FCM based Weighted Probabilistic Neural Network (FWPNN) classification algorithm and analyzed the brain images related to structural MRI modality for better discrimination of class labels. Initially our proposed framework begins with brain image normalization stage. In this stage, ROI regions related to Hippo-Campus (HC) and Posterior Cingulate Cortex (PCC) from the brain images are extracted using Automated Anatomical Labeling (AAL) method. Subsequently, nineteen highly relevant AD related features are selected through Multiple-criterion feature selection method. At last, our novel FWPNN classification algorithm is imposed to remove suspicious samples from the training data with an end goal to enhance the classification performance. This newly developed classification algorithm combines both the goodness of supervised and unsupervised learning techniques. The experimental validation is carried out with the ADNI subset and then to the Bordex-3 city dataset. Our proposed classification approach achieves an accuracy of about 98.63%, 95.4%, 96.4% in terms of classification with AD vs NC, MCI vs NC and AD vs MCI. The experimental results suggest that the removal of noisy samples from the training data can enhance the decision generation process of the expert systems.
Khalilzadeh, Omid; Baerlocher, Mark O; Shyn, Paul B; Connolly, Bairbre L; Devane, A Michael; Morris, Christopher S; Cohen, Alan M; Midia, Mehran; Thornton, Raymond H; Gross, Kathleen; Caplin, Drew M; Aeron, Gunjan; Misra, Sanjay; Patel, Nilesh H; Walker, T Gregory; Martinez-Salazar, Gloria; Silberzweig, James E; Nikolic, Boris
2017-10-01
To develop a new adverse event (AE) classification for the interventional radiology (IR) procedures and evaluate its clinical, research, and educational value compared with the existing Society of Interventional Radiology (SIR) classification via an SIR member survey. A new AE classification was developed by members of the Standards of Practice Committee of the SIR. Subsequently, a survey was created by a group of 18 members from the SIR Standards of Practice Committee and Service Lines. Twelve clinical AE case scenarios were generated that encompassed a broad spectrum of IR procedures and potential AEs. Survey questions were designed to evaluate the following domains: educational and research values, accountability for intraprocedural challenges, consistency of AE reporting, unambiguity, and potential for incorporation into existing quality-assurance framework. For each AE scenario, the survey participants were instructed to answer questions about the proposed and existing SIR classifications. SIR members were invited via online survey links, and 68 members participated among 140 surveyed. Answers on new and existing classifications were evaluated and compared statistically. Overall comparison between the two surveys was performed by generalized linear modeling. The proposed AE classification received superior evaluations in terms of consistency of reporting (P < .05) and potential for incorporation into existing quality-assurance framework (P < .05). Respondents gave a higher overall rating to the educational and research value of the new compared with the existing classification (P < .05). This study proposed an AE classification system that outperformed the existing SIR classification in the studied domains. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
A software framework for real-time multi-modal detection of microsleeps.
Knopp, Simon J; Bones, Philip J; Weddell, Stephen J; Jones, Richard D
2017-09-01
A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.
1993-01-01
An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant with the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (i.e., parallelepiped, nearest-neighbor, etc.) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly to a field ecologist/botanist). Also, classes derived from one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California to observe seasonal changes in surface composition.
Leontidis, Georgios
2017-11-01
The human retina is a diverse and important tissue, extensively studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside elastic-net logistic regression, the Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also have good discriminative potential. The classification systems yield promising results with the area under the curve values ranging from 0.821 to 0.968, across the four different investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tsevas, S.; Iakovidis, D. K.
2011-11-01
Pulmonary infiltrates are common radiological findings indicating the filling of airspaces with fluid, inflammatory exudates, or cells. They are most common in cases of pneumonia, acute respiratory syndrome, atelectasis, pulmonary oedema and haemorrhage, whereas their extent is usually correlated with the extent or the severity of the underlying disease. In this paper we propose a novel pattern recognition framework for the measurement of the extent of pulmonary infiltrates in routine chest radiographs. The proposed framework follows a hierarchical approach to the assessment of image content. It includes the following: (a) sampling of the lung fields; (b) extraction of patient-specific grey-level histogram signatures from each sample; (c) classification of the extracted signatures into classes representing normal lung parenchyma and pulmonary infiltrates; (d) the samples for which the probability of belonging to one of the two classes does not reach an acceptable level are rejected and classified according to their textural content; (e) merging of the classification results of the two classification stages. The proposed framework has been evaluated on real radiographic images with pulmonary infiltrates caused by bacterial infections. The results show that accurate measurements of the infiltration areas can be obtained with respect to each lung field area. The average measurement error rate on the considered dataset reached 9.7% ± 1.0%.
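Step (d) of the pipeline above is a classic reject-option design: accept the first-stage label only when its posterior probability clears a threshold, otherwise defer the sample to a second-stage classifier (texture-based, in the paper). A minimal sketch of that control flow, with illustrative names and a caller-supplied second stage:

```python
import numpy as np

def two_stage_classify(probs_stage1, threshold, stage2_fn, samples):
    """Hierarchical classification with a reject option.

    probs_stage1: (n, n_classes) first-stage class probabilities.
    threshold: minimum acceptable posterior probability.
    stage2_fn: fallback classifier mapping one sample to a label index.
    samples: the raw samples, indexed in the same order as probs_stage1.
    Returns (labels, confident) where confident flags first-stage accepts."""
    labels = np.argmax(probs_stage1, axis=1)
    confident = np.max(probs_stage1, axis=1) >= threshold
    for i in np.where(~confident)[0]:
        labels[i] = stage2_fn(samples[i])       # rejected: defer to stage two
    return labels, confident
```

Merging the two stages' outputs, as in step (e), is then just reading off `labels`.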
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
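The first step, projecting inputs through randomly generated weights and a bias, followed by multinomial logistic regression on the projected features, can be sketched with plain numpy. This is a bare-bones illustration only: it uses vanilla softmax regression without the sparsity term, Lagrangian initialization, EMAPs, or LORSAL solver of the actual ESMLR framework, and all names are ours.

```python
import numpy as np

def random_projection(X, n_hidden, rng):
    """ELM-style random feature mapping: project inputs with random
    weights and bias, then apply a sigmoid nonlinearity."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b))), (W, b)

def fit_softmax(H, y, n_classes, lr=0.5, iters=500):
    """Plain multinomial logistic regression on projected features H,
    fit by batch gradient descent on the cross-entropy loss."""
    Theta = np.zeros((H.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(iters):
        Z = H @ Theta
        P = np.exp(Z - Z.max(axis=1, keepdims=True))  # stable softmax
        P /= P.sum(axis=1, keepdims=True)
        Theta -= lr * H.T @ (P - Y) / len(y)
    return Theta
```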
Kuhlmann-Gottke, Johanna; Duchow, Karin
2015-11-01
At present, there is no separate regulatory framework for cell-based medicinal products (CBMP) for veterinary use at the European or German level. Current European and national regulations exclusively apply to the corresponding medicinal products for human use. An increasing number of requests for the regulatory classification of CBMP for veterinary use, such as allogeneic stem cell preparations and dendritic cell-based autologous tumour vaccines, and a rise in scientific advice for companies developing these products, illustrate the need for adequate legislation. Currently, advice is given and decisions are made on a case-by-case basis regarding the regulatory classification and authorisation requirements. Since some of the CBMP - in particular in the area of stem-cell products - are developed in parallel for human and veterinary use, there is an urgent need to create specific legal definitions, regulations, and guidelines for these complex innovative products in the veterinary sector as well. Otherwise, there is a risk that the current legal grey area regarding veterinary medicinal products will impede therapeutic innovations in the long run. A harmonised EU-wide approach is desirable. Currently the European legislation on veterinary medicinal products is under revision. In this context, veterinary therapeutics based on allogeneic cells and tissues will be defined and regulated. Certainly, the legal framework does not have to be as comprehensive as for human CBMP; a leaner solution is conceivable, similar to the special provisions for advanced-therapy medicinal products laid down in the German Medicines Act.
A kinematic classification of the cosmic web
NASA Astrophysics Data System (ADS)
Hoffman, Yehuda; Metuki, Ofer; Yepes, Gustavo; Gottlöber, Stefan; Forero-Romero, Jaime E.; Libeskind, Noam I.; Knebe, Alexander
2012-09-01
A new approach for the classification of the cosmic web is presented. In extension of the previous work of Hahn et al. and Forero-Romero et al., the new algorithm is based on the analysis of the velocity shear tensor rather than the gravitational tidal tensor. The procedure consists of the construction of the shear tensor at each (grid) point in space and the evaluation of its three eigenvectors. A given point is classified as a void, sheet, filament or knot according to the number of eigenvalues above a certain threshold, 0, 1, 2 or 3, respectively. The threshold is treated as a free parameter that defines the web. The algorithm has been applied to a dark-matter-only simulation of a box of side length 64 h⁻¹ Mpc and N = 1024³ particles within the framework of the 5-year Wilkinson Microwave Anisotropy Probe Λ cold dark matter (ΛCDM) model. The resulting velocity-based cosmic web resolves structures down to ≲0.1 h⁻¹ Mpc scales, as opposed to the ≈1 h⁻¹ Mpc scale of the tidal-based web. The underdense regions are made of extended voids bisected by planar sheets, whose density is also below the mean. The overdense regions are vastly dominated by the linear filaments and knots. The resolution achieved by the velocity-based cosmic web provides a platform for studying the formation of haloes and galaxies within the framework of the cosmic web.
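The eigenvalue-counting rule at the heart of this classification is compact enough to show directly; given a symmetric 3x3 shear tensor at a grid point, count eigenvalues above the free threshold parameter (a sketch with an illustrative function name):

```python
import numpy as np

def classify_web(shear_tensor, threshold=0.0):
    """Classify a grid point of the cosmic web from its (symmetric 3x3)
    velocity shear tensor by counting eigenvalues above the threshold:
    0 -> void, 1 -> sheet, 2 -> filament, 3 -> knot.
    The threshold is a free parameter that defines the web."""
    eigvals = np.linalg.eigvalsh(shear_tensor)   # eigvalsh: symmetric input
    n_above = int(np.sum(eigvals > threshold))
    return ["void", "sheet", "filament", "knot"][n_above]
```

Applying this per grid cell over the whole simulation box, and sweeping the threshold, reproduces the web taxonomy the abstract describes.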
NASA Astrophysics Data System (ADS)
Jiménez Jaramillo, M. A.; Camacho Botero, L. A.; Vélez Upegui, J. I.
2010-12-01
Variation in stream morphology along a basin drainage network leads to different hydraulic patterns and sediment transport processes. Moreover, solute transport processes along streams, and stream habitats for fisheries and microorganisms, rely on stream corridor structure, including elements such as bed forms, channel patterns, riparian vegetation, and the floodplain. In this work, solute transport simulation and stream habitat identification are carried out at the basin scale. A reach-scale morphological classification system based on channel slope and specific stream power was implemented using digital elevation models and hydraulic geometry relationships. Although the morphological framework allows identification of cascade, step-pool, plane-bed and pool-riffle morphologies along the drainage network, it does not yet account for floodplain configuration or for bed-form identification within those channel types. Hence, as a first application case, in order to obtain parsimonious three-dimensional characterizations of drainage channels, the morphological framework has been updated to include topographical floodplain delimitation through a Multi-resolution Valley Bottom Flatness Index assessment, and a stochastic bed-form representation of the step-pool morphology. Model outcomes were tested in relation to in-stream water storage for different flow conditions and representative travel times according to the Aggregated Dead Zone (ADZ) model conceptualization of solute transport processes.
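The slope-based part of such a reach classification can be sketched as follows. The thresholds are illustrative values loosely after Montgomery and Buffington (1997), not the ones used in this work, and the specific-stream-power criterion is omitted.

```python
def classify_reach(slope):
    """Toy reach-type classifier from channel slope (m/m) alone.
    Slope breakpoints are illustrative, not this study's values."""
    if slope < 0.015:
        return "pool-riffle"
    elif slope < 0.03:
        return "plane-bed"
    elif slope < 0.065:
        return "step-pool"
    return "cascade"

# Hypothetical slopes extracted from a DEM along the drainage network
for s in (0.005, 0.02, 0.05, 0.10):
    print(s, classify_reach(s))
```

A DEM-driven implementation would apply such a rule reach by reach along the extracted network, then overlay the valley-bottom and bed-form layers described above.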
Practical aspects of chemometrics for oil spill fingerprinting.
Christensen, Jan H; Tomasi, Giorgio
2007-10-26
Tiered approaches for oil spill fingerprinting have evolved rapidly since the 1990s. Chemometrics provides a large number of tools for pattern recognition, calibration and classification that can increase the speed and objectivity of the analysis and allow for more extensive use of the available data in this field. However, although the chemometric literature is extensive, it does not focus on practical issues relevant to oil spill fingerprinting. The aim of this review is to provide a framework for the use of chemometric approaches in tiered oil spill fingerprinting and to provide clear-cut practical details and experiences that can be used by the forensic chemist. The framework is based on methods for initial screening, which include classification of samples into oil types, detection of non-matches and assessment of weathering state, and on detailed oil spill fingerprinting, in which a more rigorous matching of an oil spill sample to suspected source oils is obtained. This review is intended as a tutorial, and is based on two examples of initial screening using, respectively, gas chromatography with flame ionization detection and fluorescence spectroscopy, and two examples of detailed oil spill fingerprinting in which gas chromatography-mass spectrometry data are analyzed according to two approaches: the first relying on sections of processed chromatograms and the second on diagnostic ratios.
Miciak, Jeremy; Taylor, Pat; Denton, Carolyn A.; Fletcher, Jack M.
2014-01-01
Purpose: Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. Methods: C/DM criteria were applied to assessment data from 177 second-grade students based on two psychoeducational assessment batteries. The achievement tests were different, but were highly correlated and measured the same latent construct. Resulting LD identifications were then evaluated for agreement across batteries on LD status and the academic domain of eligibility. Results: The two batteries identified a similar number of participants as having LD (80 and 74). However, indices of agreement for classification decisions were low (kappa = .29), especially for percent positive agreement (62%). The two batteries agreed on the academic domain of eligibility for only 25 participants. Conclusions: Cognitive discrepancy frameworks for LD identification are inherently unstable because of imperfect reliability and validity at the observed level. Methods premised on identifying a PSW profile may never achieve high reliability because of these underlying psychometric factors. An alternative is to directly assess academic skills to identify students in need of intervention. PMID:25243467
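The agreement indices reported above (Cohen's kappa and percent positive agreement) can be computed from a 2x2 cross-classification of the two batteries' decisions. A minimal sketch follows; the cell counts are invented for illustration, since the abstract does not give the full table.

```python
def kappa_and_ppa(a, b, c, d):
    """Agreement indices for a 2x2 classification table:
        a = both batteries positive, b = battery 1 only,
        c = battery 2 only,          d = both negative.
    Returns (Cohen's kappa, positive-specific agreement)."""
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p1 = (a + b) / n; p2 = (a + c) / n      # marginal positive rates
    pe = p1 * p2 + (1 - p1) * (1 - p2)      # chance-expected agreement
    kappa = (po - pe) / (1 - pe)
    ppa = 2 * a / (2 * a + b + c)           # percent positive agreement
    return kappa, ppa

# Hypothetical counts for 177 students (not the study's actual data)
k, ppa = kappa_and_ppa(48, 32, 26, 71)
print(round(k, 2), round(ppa, 2))
```

Positive-specific agreement is the more demanding index here because it ignores the joint negatives, which dominate the table.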
The challenges of describing rehabilitation services: A discussion paper.
Røe, Cecilie; Kirkevold, Marit; Andelic, Nada; Soberg, Helene L; Sveen, Unni; Bautz-Holter, Erik; Jahnsen, Reidun; van Walsem, Marleen R; Kildal Bragstad, Line; Gabrielsen Hjelle, Ellen; Klevberg, Gunvor; Oretorp, Per; Habberstad, Andreas; Hagfors, Jon; Væhle, Randi; Engen, Grace; Gutenbrunner, Christoph
2018-02-13
To apply the Classification of Service Organization in Rehabilitation (ICSO-R) classification of services to different target groups, include the user perspective, identify missing categories, and propose standardized descriptors for the categories from a Norwegian perspective. Expert-based consensus conferences with user involvement. Health professionals, stakeholders and users. Participants were divided into 5 panels, which applied the ICSO-R to describe the habilitation and rehabilitation services provided to children with cerebral palsy and people with Huntington's disease, acquired brain injuries (traumatic brain injuries and stroke) and painful musculoskeletal conditions. Based on the Problem/Population, Intervention, Comparison, Outcome (PICO) framework, the services were described according to the ICSO-R. Missing categories were identified. The ICSO-R was found to be feasible and applicable for describing a variety of services provided to different target groups in Norway, but the user perspective was lacking, categories were missing, and a need for standardized description of the categories was identified. The present work supports the need to produce an updated version of the ICSO-R and to encourage national and international discussion of the framework. The ICSO-R has the potential to become a tool for the standardized assessment of rehabilitation services. For such purposes, more standardized descriptions of subcategories are necessary.
Extending cluster Lot Quality Assurance Sampling designs for surveillance programs
Hund, Lauren; Pagano, Marcello
2014-01-01
Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
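The sample-size inflation described above can be sketched with the standard design-effect adjustment for two-stage cluster sampling. This is a generic illustration of the idea, not the authors' exact non-parametric procedure; the SRS decision-rule size, cluster size, and intracluster correlation (ICC) below are hypothetical.

```python
import math

def lqas_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sample LQAS size by the design effect
    deff = 1 + (m - 1) * ICC for clusters of m units each."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

# e.g. an SRS design needing 19 children, clusters of 10, ICC = 0.1
n = lqas_sample_size(19, 10, 0.1)
print(n)
```

The classification rule itself (poor vs. acceptable, based on the count of cases in the sample) is unchanged; only the sample size grows to preserve the design's error rates under clustering.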
NASA Astrophysics Data System (ADS)
Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng
2018-02-01
Because islanding detection is easily confounded by grid disturbances, a detection device may misjudge events and needlessly take a photovoltaic system out of service. The device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing method applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsically different features of islanding and grid disturbance can be extracted. With these features extracted, deep learning is used to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that mistaken disconnection of the photovoltaic system from the grid can be avoided.
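The singular-spectrum-entropy feature at the core of this pipeline can be sketched generically: build a trajectory (Hankel) matrix from the signal, take its singular values, normalise them, and compute a Shannon entropy. The paper applies this per wavelet sub-band, which is omitted here; window length and signals are illustrative.

```python
import numpy as np

def singular_spectrum_entropy(signal, window=8):
    """Shannon entropy of the normalised singular spectrum of a
    signal's trajectory matrix: low for near-periodic signals,
    higher for disturbed/noisy ones."""
    n = len(signal) - window + 1
    traj = np.array([signal[i:i + window] for i in range(n)])  # Hankel rows
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 50 * t)              # steady grid-like waveform
noisy = clean + 0.5 * rng.standard_normal(256)  # disturbed waveform
print(singular_spectrum_entropy(clean), singular_spectrum_entropy(noisy))
```

The entropy gap between the two cases is the kind of discriminative feature that the deep classifier then consumes.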
Achleitner, Stefan; De Toffol, Sara; Engelhard, Carolina; Rauch, Wolfgang
2005-04-01
The European Water Framework Directive (WFD) is probably the most important environmental management directive enacted over the last decade in the European Union. The directive aims at achieving an overall good ecological status in all European water bodies. In this article, we discuss the implementation steps of the WFD and their implications for environmental engineering practice, focusing on rivers as the main receiving waters. Challenges for engineers and scientists arise in the quantitative assessment of water quality, where standardized systems are needed to estimate the biological status. This is equally of concern in engineering planning, where the prediction of ecological impacts is required. Studies dealing with both classification and prediction of ecological water quality are reviewed. Further, the combined emission-water quality approach is discussed. The common understanding of this combined approach is to apply the more stringent of either the water quality or the emission standard to a given case. In contrast, the Austrian water act, for example, enables the application of only the water quality based approach--at least on a temporary basis.
Classification and grading of muscle injuries: a narrative review
Hamilton, Bruce; Valle, Xavier; Rodas, Gil; Til, Luis; Grive, Ricard Pruna; Rincon, Josep Antoni Gutierrez; Tol, Johannes L
2015-01-01
A limitation to the accurate study of muscle injuries and their management has been the lack of a uniform approach to the categorisation and grading of muscle injuries. The goal of this narrative review was to provide a framework from which to understand the historical progression of the classification and grading of muscle injuries. We reviewed the classification and grading of muscle injuries in the literature to critically illustrate the strengths, weaknesses, contradictions or controversies. A retrospective, citation-based methodology was applied to search for English language literature which evaluated or utilised a novel muscle classification or grading system. While there is an abundance of literature classifying and grading muscle injuries, it is predominantly expert opinion, and there remains little evidence relating any of the clinical or radiological features to an established pathology or clinical outcome. While the categorical grading of injury severity may have been a reasonable solution to a clinical challenge identified in the middle of the 20th century, it is time to recognise the complexity of the injury, cease trying to oversimplify it and to develop appropriately powered research projects to answer important questions. PMID:25394420
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - achieved through the regularization of semantic segmentation to a multimodal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
Cross-entropy clustering framework for catchment classification
NASA Astrophysics Data System (ADS)
Tongal, Hakan; Sivakumar, Bellie
2017-09-01
There is an increasing interest in catchment classification and regionalization in hydrology, as they are useful for identification of appropriate model complexity and transfer of information from gauged catchments to ungauged ones, among others. This study introduces a nonlinear cross-entropy clustering (CEC) method for classification of catchments. The method specifically considers embedding dimension (m), sample entropy (SampEn), and coefficient of variation (CV) to represent dimensionality, complexity, and variability of the time series, respectively. The method is applied to daily streamflow time series from 217 gauging stations across Australia. The results suggest that a combination of linear and nonlinear parameters (i.e. m, SampEn, and CV), representing different aspects of the underlying dynamics of streamflows, could be useful for determining distinct patterns of flow generation mechanisms within a nonlinear clustering framework. For the 217 streamflow time series, nine hydrologically homogeneous clusters that have distinct patterns of flow regime characteristics and specific dominant hydrological attributes with different climatic features are obtained. Comparison of the results with those obtained using the widely employed k-means clustering method (which results in five clusters, with the loss of some information about the features of the clusters) suggests the superiority of the cross-entropy clustering method. The outcomes from this study provide a useful guideline for employing the nonlinear dynamic approaches based on hydrologic signatures and for gaining an improved understanding of streamflow variability at a large scale.
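Of the three catchment descriptors above, sample entropy (SampEn) is the least standard. A minimal O(n²) sketch follows; it is not the study's implementation, and the tolerance r = 0.2·σ is a common convention rather than necessarily the authors' choice.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -log(A/B), where B counts template pairs of length m
    within Chebyshev tolerance r, and A the same for length m+1.
    Lower values indicate a more regular (self-similar) series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templates)):
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            c += int((d <= r).sum())
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # periodic "streamflow"
irregular = rng.standard_normal(500)                # erratic series
print(sample_entropy(regular), sample_entropy(irregular))
```

Together with the embedding dimension m and the coefficient of variation (std/mean), such a value forms the three-feature vector that the cross-entropy clustering operates on.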
ERIC Educational Resources Information Center
Lester, Stan
2018-01-01
Purpose: The purpose of this paper is to review three international frameworks, including the International Standard Classification of Education (ISCED), in relation to one country's higher professional and vocational education system. Design/methodology/approach: The frameworks were examined in the context of English higher work-related…
Naik, Hsiang Sing; Zhang, Jiaoping; Lofquist, Alec; Assefa, Teshale; Sarkar, Soumik; Ackerman, David; Singh, Arti; Singh, Asheesh K; Ganapathysubramanian, Baskar
2017-01-01
Phenotyping is a critical component of plant research. Accurate and precise trait collection, when integrated with genetic tools, can greatly accelerate the rate of genetic gain in crop improvement. However, efficient and automatic phenotyping of traits across large populations is a challenge, which is further exacerbated by the necessity of sampling multiple environments and growing replicated trials. A promising approach is to leverage current advances in imaging technology, data analytics and machine learning to enable automated and fast phenotyping and subsequent decision support. In this context, the workflow for phenotyping (image capture → data storage and curation → trait extraction → machine learning/classification → models/apps for decision support) has to be carefully designed and efficiently executed to minimize resource usage and maximize utility. We illustrate such an end-to-end phenotyping workflow for the case of plant stress severity phenotyping in soybean, with a specific focus on the rapid and automatic assessment of iron deficiency chlorosis (IDC) severity on thousands of field plots. We showcase this analytics framework by extracting IDC features from a set of ~4500 unique canopies representing a diverse germplasm base with different levels of IDC, and subsequently training a variety of classification models to predict plant stress severity. The best classifier is then deployed as a smartphone app for rapid and real-time severity rating in the field. We investigated 10 different classification approaches, the best being a hierarchical classifier with a mean per-class accuracy of ~96%. We construct a phenotypically meaningful 'population canopy graph', connecting the automatically extracted canopy trait features with plant stress severity rating.
We incorporated this image capture → image processing → classification workflow into a smartphone app that enables automated real-time evaluation of IDC scores using digital images of the canopy. We expect this high-throughput framework to help increase the rate of genetic gain by providing a robust extendable framework for other abiotic and biotic stresses. We further envision this workflow embedded onto a high throughput phenotyping ground vehicle and unmanned aerial system that will allow real-time, automated stress trait detection and quantification for plant research, breeding and stress scouting applications.
Harmonising Nursing Terminologies Using a Conceptual Framework.
Jansen, Kay; Kim, Tae Youn; Coenen, Amy; Saba, Virginia; Hardiker, Nicholas
2016-01-01
The International Classification for Nursing Practice (ICNP®) and the Clinical Care Classification (CCC) System are standardised nursing terminologies that identify discrete elements of nursing practice, including nursing diagnoses, interventions, and outcomes. While CCC uses a conceptual framework or model with 21 Care Components to classify these elements, ICNP, built on a formal Web Ontology Language (OWL) description logic foundation, uses a logical hierarchical framework that is useful for computing and maintenance of ICNP. Since the logical framework of ICNP may not always align with the needs of nursing practice, an informal framework may be a more useful organisational tool to represent nursing content. The purpose of this study was to classify ICNP nursing diagnoses using the 21 Care Components of the CCC as a conceptual framework to facilitate usability and inter-operability of nursing diagnoses in electronic health records. Findings resulted in all 521 ICNP diagnoses being assigned to one of the 21 CCC Care Components. Further research is needed to validate the resulting product of this study with practitioners and develop recommendations for improvement of both terminologies.
Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos
2005-01-01
Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
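The PACE idea, comparing the achieved classification error with its distribution under randomly permuted labels, can be sketched generically. The toy leave-one-out nearest-centroid error function and the synthetic two-class data below are illustrative, not the authors' setup.

```python
import numpy as np

def pace_pvalue(X, y, error_fn, n_perm=200, seed=0):
    """PACE-style check: the fraction of permuted-label runs whose error
    is at or below the achieved classification error (ACE). A small
    p-value suggests a genuine discriminative signal."""
    rng = np.random.default_rng(seed)
    ace = error_fn(X, y)
    perm_errors = [error_fn(X, rng.permutation(y)) for _ in range(n_perm)]
    p = (1 + sum(e <= ace for e in perm_errors)) / (1 + n_perm)
    return ace, p

def nc_error(X, y):
    """Toy leave-one-out nearest-centroid classification error."""
    errs = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        preds = []
        for c in np.unique(y[mask]):
            mu = X[mask][y[mask] == c].mean(axis=0)
            preds.append((np.linalg.norm(X[i] - mu), c))
        errs += min(preds)[1] != y[i]
    return errs / len(y)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(1.5, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
ace, p = pace_pvalue(X, y, nc_error, n_perm=100)
print(ace, p)
```

Because the wrapper only calls `error_fn(X, y)`, it combines with any classification model, which is exactly the flexibility the abstract highlights.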
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
HOTEX: An Approach for Global Mapping of Human Built-Up and Settlement Extent
NASA Technical Reports Server (NTRS)
Wang, Panshi; Huang, Chengquan; Tilton, James C.; Tan, Bin; Brown De Colstoun, Eric C.
2017-01-01
Understanding the impacts of urbanization requires accurate and updatable urban extent maps. Here we present an algorithm for mapping urban extent at global scale using Landsat data. An innovative hierarchical object-based texture (HOTex) classification approach was designed to overcome spectral confusion between urban and nonurban land cover types. VIIRS nightlights data and MODIS vegetation index datasets are integrated as high-level features under an object-based framework. We applied the HOTex method to the GLS-2010 Landsat images to produce a global map of human built-up and settlement extent. As shown by visual assessments, our method could effectively map urban extent and generate consistent results using images with inconsistent acquisition time and vegetation phenology. Using scene-level cross validation for results in Europe, we assessed the performance of HOTex and achieved a kappa coefficient of 0.91, compared to 0.74 from a baseline method using per-pixel classification using spectral information.
Multiple-instance ensemble learning for hyperspectral images
NASA Astrophysics Data System (ADS)
Ergul, Ugur; Bilgin, Gokhan
2017-10-01
An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use with hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on selected instances. In addition to bootstrap aggregation, random subspace selection is used to further diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are reported against state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equivalent non-MIL algorithms.
CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.
Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W
2010-09-01
Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
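The classifier-plus-hidden-Markov idea for suppressing spurious state flips can be sketched with a generic Viterbi pass over per-frame class scores. This is an illustration of the approach, not CellCognition's code; the observation and transition probabilities below are invented.

```python
import numpy as np

def viterbi_smooth(frame_logprob, trans_logprob):
    """Most likely state sequence given per-frame class log-likelihoods
    (T x K) and a state-transition log-probability matrix (K x K).
    Sticky transitions suppress single-frame misclassifications."""
    T, K = frame_logprob.shape
    delta = frame_logprob[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans_logprob   # K x K predecessor scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + frame_logprob[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two states; the frame classifier briefly flips to state 1 by mistake
obs = np.log(np.array([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.9, 0.1]]))
trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
print(viterbi_smooth(obs, trans))
```

The sticky transition matrix makes a one-frame excursion to the other state more expensive than staying put, which is how time information suppresses classification noise at state transitions.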
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies in new ways traditional methods of library classification and cataloging. It is compatible with the two traditional styles of information seeking: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-informational science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
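The two-step, location-independent resolution described above can be illustrated with a minimal sketch. All call numbers, shelf identifiers and hostnames below are hypothetical.

```python
# First mapping: vocabulary-code-based call numbers to virtual-shelf ids
call_number_to_shelf = {
    "C14.280": "shelf:cardiology",
    "C04.588": "shelf:oncology",
}
# Second mapping: a location directory from shelf ids to network locations
location_directory = {
    "shelf:cardiology": "server-a.example.org",
    "shelf:oncology":   "server-b.example.org",
}

def resolve(call_number):
    shelf = call_number_to_shelf[call_number]   # location-independent step
    return location_directory[shelf]            # location-dependent step

print(resolve("C14.280"))
```

The point of the indirection is that when a shelf server moves, only the location directory changes; the call numbers attached to information sources remain stable.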
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Bevelhimer, Mark S; Frimpong, Dr. Emmanuel A,
2014-01-01
Classification systems are valuable to ecological management in that they organize information into consolidated units, thereby providing efficient means to achieve conservation objectives. Of the many ways classifications benefit management, hypothesis generation has been discussed as the most important. However, in order to provide templates for developing and testing ecologically relevant hypotheses, classifications created using environmental variables must be linked to ecological patterns. Herein, we develop associations between a recent US hydrologic classification and fish traits in order to form a template for generating flow-ecology hypotheses and supporting environmental flow standard development. Tradeoffs in adaptive strategies for fish were observed across a spectrum from stable, perennial flow to unstable, intermittent flow. In accordance with theory, periodic strategists were associated with stable, predictable flow, whereas opportunistic strategists were more affiliated with intermittent, variable flows. We developed linkages between the uniqueness of hydrologic character and ecological distinction among classes, which may translate into predictions between losses in hydrologic uniqueness and ecological community response. Comparisons of classification strength between hydrologic classifications and other frameworks suggested that spatially contiguous classifications with higher regionalization will tend to explain more variation in ecological patterns. Despite explaining less ecological variation than other frameworks, we contend that hydrologic classifications are still useful because they provide a conceptual linkage between hydrologic variation and ecological communities to support flow-ecology relationships. Mechanistic associations among fish traits and hydrologic classes support the presumption that environmental flow standards should be developed uniquely for stream classes and the ecological communities therein.
Shore zone land use and land cover: Central Atlantic Regional Ecological Test Site
Dolan, R.; Hayden, B.P.; Vincent, C.L.
1974-01-01
Anderson's 1972 United States Geological Survey classification in modified form was applied to the barrier-island coastline within the CARETS region. High-altitude, color-infrared photography of December, 1972, and January, 1973, served as the primary data base in this study. The CARETS shore zone studied was divided into six distinct geographical regions; area percentages for each class in the modified Anderson classification are presented. Similarities and differences between regions are discussed within the framework of man's modification of these landscapes. The results of this study are presented as a series of 19 maps of land-use categories. Recommendations are made for a remote-sensing system for monitoring the CARETS shore zone within the context of the dynamics of the landscapes studied.
Page layout analysis and classification for complex scanned documents
NASA Astrophysics Data System (ADS)
Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan
2011-09-01
A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in the high-frequency bands of the wavelet domain are generated and used as initial text maps; further analysis using RLE yields a final text map. The second module detects image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections identifies photo candidate regions, and a final photo map is obtained by applying Markov random field (MRF) based maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs effectively on a variety of simple and complex scanned document types from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
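The run-length refinement step described above can be sketched in isolation. The snippet below is a minimal illustration, not the authors' implementation: it encodes a binarized row from an initial text map and keeps only foreground runs whose lengths fall in a plausible text-stroke range (the `min_run`/`max_run` bounds are hypothetical parameters).

```python
def rle(row):
    """Run-length encode a binary row as (value, length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((row[-1], count))
    return runs

def refine_text_row(row, min_run=2, max_run=6):
    """Keep only foreground runs whose length is plausible for text strokes."""
    out, pos = [0] * len(row), 0
    for value, length in rle(row):
        if value == 1 and min_run <= length <= max_run:
            out[pos:pos + length] = [1] * length
        pos += length
    return out

# A long 8-pixel run (too wide for a stroke) is rejected; short runs survive
print(refine_text_row([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1]))
```

In the full method this filtering would be applied row- and column-wise to the wavelet-derived candidate map before the text and photo maps are merged.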
A Classification System to Guide Physical Therapy Management in Huntington Disease: A Case Series.
Fritz, Nora E; Busse, Monica; Jones, Karen; Khalil, Hanan; Quinn, Lori
2017-07-01
Individuals with Huntington disease (HD), a rare neurological disease, experience impairments in mobility and cognition throughout their disease course. The Medical Research Council framework provides a schema that can be applied to the development and evaluation of complex interventions, such as those provided by physical therapists. Treatment-based classifications, based on expert consensus and available literature, are helpful in guiding physical therapy management across the stages of HD. Such classifications also contribute to the development and further evaluation of well-defined complex interventions in this highly variable and complex neurodegenerative disease. The purpose of this case series was to illustrate the use of these classifications in the management of 2 individuals with late-stage HD. Two women, aged 40 and 55 years, with late-stage HD participated in this case series. Both experienced progressive declines in ambulatory function and balance as well as falls or fear of falling. Both individuals received daily care in the home for activities of daily living. Physical therapy Treatment-Based Classifications for HD guided the interventions and outcomes. Eight weeks of in-home balance training, strength training, task-specific practice of functional activities including transfers and walking tasks, and family/carer education were provided. Both individuals demonstrated improvements that met or exceeded the established minimal detectable change values for gait speed and Timed Up and Go performance. Both also demonstrated improvements on Berg Balance Scale and Physical Performance Test performance, with 1 of the 2 individuals exceeding the established minimal detectable changes for both tests. Reductions in fall risk were evident in both cases. These cases provide proof-of-principle to support use of treatment-based classifications for physical therapy management in individuals with HD.
Traditional classification of early-, mid-, and late-stage disease progression may not reflect patients' true capabilities; those with late-stage HD may be as responsive to interventions as those at an earlier disease stage.Video Abstract available for additional insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A172).
Alternative Classification Framework for Engineering Capability Enhancement
ERIC Educational Resources Information Center
Patamakajonpong, Mana; Chandarasupsang, Tirapot
2015-01-01
Purpose: This paper aims to present an alternative practical framework to classify the skill and knowledge of individual trainees by comparing them with experts in an organization. This framework benefits the organization by revealing personnel ability levels and making it possible to provide the personnel development method…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... these errors. DATES: Effective October 18, 2010. FOR FURTHER INFORMATION CONTACT: Emily Bryant, Fishery... incurred prior to Framework 21's effectiveness would be applied to FY 2011 allocations. NMFS received one...) and final rules to implement Framework 21. Classification Pursuant to 5 U.S.C. 553(b)(B), the...
Assessing classification systems that describe natural variation across regions is an important first step for developing indicators. We evaluated a hydrogeologic framework for first order streams in the mid-Atlantic Coastal Plain as part of the LIPS-MACS (Landscape Indicators f...
FRAN and RBF-PSO as two components of a hyper framework to recognize protein folds.
Abbasi, Elham; Ghatee, Mehdi; Shiri, M E
2013-09-01
In this paper, an intelligent hyper framework is proposed to recognize protein folds from their amino acid sequences, a fundamental problem in bioinformatics. This framework includes statistical and intelligent algorithms for protein classification. The main components of the proposed framework are the Fuzzy Resource-Allocating Network (FRAN) and the Radial Basis Function network tuned by Particle Swarm Optimization (RBF-PSO). FRAN applies a dynamic method to tune the RBF network parameters. Due to the complexity of the patterns captured in the protein dataset, FRAN classifies the proteins under fuzzy conditions. RBF-PSO, in turn, applies PSO to tune the RBF classifier. Experimental results demonstrate that FRAN improves prediction accuracy up to 51% and achieves acceptable multi-class results for protein fold prediction. Although RBF-PSO provides reasonable results for protein fold recognition, up to 48%, it is weaker than FRAN in some cases. However, the proposed hyper framework provides an opportunity to use a wide range of intelligent methods and can learn from previous experience, allowing it to avoid the weaknesses of some intelligent methods in terms of memory, computational time, and static structure. Furthermore, the performance of this system can be enhanced throughout the system life-cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.
Assessing the benefits and risks of translocations in changing environments: a genetic perspective
Weeks, Andrew R; Sgro, Carla M; Young, Andrew G; Frankham, Richard; Mitchell, Nicki J; Miller, Kim A; Byrne, Margaret; Coates, David J; Eldridge, Mark D B; Sunnucks, Paul; Breed, Martin F; James, Elizabeth A; Hoffmann, Ary A
2011-01-01
Translocations are being increasingly proposed as a way of conserving biodiversity, particularly in the management of threatened and keystone species, with the aims of maintaining biodiversity and ecosystem function under the combined pressures of habitat fragmentation and climate change. Evolutionary genetic considerations should be an important part of translocation strategies, but there is often confusion about concepts and goals. Here, we provide a classification of translocations based on specific genetic goals for both threatened species and ecological restoration, separating targets based on ‘genetic rescue’ of current population fitness from those focused on maintaining adaptive potential. We then provide a framework for assessing the genetic benefits and risks associated with translocations and provide guidelines for managers focused on conserving biodiversity and evolutionary processes. Case studies are developed to illustrate the framework. PMID:22287981
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
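The committee decision phase, in which individual agent decisions are combined, can be illustrated with a small sketch. This is a hedged reconstruction, not the paper's mechanism: each agent reports a confidence weight and class posteriors, and the committee returns the class with the highest confidence-weighted average posterior. The class labels and weights below are made up for illustration.

```python
def committee_decision(agent_votes):
    """Combine per-agent class posteriors by confidence-weighted averaging.

    agent_votes: list of (weight, {class: posterior}) pairs.
    """
    classes = agent_votes[0][1].keys()
    totals = {c: 0.0 for c in classes}
    weight_sum = 0.0
    for weight, posterior in agent_votes:
        weight_sum += weight
        for c, p in posterior.items():
            totals[c] += weight * p
    # Normalizing by weight_sum does not change the argmax, but yields
    # an interpretable averaged posterior if needed.
    return max(totals, key=lambda c: totals[c] / weight_sum)

votes = [
    (0.9, {"vehicle": 0.7, "human": 0.3}),
    (0.5, {"vehicle": 0.4, "human": 0.6}),
    (0.8, {"vehicle": 0.6, "human": 0.4}),
]
print(committee_decision(votes))  # "vehicle"
```

In the paper, the weights themselves would come from the auction phase that allocates the classification task among resource-constrained sensor agents.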
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
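The parameter-tuning procedure described above amounts to a search over CNN hyperparameter settings. The sketch below illustrates only that search structure; `evaluate` is a toy surrogate standing in for training and validating the network, deliberately peaked near the paper's reported best settings (the kernel size of 5 is an assumption, since the actual value is elided in the abstract).

```python
import itertools

def evaluate(kernel_size, learning_rate, batch_size):
    """Stand-in for training/evaluating the CNN; returns a mock score.

    In the paper this step would train the hybrid LeNet/AlexNet model on
    the LIDC-derived nodule dataset and report validation accuracy.
    """
    # Toy surrogate peaked at the (assumed) best settings.
    return (1.0
            - abs(kernel_size - 5) * 0.02
            - abs(learning_rate - 0.005) * 10
            - abs(batch_size - 32) * 0.001)

grid = itertools.product([3, 5, 7],             # kernel sizes
                         [0.001, 0.005, 0.01],  # learning rates
                         [16, 32, 64])          # batch sizes
best = max(grid, key=lambda cfg: evaluate(*cfg))
print(best)
```

A real sweep would also cover dropout rates and weight initializations, which the abstract reports as influential.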
Pollettini, Juliana T; Panico, Sylvia R G; Daneluzzi, Julio C; Tinós, Renato; Baranauskas, José A; Macedo, Alessandra A
2012-12-01
Surveillance Levels (SLs) are categories for medical patients (used in Brazil) that represent different types of medical recommendations. SLs are defined according to risk factors and the medical and developmental history of patients. Each SL is associated with specific educational and clinical measures. The objective of the present paper was to verify computer-aided, automatic assignment of SLs. The present paper proposes a computer-aided approach for automatic recommendation of SLs. The approach is based on the classification of information from patient electronic records. For this purpose, a software architecture composed of three layers was developed. The architecture is formed by a classification layer that includes a linguistic module and machine learning classification modules. The classification layer allows for the use of different classification methods, including the use of preprocessed, normalized language data drawn from the linguistic module. We report the verification and validation of the software architecture in a Brazilian pediatric healthcare institution. The results indicate that selection of attributes can have a great effect on the performance of the system. Nonetheless, our automatic recommendation of surveillance level can still benefit from improvements in processing procedures when the linguistic module is applied prior to classification. Results from our efforts can be applied to different types of medical systems. The results of systems supported by the framework presented in this paper may be used by healthcare and governmental institutions to improve healthcare services in terms of establishing preventive measures and alerting authorities about the possibility of an epidemic.
NASA Astrophysics Data System (ADS)
Esteban, Pere; Beck, Christoph; Philipp, Andreas
2010-05-01
Using data on accidents or damage caused by snow avalanches over the eastern Pyrenees (Andorra and Catalonia), several atmospheric circulation type catalogues have been obtained. For this purpose, different circulation type classification methods based on Principal Component Analysis (T-mode and S-mode using the extreme scores) and on optimization procedures (Improved K-means and SANDRA) were applied. Considering the characteristics of the phenomena studied, not only single-day circulation patterns were taken into account but also sequences of circulation types of varying length. Thus different classifications, with different numbers of types and for different sequence lengths, were obtained using the different classification methods. Between-type variability, within-type variability, and outlier detection procedures were applied to select the best result for snow avalanche type classifications. Furthermore, days without hazard occurrence were also related to the avalanche centroids using pattern correlations, facilitating the calculation of anomalies between hazardous and non-hazardous days, as well as frequencies of occurrence of hazardous events for each circulation type. Finally, the catalogues considered statistically best are evaluated using the avalanche forecasters' expert knowledge. A consistent explanation of snow avalanche occurrence by means of circulation sequences is obtained, but always considering results from classifications with different sequence lengths. This work has been developed in the framework of the COST Action 733 (Harmonisation and Applications of Weather Type Classifications for European regions).
NASA Astrophysics Data System (ADS)
Keefer, Matthew L.; Peery, Christopher A.; Wright, Nancy; Daigle, William R.; Caudill, Christopher C.; Clabough, Tami S.; Griffith, David W.; Zacharias, Mark A.
2008-06-01
A common first step in conservation planning and resource management is to identify and classify habitat types, and this has led to a proliferation of habitat classification systems. Ideally, classifications should be scientifically and conceptually rigorous, with broad applicability across spatial and temporal scales. Successful systems will also be flexible and adaptable, with a framework and supporting lexicon accessible to users from a variety of disciplines and locations. A new, continental-scale classification system for coastal and marine habitats—the Coastal and Marine Ecological Classification Standard (CMECS)—is currently being developed for North America by NatureServe and the National Oceanic and Atmospheric Administration (NOAA). CMECS is a nested, hierarchical framework that applies a uniform set of rules and terminology across multiple habitat scales using a combination of oceanographic (e.g. salinity, temperature), physiographic (e.g. depth, substratum), and biological (e.g. community type) criteria. Estuaries are arguably the most difficult marine environments to classify due to large spatio-temporal variability resulting in rapidly shifting benthic and water column conditions. We simultaneously collected data at eleven subtidal sites in the Columbia River Estuary (CRE) in fall 2004 to evaluate whether the estuarine component of CMECS could adequately classify habitats across several scales for representative sites within the estuary spanning a range of conditions. Using outputs from an acoustic Doppler current profiler (ADCP), CTD (conductivity, temperature, depth) sensor, and PONAR (benthic dredge) we concluded that the CMECS hierarchy provided a spatially explicit framework in which to integrate multiple parameters to define macro-habitats at the 100 m² to >1000 m² scales, or across several tiers of the CMECS system.
The classification's strengths lie in its nested, hierarchical structure and in the development of a standardized, yet flexible classification lexicon. The application of the CMECS to other estuaries in North America should therefore identify similar habitat types at similar scales as we identified in the CRE. We also suggest that the CMECS could be improved by refining classification thresholds to better reflect ecological processes, by direct integration of temporal variability, and by more explicitly linking physical and biological processes with habitat patterns.
Applicability of Hydrologic Landscapes for Model Calibration ...
The Pacific Northwest Hydrologic Landscapes (PNW HL) at the assessment unit scale has provided a solid conceptual classification framework to relate and transfer hydrologically meaningful information between watersheds without access to streamflow time series. A collection of techniques were applied to the HL assessment unit composition in watersheds across the Pacific Northwest to aggregate the hydrologic behavior of the Hydrologic Landscapes from the assessment unit scale to the watershed scale. This non-trivial solution both emphasizes HL classifications within the watershed that provide the majority of moisture surplus/deficit and considers the relative position (upstream vs. downstream) of these HL classifications. A clustering algorithm was applied to the HL-based characterization of assessment units within 185 watersheds to help organize watersheds into nine classes hypothesized to have similar hydrologic behavior. The HL-based classes were used to organize and describe hydrologic behavior information about watershed classes and both predictions and validations were independently performed with regard to the general magnitude of six hydroclimatic signature values. A second cluster analysis was then performed using the independently calculated signature values as similarity metrics, and it was found that the six signature clusters showed substantial overlap in watershed class membership to those in the HL-based classes. One hypothesis set forward from thi
Ecosystem Services Linking People to Coastal Habitats ...
Background/Question/Methods: There is a growing need to incorporate and prioritize ecosystem services/condition information into land-use decision making. While there are a number of place-based studies looking at how land-use decisions affect the availability and delivery of coastal services, many of these methods require data, funding and/or expertise that may be inaccessible to many coastal communities. Using existing classification standards for beneficiaries and coastal habitats, (i.e., Final Ecosystem Goods and Services Classification System (FEGS-CS) and Coastal and Marine Ecological Classification Standard (CMECS)), a comprehensive literature review was coupled with a “weight of evidence” approach to evaluate linkages between beneficiaries and coastal habitat features most relevant to community needs. An initial search of peer-reviewed journal articles was conducted using JSTOR and ScienceDirect repositories identifying sources that provide evidence for coastal beneficiary:habitat linkages. Potential sources were further refined based on a double-blind review of titles, abstracts, and full-texts, when needed. Articles in the final list were then scored based on habitat/beneficiary specificity and data quality (e.g., indirect evidence from literature reviews was scored lower than direct evidence from case studies with valuation results). Scores were then incorporated into a weight of evidence framework summarizing the support for each benefici
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm, and a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR), and EVL = TPR × (1 − FPR) were 0.82, 0.78, 0.14, and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
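The EM-based sample space partition can be illustrated with a minimal one-dimensional two-component Gaussian mixture, fitted with vanilla EM in pure Python. This is a didactic sketch, not the G-FSVM pipeline: in the paper EM partitions high-dimensional feature vectors into groups before per-group fuzzy SVMs are trained.

```python
import math, random

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1D Gaussian mixture with vanilla EM; return means."""
    mu = [min(xs), max(xs)]          # spread-out initialization
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            d = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2)) / sigma[k]
                 for k in range(2)]
            s = sum(d)
            resp.append([v / s for v in d])
        # M-step: update mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return mu

random.seed(0)
data = ([random.gauss(0, 0.5) for _ in range(100)] +
        [random.gauss(5, 0.5) for _ in range(100)])
means = sorted(em_two_gaussians(data))
print(means)  # approximately [0, 5]
```

Each point's group assignment (the argmax responsibility) would then route it to the corresponding group-specific classifier.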
Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.
Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K
2004-07-01
Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make some advances toward the final goal to solve these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework by using principal video shots to enhance the quality of features; 2) semantic video concept interpretation by using flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework by integrating feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique through a certain domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.
Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan
2018-06-01
Important tools in the advancement of cognitive science are quantitative models that represent different cognitive variables in terms of model parameters. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
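Bayes factors for a covariate effect can be computed analytically in the simplest conjugate case via the Savage-Dickey density ratio, comparing posterior to prior density at the null value. The sketch below assumes a normal prior on a regression weight beta and a known noise standard deviation; it is a minimal illustration of the idea, not the authors' framework.

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-(x - mean) ** 2 / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def savage_dickey_bf01(prior_sd, data, noise_sd):
    """BF01 for H0: beta = 0 under a conjugate normal model.

    The posterior of beta is N(post_mean, post_sd); the Savage-Dickey
    ratio is posterior density over prior density, both at beta = 0.
    """
    n = len(data)
    post_var = 1.0 / (1.0 / prior_sd ** 2 + n / noise_sd ** 2)
    post_mean = post_var * sum(data) / noise_sd ** 2
    return normal_pdf(0.0, post_mean, math.sqrt(post_var)) / normal_pdf(0.0, 0.0, prior_sd)

# Observations far from zero -> strong evidence against H0 (BF01 << 1)
print(savage_dickey_bf01(prior_sd=1.0, data=[2.1, 1.9, 2.0, 2.2], noise_sd=0.5))
```

BF01 greater than 1 favors the null (no covariate effect); values well below 1 favor a nonzero effect.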
Video Games: Instructional Potential and Classification.
ERIC Educational Resources Information Center
Nawrocki, Leon H.; Winner, Janet L.
1983-01-01
Intended to provide a framework and impetus for future investigations of video games, this paper summarizes activities investigating the instructional use of such games, observations by the authors, and a proposed classification scheme and a paradigm to assist in the preliminary selection of instructional video games. Nine references are listed.…
A Framework for Automated Marmoset Vocalization Detection And Classification
2016-09-08
…recent push to automate vocalization monitoring in a range of mammals. Such efforts have been used to classify bird songs [11], African elephant (Loxodonta africana) vocalizations [12], and killer whale calls [13].
Use of Classification Agreement Analyses to Evaluate RTI Implementation
ERIC Educational Resources Information Center
VanDerHeyden, Amanda
2010-01-01
RTI as a framework for decision making has implications for the diagnosis of specific learning disabilities. Any diagnostic tool must meet certain standards to demonstrate that its use leads to predictable decisions with minimal risk. Classification agreement analyses are described as optimal for demonstrating the technical adequacy of RTI…
Validity: Applying Current Concepts and Standards to Gynecologic Surgery Performance Assessments
ERIC Educational Resources Information Center
LeClaire, Edgar L.; Nihira, Mikio A.; Hardré, Patricia L.
2015-01-01
Validity is critical for meaningful assessment of surgical competency. According to the Standards for Educational and Psychological Testing, validation involves the integration of data from well-defined classifications of evidence. In the authoritative framework, data from all classifications support construct validity claims. The two aims of this…
Sobre prestamos y clasificaciones linguisticas (Regarding Borrowing and Linguistic Classification).
ERIC Educational Resources Information Center
Key, Mary Ritchie
1988-01-01
This article explores the traditionally accepted etymologies of several lexical borrowings in the indigenous languages of the Americas within the framework of comparative linguistics and linguistic classification. The first section presents a general discussion of the problem of tracing lexical borrowings in this context. The section features a…
A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification
2016-07-01
…financial predictions, etc., and is finding growing use in text mining studies. In this paper, we present an efficient algorithm for classification of high…video data, sets of images, hyperspectral data, medical data, text data, etc. Moreover, the framework provides a way to analyze data whose different…also be incorporated. For text classification, one can use tf-idf (term frequency-inverse document frequency) to form feature vectors for each document
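The tf-idf weighting mentioned in the fragment above is standard and easy to sketch: term frequency within a document multiplied by the log inverse document frequency across the corpus. The example corpus and whitespace tokenization below are illustrative only.

```python
import math

def tfidf(docs):
    """tf-idf vectors for a tiny corpus: tf = count/len, idf = ln(N/df)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = {}
    for toks in tokenized:
        for term in set(toks):            # document frequency counts each doc once
            df[term] = df.get(term, 0) + 1
    vectors = []
    for toks in tokenized:
        tf = {t: toks.count(t) / len(toks) for t in set(toks)}
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

docs = ["stream channel classification",
        "channel form and process",
        "deep learning classification"]
vecs = tfidf(docs)
# "stream" is rarer across the corpus than "channel", so it scores higher
print(vecs[0]["stream"] > vecs[0]["channel"])
```

These sparse vectors would then feed a downstream classifier such as the graph-based MBO scheme the report describes.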
A developmental and genetic classification for midbrain-hindbrain malformations
Millen, Kathleen J.; Dobyns, William B.
2009-01-01
Advances in neuroimaging, developmental biology and molecular genetics have increased the understanding of developmental disorders affecting the midbrain and hindbrain, both as isolated anomalies and as part of larger malformation syndromes. However, the understanding of these malformations and their relationships with other malformations, within the central nervous system and in the rest of the body, remains limited. A new classification system is proposed, based wherever possible, upon embryology and genetics. Proposed categories include: (i) malformations secondary to early anteroposterior and dorsoventral patterning defects, or to misspecification of mid-hindbrain germinal zones; (ii) malformations associated with later generalized developmental disorders that significantly affect the brainstem and cerebellum (and have a pathogenesis that is at least partly understood); (iii) localized brain malformations that significantly affect the brain stem and cerebellum (pathogenesis partly or largely understood, includes local proliferation, cell specification, migration and axonal guidance); and (iv) combined hypoplasia and atrophy of putative prenatal onset degenerative disorders. Pertinent embryology is discussed and the classification is justified. This classification will prove useful for both physicians who diagnose and treat patients with these disorders and for clinical scientists who wish to understand better the perturbations of developmental processes that produce them. Importantly, both the classification and its framework remain flexible enough to be easily modified when new embryologic processes are described or new malformations discovered. PMID:19933510
Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment
Mukherjee, Rashmi; Manohar, Dhiraj Dhane; Das, Dev Kumar; Achar, Arun; Mitra, Analava; Chakraborty, Chandan
2014-01-01
The aim of this paper was to develop a computer-assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images, captured with a standard digital camera, were first transformed into HSI (hue, saturation, and intensity) color space, and the “S” component of the HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence-based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely, Bayesian classification and support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated against ground truth images labeled by clinical experts. It was observed that SVM with a 3rd-order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53%, for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with the highest kappa statistic value (0.793). PMID:25114925
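The RGB-to-HSI step is straightforward for the saturation channel selected by the authors: S = 1 − 3·min(R, G, B)/(R + G + B) for RGB values in [0, 1]. A minimal sketch of that single component (not the full segmentation pipeline):

```python
def hsi_saturation(r, g, b):
    """Saturation component of HSI for one RGB pixel with channels in [0, 1]."""
    total = r + g + b
    if total == 0:
        return 0.0          # black pixel: saturation defined as 0
    return 1.0 - 3.0 * min(r, g, b) / total

# A saturated red pixel yields high S; a grey pixel yields S = 0
print(hsi_saturation(0.9, 0.1, 0.1))  # ~0.727
print(hsi_saturation(0.5, 0.5, 0.5))  # 0.0
```

Applying this per pixel yields the S-channel image on which the fuzzy divergence thresholding operates.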
Foley, Margaret M; Glenn, Regina M; Meli, Peggy L; Scichilone, Rita A
2009-01-01
Health information management (HIM) professionals' involvement with disease classification and nomenclature in the United States can be traced back to the early 20th century. In 1914, Grace Whiting Myers, the founder of the association known today as the American Health Information Management Association (AHIMA), served on the Committee on Uniform Nomenclature, which developed a disease classification system based upon etiological groupings. The profession's expertise and leadership in the collection, classification, and reporting of health data has continued since then. For example, in the early 1960s, another HIM professional (a medical record librarian) served as the associate editor of the fifth edition of the Standard Nomenclature of Disease (SNDO), a forerunner of the widely used clinical terminology, Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT). During the same period in history, the medical record professionals working in hospitals throughout the country were responsible for manually collecting and reporting disease and procedure information from medical records using SNDO.1 Because coded data have played a pivotal role in the ability to record and share health information through the years, creating the appropriate policy framework for the graceful evolution and harmonization of classification systems and clinical terminologies is essential. PMID:20169015
Elvrum, Ann-Kristin G; Andersen, Guro L; Himmelmann, Kate; Beckung, Eva; Öhrvall, Ann-Marie; Lydersen, Stian; Vik, Torstein
2016-01-01
The Bimanual Fine Motor Function (BFMF) is currently the principal classification of hand function recorded by the Surveillance of Cerebral Palsy in Europe (SCPE) register. The BFMF has been used in a number of epidemiological studies but has not yet been validated. The aim of this study was to examine aspects of construct and content validity of the BFMF. Construct validity was assessed by comparison with the Manual Ability Classification System (MACS) using register-based data from 539 children born 1999-2003 (304 boys; 4-12 years). The high correlation with the MACS (Spearman's rho = 0.89, CI: 0.86-0.91, p < .001) supports the construct validity of the BFMF. The content of the BFMF was appraised through a literature review and by using the ICF-CY as a framework to compare the BFMF and the MACS. The items hold, grasp, and manipulate were found to be relevant for describing increasingly advanced fine motor abilities in children with CP, but the description of the BFMF does not state whether it is a classification of fine motor capacity or performance. Our results suggest that the BFMF may provide information complementary to the MACS regarding fine motor function and actual use of the hands, particularly if used as a classification of fine motor capacity.
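The construct-validity check above rests on Spearman's rank correlation between two ordinal classification scales. A minimal sketch of that computation, in pure Python, is shown below; the ten paired level assignments (coded 1-5, as in BFMF/MACS levels I-V) are invented for illustration, whereas the abstract reports rho = 0.89 on 539 register-based children.

```python
# Sketch: Spearman's rho between two ordinal scales (levels coded 1-5).
# Fractional ranking handles ties, which are common with 5-level scales.

def rank(values):
    """Fractional ranks (average rank assigned to tied values), 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # average position within the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed series."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

bfmf = [1, 1, 2, 2, 3, 3, 4, 5, 5, 4]   # hypothetical BFMF levels
macs = [1, 2, 2, 2, 3, 4, 4, 5, 5, 3]   # hypothetical MACS levels
rho = spearman_rho(bfmf, macs)          # high rho -> the scales rank children similarly
```

A high rho indicates that the two classifications order the same children similarly, which is the evidence of construct validity the abstract describes.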
A framework for quantification of groundwater dynamics - concepts and hydro(geo-)logical metrics
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Heudorfer, Benedikt; Stahl, Kerstin; Barthel, Roland
2017-04-01
Fluctuation patterns in groundwater hydrographs are generally assumed to contain information on aquifer characteristics, climate, and environmental controls. However, attempts to disentangle this information and map the dominant controls have been few, owing to the substantial heterogeneity and complexity of groundwater systems, which is reflected in the abundance of morphologies of groundwater time series. To describe the structure and shape of hydrographs, descriptive terms like "slow"/"fast" or "flashy"/"inert" are frequently used, but these are subjective, irreproducible, and limited. This lack of objective and refined concepts limits approaches to the regionalization of hydrogeological characteristics, as well as our understanding of the dominant processes controlling groundwater dynamics. Therefore, we propose a novel framework for groundwater hydrograph characterization in an attempt to categorize morphologies explicitly and quantitatively, based on perceptual concepts of aspects of the dynamics. This quantitative framework is inspired by the existing, operational eco-hydrological classification frameworks for streamflow. The need for a new framework for groundwater systems is justified by the fundamental differences between the state variable groundwater head and the flow variable streamflow. Conceptually, we extracted exemplars of specific dynamic patterns, attributing descriptive terms as a means of systematisation. Metrics, primarily taken from the streamflow literature, were subsequently adapted to groundwater and assigned to the described patterns as a means of quantification. In this study, we focused on the particularities of groundwater as a state variable. Furthermore, we investigated the descriptive skill of individual metrics as well as their usefulness for groundwater hydrographs. The ensemble of categorized metrics results in a framework that can be used to describe and quantify groundwater dynamics.
It is a promising tool for setting up a similarity-based classification framework for groundwater hydrographs. However, the overabundance of available metrics calls for a systematic redundancy analysis, which we describe in a second study (Heudorfer et al., 2017). Heudorfer, B., Haaf, E., Barthel, R., Stahl, K., 2017. A framework for quantification of groundwater dynamics - redundancy and transferability of hydro(geo-)logical metrics. EGU General Assembly 2017, Vienna, Austria.
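The kind of metrics the framework catalogues can be sketched as follows. This is an illustration only: the flashiness index (sum of absolute step-to-step changes divided by the sum of levels) is a Richards-Baker-style measure adapted from the streamflow literature, in the spirit the abstract describes, and the two synthetic head series ("inert" vs. "flashy") are invented for the example.

```python
# Sketch: two simple hydrograph metrics that turn descriptive terms like
# "flashy"/"inert" into numbers, applied to synthetic groundwater-head series.
import math

def flashiness(heads):
    """Richards-Baker-style flashiness: sum of |day-to-day change| over sum of levels."""
    changes = sum(abs(b - a) for a, b in zip(heads, heads[1:]))
    return changes / sum(heads)

def amplitude(heads):
    """Overall range of the hydrograph, a coarse measure of dynamics."""
    return max(heads) - min(heads)

# Synthetic daily head series [m above datum], one year each:
# a smooth seasonal ("inert") hydrograph vs. a spiky ("flashy") one.
inert  = [10.0 + 0.5 * math.sin(2 * math.pi * t / 365) for t in range(365)]
flashy = [10.0 + (1.0 if t % 30 < 3 else -0.2) for t in range(365)]
```

The flashiness of the spiky series exceeds that of the seasonal one by more than an order of magnitude, even though their amplitudes are comparable, which is exactly the kind of discrimination between patterns that a metric-based framework makes reproducible.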