Sample records for agent-based classification

  1. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    Wireless sensor networks (WSNs) are autonomous networks that are frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of a heterogeneous agent architecture for WSNs in this paper. The proposed agent architecture views a WSN as a multi-agent system and employs mobile agents to reduce in-network communication. Based on this architecture, an energy-based acoustic localization algorithm is proposed. In localization, an estimate of the target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by a distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce the communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show that the proposed agent architecture remarkably facilitates WSN design and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
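    The energy-based steepest-descent localization step can be sketched as follows. The inverse-square acoustic model, the sensor layout, the numerical gradient, and the improvement-based termination rule are illustrative assumptions, not the paper's exact algorithm:

    ```python
    import math

    # Illustrative sensor layout and a synthetic target (assumed, not the paper's data).
    SENSORS = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
    SOURCE_ENERGY = 100.0          # source energy assumed known for this sketch
    TARGET = (2.0, 3.0)

    def model_energy(pos, sensor):
        """Idealized acoustic model: energy decays with squared distance."""
        d2 = (pos[0] - sensor[0]) ** 2 + (pos[1] - sensor[1]) ** 2
        return SOURCE_ENERGY / max(d2, 1e-9)

    MEASURED = [model_energy(TARGET, s) for s in SENSORS]  # noise-free "measurements"

    def loss(pos):
        """Sum of squared residuals between measured and modeled energies."""
        return sum((m - model_energy(pos, s)) ** 2 for m, s in zip(MEASURED, SENSORS))

    def localize(start, tol=1e-12, max_iter=500):
        """Steepest descent with numerical gradients and backtracking steps;
        terminates adaptively once the loss improvement falls below `tol`."""
        x, y = start
        h = 1e-6
        for _ in range(max_iter):
            f = loss((x, y))
            gx = (loss((x + h, y)) - loss((x - h, y))) / (2 * h)
            gy = (loss((x, y + h)) - loss((x, y - h))) / (2 * h)
            t = 1.0
            while t > 1e-12 and loss((x - t * gx, y - t * gy)) >= f:
                t *= 0.5                   # backtrack until the step decreases the loss
            new_f = loss((x - t * gx, y - t * gy))
            if f - new_f < tol:
                break
            x, y = x - t * gx, y - t * gy
        return x, y
    ```

    Starting from the sensor-field centroid, the search settles near the true target because the noise-free residuals vanish there.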

  2. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method, based on Bayesian analysis, for classifying time series data in the international emissions trading market generated by agent-based simulation, and compares it with a Discrete Fourier Transform (DFT) based analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in a mapped space, which is easier to understand and reason about than the raw time series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian analytical method can discriminate a 1% difference in the agents' emission reduction targets.
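    One reading of the DFT-based "mapping" idea is to transform each series into its magnitude spectrum and compare series by Euclidean distance in that space. The sketch below uses a naive O(n²) DFT; the function names and data are illustrative, not the paper's:

    ```python
    import cmath
    import math

    def dft_magnitudes(series):
        """Naive discrete Fourier transform; returns the magnitude spectrum."""
        n = len(series)
        return [abs(sum(series[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n)]

    def spectral_distance(a, b):
        """Euclidean distance between two magnitude spectra (a 'mapped' distance)."""
        fa, fb = dft_magnitudes(a), dft_magnitudes(b)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))
    ```

    Identical price series map to distance zero, while series with different spectral content are separated.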

  3. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  4. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
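    The two negotiation phases above can be sketched as an auction followed by a confidence-weighted committee vote. The bid rule (confidence per unit resource cost), the agent records, and the weighting are illustrative assumptions rather than the paper's mechanism:

    ```python
    def auction_allocate(agents, k=2):
        """Phase 1: allocate the classification task to the k agents whose
        bid (confidence per unit resource cost) is highest."""
        ranked = sorted(agents, key=lambda a: a["confidence"] / a["cost"], reverse=True)
        return ranked[:k]

    def committee_decide(winners):
        """Phase 2: combine individual decisions by confidence-weighted vote."""
        votes = {}
        for a in winners:
            votes[a["decision"]] = votes.get(a["decision"], 0.0) + a["confidence"]
        return max(votes, key=votes.get)

    # Hypothetical agents: each holds a classification decision, a confidence,
    # and a resource cost for performing the task.
    agents = [
        {"name": "A", "confidence": 0.9, "cost": 3.0, "decision": "car"},
        {"name": "B", "confidence": 0.8, "cost": 2.0, "decision": "car"},
        {"name": "C", "confidence": 0.5, "cost": 1.0, "decision": "truck"},
    ]
    winners = auction_allocate(agents, k=2)
    label = committee_decide(winners)
    ```

    Here the cheap agent C wins a slot despite lower confidence, but the committee weighting lets B's more confident decision prevail.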

  5. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.
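    The evolutionary programming loop can be sketched as elitist mutation-and-selection. Evolving a full SVM is out of scope here, so a linear decision rule on a tiny separable toy set stands in for the classifier and its fitness; the data, population size, and mutation scale are all assumptions:

    ```python
    import random

    random.seed(0)

    # Tiny linearly separable toy set (a stand-in for the nerve-agent sensor data).
    DATA = [((-2.0, 0.0), 0), ((-3.0, 1.0), 0), ((2.0, 0.0), 1), ((3.0, -1.0), 1)]

    def accuracy(w):
        """Fitness: accuracy of the linear rule sign(w0*x0 + w1*x1 + w2)."""
        hits = 0
        for (x0, x1), y in DATA:
            pred = 1 if w[0] * x0 + w[1] * x1 + w[2] > 0 else 0
            hits += (pred == y)
        return hits / len(DATA)

    def evolve(pop_size=20, generations=40, sigma=0.5):
        """Elitist evolutionary programming: Gaussian mutation of every parent,
        then keep the best pop_size individuals of parents plus children."""
        pop = [[random.gauss(0, 1) for _ in range(3)] for _ in range(pop_size)]
        history = []
        for _ in range(generations):
            children = [[g + random.gauss(0, sigma) for g in p] for p in pop]
            pop = sorted(pop + children, key=accuracy, reverse=True)[:pop_size]
            history.append(accuracy(pop[0]))
        return pop[0], history
    ```

    Because parents survive into the selection pool, the best fitness never decreases across generations.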

  6. 75 FR 7548 - Amendments to the Select Agents Controls in Export Control Classification Number (ECCN) 1C360 on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ...-91434-01] RIN 0694-AE67 Amendments to the Select Agents Controls in Export Control Classification Number... controls on certain select agents identified in Export Control Classification Number (ECCN) 1C360 on the...) list of select agents and toxins. The changes made by APHIS were part of a biennial review and...

  7. Adenosine monophosphate-activated protein kinase-based classification of diabetes pharmacotherapy

    PubMed Central

    Dutta, D; Kalra, S; Sharma, M

    2017-01-01

    The current classification of both diabetes and antidiabetes medication is complex, preventing a treating physician from choosing the most appropriate treatment for an individual patient, sometimes resulting in patient-drug mismatch. We propose a novel, simple systematic classification of drugs, based on their effect on adenosine monophosphate-activated protein kinase (AMPK). AMPK is the master regulator of energy metabolism, an energy sensor, activated when cellular energy levels are low, resulting in activation of catabolic processes and inactivation of anabolic processes, with a beneficial effect on glycemia in diabetes. This listing of drugs makes it easier for students and practitioners to analyze drug profiles and match them with patient requirements. It also facilitates the choice of rational combinations with complementary modes of action. Drugs are classified as stimulators, inhibitors, mixed action, possible action, and no action on AMPK activity. Metformin and glitazones are pure stimulators of AMPK. Incretin-based therapies have a mixed action on AMPK. Sulfonylureas either inhibit AMPK or have no effect on AMPK. The glycemic efficacy of alpha-glucosidase inhibitors, sodium glucose co-transporter-2 inhibitors, colesevelam, and bromocriptine may also involve AMPK activation, which warrants further evaluation. Berberine, salicylates, and resveratrol are newer promising agents in the management of diabetes, with well-documented evidence of AMPK-stimulation-mediated glycemic efficacy. Hence, AMPK-based classification of antidiabetes medications provides a holistic, unifying understanding of pharmacotherapy in diabetes. This classification is flexible, with scope for inclusion of promising agents of the future. PMID:27652986
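    The proposed scheme is, at heart, a lookup from drug class to AMPK activity category. The categories below are the ones the abstract names; encoding them as a table is merely a convenient illustration:

    ```python
    # AMPK activity classes per the abstract; the dictionary itself is just an
    # illustrative encoding of that classification.
    AMPK_CLASS = {
        "metformin": "stimulator",
        "glitazones": "stimulator",
        "incretin-based therapies": "mixed action",
        "sulfonylureas": "inhibitor or no action",
        "alpha-glucosidase inhibitors": "possible action",
        "sglt2 inhibitors": "possible action",
        "colesevelam": "possible action",
        "bromocriptine": "possible action",
    }

    def ampk_class(drug):
        """Return the AMPK activity class for a drug class, or 'unlisted'."""
        return AMPK_CLASS.get(drug.lower(), "unlisted")
    ```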

  8. Intelligent agent-based intrusion detection system using enhanced multiclass SVM.

    PubMed

    Ganapathy, S; Yogesh, P; Kannan, A

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems detect intruders only at the cost of a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set.
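    The weighted-distance outlier step can be illustrated as follows: compute each record's feature-weighted distance to the centroid and flag records beyond a mean-plus-k-standard-deviations threshold. This is one plausible reading of a "weighted distance" outlier detector, not the paper's exact algorithm:

    ```python
    import math

    def weighted_distance_outliers(points, weights, k=1.5):
        """Flag indices of points whose feature-weighted Euclidean distance to
        the centroid exceeds mean + k * standard deviation (illustrative rule)."""
        dim = len(points[0])
        centroid = [sum(p[i] for p in points) / len(points) for i in range(dim)]
        dists = [math.sqrt(sum(weights[i] * (p[i] - centroid[i]) ** 2
                               for i in range(dim)))
                 for p in points]
        mu = sum(dists) / len(dists)
        sd = math.sqrt(sum((d - mu) ** 2 for d in dists) / len(dists))
        return [i for i, d in enumerate(dists) if d > mu + k * sd]
    ```

    In an IDS pipeline such a filter would run before the multiclass classifier, discarding records that fit no known traffic profile.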

  9. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    PubMed Central

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems detect intruders only at the cost of a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036

  10. Patterns of Use of an Agent-Based Model and a System Dynamics Model: The Application of Patterns of Use and the Impacts on Learning Outcomes

    ERIC Educational Resources Information Center

    Thompson, Kate; Reimann, Peter

    2010-01-01

    A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…

  11. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have heightened awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents, using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing, and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel, meaning the number of false negative errors was reduced by 173% while making no false positive errors, when compared to the AromaScan baseline performance; (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair. For the most difficult classification task, the Parathion versus
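    The ROC Az index reported above is the area under the ROC curve, which can be computed directly from classifier scores via the rank-sum (Mann-Whitney) statistic: the probability that a randomly chosen positive outscores a randomly chosen negative. A minimal sketch with made-up scores:

    ```python
    def roc_auc(scores, labels):
        """Area under the ROC curve (Az) via the Mann-Whitney statistic;
        ties between a positive and a negative count as half a win."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    A perfectly separating classifier scores Az = 1.0; chance-level scoring gives 0.5.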

  12. Mass classification in mammography with multi-agent based fusion of human and machine intelligence

    NASA Astrophysics Data System (ADS)

    Xi, Dongdong; Fan, Ming; Li, Lihua; Zhang, Juan; Shan, Yanna; Dai, Gang; Zheng, Bin

    2016-03-01

    Although a computer-aided diagnosis (CAD) system can be applied to classify breast masses, the effect of this method on improving radiologists' accuracy in distinguishing malignant from benign lesions remains unclear. This study provides a novel method to classify breast masses by integrating human and machine intelligence. In this research, 224 breast masses with Breast Imaging Reporting and Data System (BI-RADS) categories were selected from mammography in the DDSM database. Three observers (a senior and a junior radiologist, as well as a radiology resident) independently read and classified these masses utilizing the Positive Predictive Value (PPV) for each BI-RADS category. Meanwhile, a CAD system was also implemented to classify these breast masses as malignant or benign. To combine the decisions from the radiologists and CAD, a multi-agent fusion method was developed. Significant improvements are observed for the fusion system over either the radiologists or CAD alone. The area under the receiver operating characteristic curve (AUC) of the fusion system increased by 9.6%, 10.3% and 21% compared to that of the senior, junior and resident radiologists, respectively. In addition, the AUCs of this method based on the fusion of each radiologist with CAD are 3.5%, 3.6% and 3.3% higher than that of CAD alone. Finally, the fusion of the three radiologists with CAD achieved an AUC of 0.957, 5.6% larger than that of CAD. Our results indicate that the proposed fusion method performs better than either the radiologists or CAD alone.
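    The human-machine fusion idea can be sketched as mapping each reader's BI-RADS category to a malignancy probability through a PPV table and averaging it with the CAD score. The PPV values and the equal-weight averaging below are hypothetical placeholders, not the paper's fitted parameters:

    ```python
    # Hypothetical PPV per BI-RADS category (illustrative values, not DDSM's).
    PPV = {2: 0.02, 3: 0.05, 4: 0.30, 5: 0.95}

    def fuse(birads_categories, cad_prob, cad_weight=0.5):
        """Fuse the readers' BI-RADS-derived malignancy probabilities with the
        CAD score by a weighted average (a simple stand-in for the paper's
        multi-agent fusion)."""
        reader_prob = sum(PPV[c] for c in birads_categories) / len(birads_categories)
        return (1 - cad_weight) * reader_prob + cad_weight * cad_prob
    ```

    With three readers assigning categories 4, 5 and 4 and a CAD probability of 0.8, the fused estimate lands between the reader consensus and the CAD score.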

  13. Agent Persuasion Mechanism of Acquaintance

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Owing to its initiative and autonomy, agent persuasion can improve negotiation efficiency in dynamic environments, and it is strongly affected by acquaintance. The classification of acquaintance in agent persuasion is illustrated, as is the agent persuasion model of acquaintance. The concept of the agent persuasion degree of acquaintance is then given. Finally, the related interaction mechanism is elaborated.

  14. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data while not placing a significantly added load on the monitoring systems and networks. This requires good data mining strategies that take less time and give accurate results. In this study, a novel data-mining-assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers and cause bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  15. A web-based land cover classification system based on ontology model of different classification systems

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Chen, X.

    2016-12-01

    Land cover classification systems used with remote sensing image data have been developed to meet the needs of depicting land cover in scientific investigations and policy decisions. However, accuracy assessments of numerous data sets demonstrate that, compared with the real landscape, the thematic map of each specific land cover classification system contains unavoidable flaws and unintended deviations. This work proposes a web-based land cover classification system, an integrated prototype, based on an ontology model of various classification systems, each of which is assigned the same weight in the final determination of land cover type. Ontology, a formal explication of specific concepts and their relations, is employed in this prototype to build connections among different systems and resolve naming conflicts. The process is initialized by measuring semantic similarity between terminologies in the systems and the search key to produce a set of matching classes, and proceeds by searching the predefined relations among concepts of all classification systems to generate classification maps with the user-specified land cover type highlighted, based on probabilities calculated from votes by data sets adopting different classification systems. The system is verified and validated by comparing its classification results with those of the most common systems. Owing to the full consideration and meaningful expression of each classification system through ontology, and the convenience the web brings with it, this system, as a preliminary model, offers a flexible and extensible architecture for classification system integration and data fusion, providing a strong foundation for future work.
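    The match-then-vote step can be sketched with a token-overlap (Jaccard) similarity as a crude stand-in for the prototype's semantic similarity measure; the class names and equal per-system weights are illustrative assumptions:

    ```python
    def jaccard(a, b):
        """Token-set similarity between two class names (stand-in for the
        prototype's semantic similarity measure)."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb)

    def best_match(key, class_names):
        """Pick the class in one classification system most similar to the key."""
        return max(class_names, key=lambda name: jaccard(key, name))

    def vote(key, systems):
        """Each system votes with its best-matching class; a land cover type's
        probability is its share of the equally weighted votes."""
        votes = [best_match(key, names) for names in systems]
        return {v: votes.count(v) / len(votes) for v in set(votes)}
    ```

    For the key "forest", each system contributes its nearest-named class, and the returned shares sum to one.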

  16. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  17. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
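    The blackboard-and-plugins communication pattern described above can be sketched with one queue-backed channel per plugin, echoing the CSP view of the blackboard and plugins as communicating processes. The class and method names are illustrative, not Cougaar's API:

    ```python
    import queue

    class Blackboard:
        """Minimal blackboard: one channel (queue) per subscribed plugin;
        publishing broadcasts an object onto every plugin's channel."""
        def __init__(self):
            self.channels = {}
        def subscribe(self, plugin_name):
            self.channels[plugin_name] = queue.Queue()
        def publish(self, obj):
            for ch in self.channels.values():   # broadcast to each plugin
                ch.put(obj)
        def receive(self, plugin_name):
            return self.channels[plugin_name].get_nowait()
    ```

    Each `subscribe`/`publish`/`receive` triple maps naturally onto a CSP channel event, which is what makes the architecture amenable to the formal validation the paper describes.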

  18. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  19. Online Learning for Classification of Alzheimer Disease based on Cortical Thickness and Hippocampal Shape Analysis.

    PubMed

    Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung

    2014-01-01

    Mobile healthcare applications are a growing trend, and the prevalence of dementia in modern society is also growing steadily. Among the degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients using magnetic resonance imaging in the mobile environment. We propose an incremental classification method for mobile healthcare systems. Our classification method is based on incremental learning for AD diagnosis and AD prediction using cortical thickness data and hippocampal shape. We constructed a classifier based on principal component analysis and linear discriminant analysis. We performed initial learning and mobile subject classification. Initial learning is the group learning part on our server. Our smartphone agent implements the mobile classification and shows various results. With cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In this paper, we presented a classification method based on online learning for AD diagnosis employing both cortical thickness data and hippocampal shape analysis data. Our method was implemented on smartphone devices and discriminated AD patients from the normal group.
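    The incremental-learning idea can be sketched with per-class running means updated one sample at a time and nearest-centroid prediction. This is a simplified stand-in for the paper's incremental PCA/LDA pipeline; the feature vectors below are invented:

    ```python
    class OnlineCentroidClassifier:
        """Incremental learner: per-class running mean vectors updated one
        sample at a time; a new subject is assigned to the nearest centroid."""
        def __init__(self):
            self.means = {}    # label -> running mean vector
            self.counts = {}
        def partial_fit(self, x, label):
            if label not in self.means:
                self.means[label] = list(x)
                self.counts[label] = 1
                return
            self.counts[label] += 1
            n = self.counts[label]
            m = self.means[label]
            for i, xi in enumerate(x):
                m[i] += (xi - m[i]) / n     # incremental mean update
        def predict(self, x):
            def dist2(label):
                return sum((a - b) ** 2 for a, b in zip(x, self.means[label]))
            return min(self.means, key=dist2)
    ```

    Because each update touches only the running means, the model fits on a server and then keeps learning on the phone without revisiting old data.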

  20. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    NASA Astrophysics Data System (ADS)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents toward composing statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. It recognizes the gap in spatially explicit accuracy-assessment techniques for complex spatial models, and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the Receiver Operating Characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions are proposed. The theoretical foundations for modular Bayesian inference in spatially explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario-assessment modular tools built for the MABEL model, are provided.
    It emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land

  1. Protein classification based on text document classification techniques.

    PubMed

    Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith

    2005-03-01

    The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) classifiers using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained accuracies of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors, with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
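    The n-gram Naive Bayes idea transfers directly to code: treat overlapping peptide substrings like document tokens and score classes by smoothed multinomial likelihoods. The toy sequences below are invented, and class priors are assumed equal:

    ```python
    import math
    from collections import Counter

    def ngrams(seq, n=2):
        """Overlapping length-n peptide substrings, analogous to document tokens."""
        return [seq[i:i + n] for i in range(len(seq) - n + 1)]

    class NgramNaiveBayes:
        """Multinomial Naive Bayes over n-gram counts with Laplace smoothing;
        class priors are taken as uniform in this sketch."""
        def fit(self, sequences, labels, n=2):
            self.n = n
            self.counts = {}            # label -> Counter of n-grams
            self.vocab = set()
            for seq, y in zip(sequences, labels):
                grams = ngrams(seq, n)
                self.counts.setdefault(y, Counter()).update(grams)
                self.vocab.update(grams)
            return self
        def predict(self, seq):
            def loglik(y):
                c = self.counts[y]
                total = sum(c.values()) + len(self.vocab)
                return sum(math.log((c[g] + 1) / total) for g in ngrams(seq, self.n))
            return max(self.counts, key=loglik)
    ```

    Chi-square feature selection, used in the paper to prune uninformative n-grams, would slot in between `ngrams` and `fit`.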

  2. Efficient Agent-Based Cluster Ensembles

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent based methods and are effective even when up to 50% of the agents fail.
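    The cluster ensemble problem described above can be illustrated with co-association, one standard consensus technique (used here as a sketch, not the paper's agent-specific utility method): link two points if a majority of clusterings place them together, then take connected components as the unified clustering:

    ```python
    from itertools import combinations

    def consensus_clusters(clusterings):
        """Combine clusterings via co-association: union-find over point pairs
        that share a cluster in a majority of the input clusterings."""
        n = len(clusterings[0])
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path compression
                i = parent[i]
            return i
        for i, j in combinations(range(n), 2):
            together = sum(c[i] == c[j] for c in clusterings)
            if together * 2 > len(clusterings):  # strict majority agree
                parent[find(i)] = find(j)
        groups = {}
        for i in range(n):
            groups.setdefault(find(i), []).append(i)
        return sorted(groups.values())
    ```

    Because the vote is pairwise, a corrupted clustering is simply outvoted, which mirrors the robustness goal stated in the abstract.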

  3. Cloud field classification based on textural features

    NASA Technical Reports Server (NTRS)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near-IR visible channel. The classification algorithm used is the well-known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes
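
As a rough illustration of the GLDV features, the sketch below computes the grey level difference histogram for a horizontal displacement d and derives mean, contrast, and entropy from it. These three are common GLDV statistics; the paper's exact feature set is not specified here, so treat the selection as an assumption.

```python
import math

def gldv_features(img, d=1):
    """Mean, contrast and entropy of the grey level difference vector
    for horizontal pixel displacement d (img is a 2-D list of grey levels)."""
    diffs = [abs(row[i] - row[i + d]) for row in img for i in range(len(row) - d)]
    hist = {}
    for v in diffs:
        hist[v] = hist.get(v, 0) + 1
    p = {v: c / len(diffs) for v, c in hist.items()}  # difference distribution
    mean = sum(v * pv for v, pv in p.items())
    contrast = sum(v * v * pv for v, pv in p.items())
    entropy = -sum(pv * math.log2(pv) for pv in p.values())
    return mean, contrast, entropy

# A striped patch: every horizontal neighbour differs by exactly one grey level,
# so the difference distribution is concentrated at 1 and its entropy is zero.
stripes = [[0, 1, 0, 1], [0, 1, 0, 1]]
mean, contrast, entropy = gldv_features(stripes)
```

Each scene's feature vector would then be fed to the discriminant analysis described in the abstract.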

  4. Intelligent Interoperable Agent Toolkit (I2AT)

    DTIC Science & Technology

    2005-02-01

    Keywords: Agents, Agent Infrastructure, Intelligent Agents. ...those that occur while the submarine is submerged. Using CoABS Grid/Jini service discovery events backed up with a small amount of internal bookkeeping

  5. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by

  6. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by the elastic net in

  7. Sentiment classification technology based on Markov logic networks

    NASA Astrophysics Data System (ADS)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is a growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which feature certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  8. CATS-based Agents That Err

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.

  9. Deep learning for EEG-Based preference classification

    NASA Astrophysics Data System (ADS)

    Teo, Jason; Hou, Chew Lin; Mountstephens, James

    2017-10-01

    Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively identify yet accurately correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically-induced stimuli except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches, which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly in a highly challenging dataset with large inter- and intra-subject variability.

  10. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high-level and low-level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence, and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
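
MYCIN's evidence-combination rule, which the abstract relies on, is compact enough to show directly. This is the standard certainty-factor formula, not code from the described system.

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors in [-1, 1].
    Same-sign evidence reinforces; mixed-sign evidence partially cancels."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent rules each support "news" with moderate confidence.
supporting = combine_cf(0.6, 0.5)        # reinforced toward 1
conflicting = combine_cf(0.8, -0.4)      # partially cancelled
```

For same-sign factors the rule is commutative and associative, so any number of firing rules can be folded in pairwise in any order.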

  11. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high-level and low-level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence, and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  12. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is to provide earth’s surface information such as landcover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms, such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, landcover classification can also be acquired by using object-based image classification, which uses image segmentation driven by parameters such as scale, form, colour, smoothness and compactness. This research aims to compare the landcover classification results and their change detection between the parallelepiped pixel-based and the object-based classification method. The study location is Bogor, with a 20-year observation range from 1996 to 2016. This region is an urban area that changes continuously due to rapid development, so its time-series landcover information is of particular interest.
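
A parallelepiped classifier of the kind compared here fits a per-band min/max box to each class's training pixels and assigns a pixel to the first box that contains it. The two-band training values and class names below are invented for illustration.

```python
def fit_parallelepiped(samples):
    """Per-band (min, max) box from training pixels, given as band tuples."""
    return [(min(band), max(band)) for band in zip(*samples)]

def classify(pixel, boxes):
    """Return the first class whose box contains the pixel, else None."""
    for name, box in boxes.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(pixel, box)):
            return name
    return None

# Hypothetical two-band (e.g. red, near-IR) training pixels per landcover class.
boxes = {
    "vegetation": fit_parallelepiped([(30, 80), (35, 90), (40, 85)]),
    "built-up": fit_parallelepiped([(120, 60), (130, 55), (125, 65)]),
}
```

Pixels outside every box stay unclassified, which is the classic weakness of the parallelepiped method relative to maximum likelihood.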

  13. EXTENDING AQUATIC CLASSIFICATION TO THE LANDSCAPE SCALE HYDROLOGY-BASED STRATEGIES

    EPA Science Inventory

    Aquatic classification of single water bodies (lakes, wetlands, estuaries) is often based on geologic origin, while stream classification has relied on multiple factors related to landform, geomorphology, and soils. We have developed an approach to aquatic classification based o...

  14. Evaluation of new antiemetic agents and definition of antineoplastic agent emetogenicity--an update.

    PubMed

    Grunberg, Steven M; Osoba, David; Hesketh, Paul J; Gralla, Richard J; Borjeson, Sussanne; Rapoport, Bernardo L; du Bois, Andreas; Tonato, Maurizio

    2005-02-01

    Development of effective antiemetic therapy depends upon an understanding of both the antiemetic agents and the emetogenic challenges these agents are designed to address. New potential antiemetic agents should be studied in an orderly manner, proceeding from phase I to phase II open-label trials and then to randomized double-blind phase III trials comparing new agents and regimens to best standard therapy. Use of placebos in place of antiemetic therapy against highly or moderately emetogenic chemotherapy is unacceptable. Nausea and vomiting should be evaluated separately and for both the acute and delayed periods. Defining the emetogenicity of new antineoplastic agents is a challenge, since such data are often not reliably recorded during early drug development. A four-level classification system is proposed for emetogenicity of intravenous antineoplastic agents. A separate four-level classification system for emetogenicity of oral antineoplastic agents, which are often given over an extended period of time, is also proposed.

  15. Structure-based classification and ontology in chemistry

    PubMed Central

    2012-01-01

    Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic

  16. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling, especially as the training set grows larger.

  17. Multi-label literature classification based on the Gene Ontology graph.

    PubMed

    Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua

    2008-12-08

    The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.

  18. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is thus quite important for USCM. It can be shown by straightforward computation that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on the small- and medium-scale datasets and the large-scale datasets, respectively.

  19. A review of supervised object-based land-cover image classification

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial

  20. A Micro-Level Data-Calibrated Agent-Based Model: The Synergy between Microsimulation and Agent-Based Modeling.

    PubMed

    Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee

    2018-01-01

    Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and prepare a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrates how ALife techniques can be used by researchers in relation to social issues and policies.

  1. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex presents remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighbor pixels, and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.
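
The LBP baseline the authors improve upon thresholds the 8 neighbours of each pixel at the centre value and packs the results into a byte. The sketch below also demonstrates the sensitivity the abstract criticizes: a one-grey-level disturbance in a single neighbour flips a bit of the code.

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern: threshold each neighbour at the
    centre pixel's value and pack the bits into a byte (img is a 2-D list)."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [1 if img[r + dr][c + dc] >= center else 0 for dr, dc in offsets]
    return sum(b << i for i, b in enumerate(bits))

flat = [[7, 7, 7], [7, 7, 7], [7, 7, 7]]
noisy = [[7, 7, 7], [7, 7, 7], [7, 7, 6]]  # one pixel disturbed by 1 grey level
```

On the flat patch every neighbour ties with the centre, giving code 255; the single disturbed pixel changes the code, which is exactly the fragility that motivates the orientation-selectivity descriptor.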

  2. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  3. Building a common pipeline for rule-based document classification.

    PubMed

    Patterson, Olga V; Ginter, Thomas; DuVall, Scott L

    2013-01-01

    Instance-based classification of clinical text is a widely used natural language processing task employed as a step for patient classification, document retrieval, or information extraction. Rule-based approaches rely on concept identification and context analysis in order to determine the appropriate class. We propose a five-step process that enables even small research teams to develop simple but powerful rule-based NLP systems by taking advantage of a common UIMA AS based pipeline for classification. Our proposed methodology coupled with the general-purpose solution provides researchers with access to the data locked in clinical text in cases of limited human resources and compact timelines.

  4. EMG finger movement classification based on ANFIS

    NASA Astrophysics Data System (ADS)

    Caesarendra, W.; Tjahjowidodo, T.; Nico, Y.; Wahyudati, S.; Nurhasanah, L.

    2018-04-01

    An increasing number of people suffering from stroke has driven the rapid development of finger hand exoskeletons that enable automatic physical therapy. Prior to the development of a finger exoskeleton, an important research topic, namely machine learning for finger gesture classification, is addressed. This paper presents a study on EMG signal classification of 5 finger gestures as a preliminary study toward finger exoskeleton design and development in Indonesia. The EMG signals of the 5 finger gestures were acquired using a Myo EMG sensor. The EMG signal features were extracted and reduced using PCA. ANFIS-based learning is used to classify the reduced features of the 5 finger gestures. The results show that the classification accuracy for the 5 finger gestures is lower than that for 7 hand gestures.
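
The abstract does not list the extracted features, so the sketch below is an assumption: it computes four time-domain features commonly used for EMG windows (mean absolute value, root mean square, waveform length, zero crossings) of the kind that would feed a PCA/ANFIS pipeline.

```python
import math

def emg_features(window):
    """Typical time-domain EMG features for one analysis window."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n                       # mean absolute value
    rms = math.sqrt(sum(x * x for x in window) / n)             # root mean square
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))      # waveform length
    zc = sum(1 for i in range(n - 1) if window[i] * window[i + 1] < 0)  # zero crossings
    return {"mav": mav, "rms": rms, "wl": wl, "zc": zc}

# A toy alternating signal: constant amplitude, maximal sign changes.
feats = emg_features([1.0, -1.0, 1.0, -1.0])
```

One such feature vector per channel and window would then be stacked, reduced with PCA, and passed to the ANFIS classifier.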

  5. Chinese Sentence Classification Based on Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional methods based on machine learning, such as the Naive Bayes model, cannot take high-level features into consideration. A neural network for sentence classification can make use of contextual information to achieve greater results in sentence classification tasks. In this paper, we focus on classifying Chinese sentences, and we propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, while most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to get a better result. We also use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
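
The margin-based loss that replaces softmax can be sketched as a one-vs-rest multiclass hinge loss on the network's output scores; the exact formulation used in the paper may differ.

```python
def multiclass_hinge(scores, true_idx, margin=1.0):
    """One-vs-rest margin loss: penalize every wrong class whose score
    comes within `margin` of the true class's score."""
    correct = scores[true_idx]
    return sum(max(0.0, margin - correct + s)
               for i, s in enumerate(scores) if i != true_idx)

# The true class (index 0) clears the margin against both rivals: zero loss.
confident = multiclass_hinge([2.0, 0.5, -1.0], true_idx=0)
# A rival outscores the true class: the loss grows with each violation.
violated = multiclass_hinge([0.5, 2.0, 0.0], true_idx=0)
```

Unlike cross-entropy, this loss is exactly zero once the margin is satisfied, so confident examples stop contributing gradient.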

  6. Classification of cloud fields based on textural characteristics

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1987-01-01

    The present study reexamines the applicability of texture-based features for automatic cloud classification using very high spatial resolution (57 m) Landsat multispectral scanner digital data. It is concluded that cloud classification can be accomplished using only a single visible channel.

  7. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
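The KNN voxel labelling at the core of the method can be sketched as follows; the two-dimensional features and toy training voxels are illustrative stand-ins for the paper's automatically selected multi-scale appearance features:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=3):
    """Label each query voxel by majority vote of its K nearest training voxels."""
    preds = []
    for q in query_feats:
        dists = np.linalg.norm(train_feats - q, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy appearance features, airway (1) vs. other lung tissue (0).
train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = np.array([1, 1, 0, 0])
preds = knn_classify(train, labels, np.array([[0.15, 0.15], [0.85, 0.85]]))
print(preds)  # [1 0]
```

In the paper the training labels come from imperfect region-growing segmentations, so the classifier only needs roughly correct supervision.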

  8. Ecology Based Decentralized Agent Management System

    NASA Technical Reports Server (NTRS)

    Peysakhov, Maxim D.; Cicirello, Vincent A.; Regli, William C.

    2004-01-01

    The problem of maintaining a desired number of mobile agents on a network is not trivial, especially if we want a completely decentralized solution. Decentralized control makes a system more robust and less susceptible to partial failures. The problem is exacerbated on wireless ad hoc networks where host mobility can result in significant changes in the network size and topology. In this paper we propose an ecology-inspired approach to the management of the number of agents. The approach associates agents with living organisms and tasks with food. Agents procreate or die based on the abundance of uncompleted tasks (food). We performed a series of experiments investigating properties of such systems and analyzed their stability under various conditions. We concluded that the ecology based metaphor can be successfully applied to the management of agent populations on wireless ad hoc networks.
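The procreate-or-die rule can be sketched as a toy simulation; the birth/death rates and food thresholds below are illustrative, not taken from the paper:

```python
import random

def step(agents, tasks, birth_rate=0.5, death_rate=0.2):
    """One ecology step: abundant tasks (food) trigger procreation, scarcity death."""
    food_per_agent = tasks / max(agents, 1)
    if food_per_agent > 1.0:        # surplus: some agents clone themselves
        agents += int(agents * birth_rate * random.random())
    elif food_per_agent < 0.5:      # famine: some agents die
        agents -= int(agents * death_rate)
    return max(agents, 1)           # the population never empties completely

random.seed(42)
agents = 10
for tasks in [50, 50, 5, 5, 5]:     # the task load collapses mid-run
    agents = step(agents, tasks)
    print(agents)                   # grows, then shrinks toward the food supply
```

Because each host can apply this rule using only locally observed task abundance, no central controller is needed, which is the point of the ecology metaphor.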

  9. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  10. Diverse Region-Based CNN for Hyperspectral Image Classification.

    PubMed

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2018-06-01

    Convolutional neural networks (CNNs) are of great interest in machine learning and have demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, which exploits diverse region-based inputs to learn contextual interaction features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method surpasses conventional deep-learning-based classifiers and other state-of-the-art classifiers.

  11. Classification-Based Spatial Error Concealment for Visual Communications

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Zheng, Yefeng; Wu, Min

    2006-12-01

    In an error-prone transmission environment, error concealment is an effective technique to reconstruct the damaged visual content. Due to large variations of image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.

  12. An incremental approach to genetic-algorithms-based classification.

    PubMed

    Guan, Sheng-Uei; Zhu, Fangming

    2005-04-01

    Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.

  13. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
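The residual-based decision rule of SRC can be sketched as follows; for brevity the sketch codes each test trial with per-class least squares rather than the l1-regularized coding used in SRC proper, and the class dictionaries are random toy data:

```python
import numpy as np

def src_classify(dictionaries, x):
    """Pick the class whose training dictionary reconstructs x with least residual.
    (SRC proper uses l1-regularized sparse coding; plain least squares keeps
    this demo short.)"""
    residuals = [np.linalg.norm(x - D @ np.linalg.lstsq(D, x, rcond=None)[0])
                 for D in dictionaries]
    return int(np.argmin(residuals))

rng = np.random.default_rng(1)
D0 = rng.normal(size=(8, 4))               # class-0 dictionary: 4 training atoms
D1 = rng.normal(size=(8, 4))               # class-1 dictionary
x = D1 @ np.array([1.0, -0.5, 0.2, 0.0])   # test trial lying in the class-1 span
pred = src_classify([D0, D1], x)
print(pred)  # 1
```

The adaptive schemes in the paper then append (supervised or self-labelled) test trials to the winning class dictionary, so no classifier re-training is required.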

  14. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  15. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land use and land cover (LULC) information is significant for observing and evaluating environmental change. LULC classification from remotely sensed data is widely employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification from high-resolution imagery. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the fused image based on principal component analysis (PCA), while the object-based classification separated land cover classes in the fused images using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. Object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. Thus, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.

  16. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.
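A minimal agent-based treatment of this kind, here a birth-death process with non-vanishing mutation on the Prisoner's Dilemma, can be sketched as follows; the payoff matrix is the textbook one and all other parameters are illustrative:

```python
import random

# Prisoner's Dilemma payoffs PAYOFF[my_move][opp_move]; move 0=defect, 1=cooperate.
PAYOFF = [[1, 5], [0, 3]]  # P=1, T=5, S=0, R=3

def play_round(population):
    """Each agent accumulates payoff against one randomly chosen opponent."""
    scores = [0.0] * len(population)
    for i, strat in enumerate(population):
        j = random.randrange(len(population))
        scores[i] += PAYOFF[strat][population[j]]
    return scores

def evolve(population, generations=200, mu=0.01):
    """Birth-death updating with non-vanishing mutation rate mu."""
    for _ in range(generations):
        scores = play_round(population)
        # Fitness-proportional reproduction; the child replaces a random agent.
        parent = random.choices(range(len(population)),
                                weights=[s + 1e-9 for s in scores])[0]
        child = population[parent] if random.random() > mu else random.randrange(2)
        population[random.randrange(len(population))] = child
    return population

random.seed(0)
final = evolve([1] * 25 + [0] * 25)   # start half cooperators, half defectors
print(sum(final), "cooperators of", len(final))
```

Unlike the replicator equation, this finite, stochastic population with mutation never settles into a deterministic fixed point, which is exactly the regime the abstract argues requires agent-based simulation.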

  17. Voice based gender classification using machine learning

    NASA Astrophysics Data System (ADS)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification, tracing gender from acoustic data such as pitch, median, and frequency, is one of the major problems in speech analysis today. Machine learning gives promising results for classification problems across research domains, and several performance metrics exist for evaluating algorithms in each area. We present a comparative model for evaluating five different machine learning algorithms on eight different metrics for gender classification from acoustic data: linear discriminant analysis (LDA), K-nearest neighbour (KNN), classification and regression trees (CART), random forest (RF), and support vector machine (SVM). The main criterion for evaluating any algorithm is its performance: in classification problems the misclassification rate must be low, i.e., the accuracy rate must be high. The location and gender of a person have also become crucial in economic markets, for example in AdSense. With this comparative model we assess the different ML algorithms and find the best fit for gender classification from acoustic data.
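As a toy stand-in for the five-classifier comparison, the sketch below separates synthetic "male" and "female" speakers by a nearest-centroid rule on two acoustic features; the feature distributions are invented for illustration:

```python
import numpy as np

# Synthetic acoustic features per speaker: [mean pitch in Hz, spectral median].
rng = np.random.default_rng(7)
male = np.column_stack([rng.normal(120, 15, 50), rng.normal(0.12, 0.02, 50)])
female = np.column_stack([rng.normal(210, 20, 50), rng.normal(0.17, 0.02, 50)])
X = np.vstack([male, female])
y = np.array([0] * 50 + [1] * 50)   # 0 = male, 1 = female

# Nearest-centroid classifier: a minimal stand-in for LDA/KNN/CART/RF/SVM.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
preds = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because pitch separates the two synthetic groups by several standard deviations, even this trivial classifier scores highly; the paper's point is comparing stronger learners on the same kind of feature table under multiple metrics.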

  18. Agent Based Modeling Applications for Geosciences

    NASA Astrophysics Data System (ADS)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements to keep track of thousands to millions of agents, a lack of methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing, and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics, and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications.
A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  19. Development of a rapid method for the automatic classification of biological agents' fluorescence spectral signatures

    NASA Astrophysics Data System (ADS)

    Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale

    2015-11-01

    Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for the application of UMEL to the discrimination of different BAs' simulants spectra. Through this strategy, it has been possible to discriminate between these BAs' simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.

  20. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images, and the parameters for backlight dimming level and pixel compensation adapt to the image class. Simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, at almost the same display quality. A prototype was developed; no distortions are perceived when playing videos, and the practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
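The classify-then-dim-then-compensate pipeline can be sketched as follows; the luminance threshold and backlight levels are illustrative, not the paper's parameters:

```python
import numpy as np

def global_dimming(frame, dark_level=0.6, bright_level=0.9):
    """Classify the frame by mean luminance, pick a global backlight level,
    and compensate pixels so displayed output ~= backlight * pixel value."""
    backlight = dark_level if frame.mean() < 0.4 else bright_level
    compensated = np.clip(frame / backlight, 0.0, 1.0)
    return backlight, compensated

dark = np.full((4, 4), 0.2)           # a dark frame, luminance in [0, 1]
level, comp = global_dimming(dark)
print(level)       # 0.6: dark content allows a strongly dimmed backlight
print(comp[0, 0])  # pixels boosted to 0.2/0.6 to preserve displayed luminance
```

Clipping in the compensation step is where the display-quality trade-off lives: pixels brighter than the backlight level saturate, which is why bright image classes get a higher backlight.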

  1. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    NASA Astrophysics Data System (ADS)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing however can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  2. A new epileptic seizure classification based exclusively on ictal semiology.

    PubMed

    Lüders, H; Acharya, J; Baumgartner, C; Benbadis, S; Bleasel, A; Burgess, R; Dinner, D S; Ebner, A; Foldvary, N; Geller, E; Hamer, H; Holthausen, H; Kotagal, P; Morris, H; Meencke, H J; Noachtar, S; Rosenow, F; Sakamoto, A; Steinhoff, B J; Tuxhorn, I; Wyllie, E

    1999-03-01

    Historically, seizure semiology was the main feature in the differential diagnosis of epileptic syndromes. With the development of clinical EEG, the definition of electroclinical complexes became an essential tool to define epileptic syndromes, particularly focal epileptic syndromes. Modern advances in diagnostic technology, particularly in neuroimaging and molecular biology, now permit better definitions of epileptic syndromes. At the same time detailed studies showed that there does not necessarily exist a one-to-one relationship between epileptic seizures or electroclinical complexes and epileptic syndromes. These developments call for the reintroduction of an epileptic seizure classification based exclusively on clinical semiology, similar to the seizure classifications which were used by neurologists before the introduction of the modern diagnostic methods. This classification of epileptic seizures should always be complemented by an epileptic syndrome classification based on all the available clinical information (clinical history, neurological exam, ictal semiology, EEG, anatomical and functional neuroimaging, etc.). Such an approach is more consistent with mainstream clinical neurology and would avoid the current confusion between the classification of epileptic seizures (which in the International Seizure Classification is actually a classification of electroclinical complexes) and the classification of epileptic syndromes.

  3. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a hotspot, and making the agent negotiation process quick and efficient is the main research direction in this area. In multi-issue models, multi-attribute utility theory (MAUT) and its derived theories usually give little consideration to the fairness of both negotiators. This work presents a general model of agent negotiation that considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on satisfaction degree, learn online to acquire the opponent's knowledge from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. By building this optimal negotiation model, bilateral negotiation achieves higher efficiency and a fairer deal.

  4. An Agent-Based Cockpit Task Management System

    NASA Technical Reports Server (NTRS)

    Funk, Ken

    1997-01-01

    An agent-based program to facilitate Cockpit Task Management (CTM) in commercial transport aircraft is developed and evaluated. The agent-based program called the AgendaManager (AMgr) is described and evaluated in a part-task simulator study using airline pilots.

  5. Agent Based Fault Tolerance for the Mobile Environment

    NASA Astrophysics Data System (ADS)

    Park, Taesoon

    This paper presents a fault-tolerance scheme based on mobile agents for reliable mobile computing systems. The mobility of the agent makes it suitable for tracing mobile hosts, and the intelligence of the agent makes it efficient at supporting fault-tolerance services. This paper presents two approaches to implementing the mobile-agent-based fault-tolerant service; their performance is evaluated and compared with that of other fault-tolerant schemes.

  6. The practice of agent-based model visualization.

    PubMed

    Dorin, Alan; Geard, Nicholas

    2014-01-01

    We discuss approaches to agent-based model visualization. Agent-based modeling has its own requirements for visualization, some shared with other forms of simulation software, and some unique to this approach. In particular, agent-based models are typified by complexity, dynamism, nonequilibrium and transient behavior, heterogeneity, and a researcher's interest in both individual- and aggregate-level behavior. These are all traits requiring careful consideration in the design, experimentation, and communication of results. In the case of all but final communication for dissemination, researchers may not make their visualizations public. Hence, the knowledge of how to visualize during these earlier stages is unavailable to the research community in a readily accessible form. Here we explore means by which all phases of agent-based modeling can benefit from visualization, and we provide examples from the available literature and online sources to illustrate key stages and techniques.

  7. Energy-efficiency based classification of the manufacturing workstation

    NASA Astrophysics Data System (ADS)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, many products (e.g. home appliances, tyres, light bulbs, houses) now carry an EU energy label when offered for sale or rent, and several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of the energy efficiency of a manufacturing workstation is defined, and on this basis a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy-efficiency classes. The position of the energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study on classifying an actual lathe from the energy-efficiency point of view, based on two different approaches (analytical and numerical), is also included.
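The rule that positions a workstation's class by its energy need at the middle of the operating domain can be sketched as a simple lookup; the class edges below are invented for illustration:

```python
def energy_class(mid_point_energy, class_edges=(1.0, 2.0, 4.0, 8.0)):
    """Assign an energy-efficiency class (A = best) from the energy the
    workstation needs at the middle point of its operating domain.
    The class edges (in kWh) are illustrative, not the paper's values."""
    for label, edge in zip("ABCD", class_edges):
        if mid_point_energy <= edge:
            return label
    return "E"  # anything beyond the last edge falls in the worst class

print(energy_class(1.5))  # B
print(energy_class(9.0))  # E
```

In the paper the width of each class additionally depends on a first-order Taylor coefficient of the energy-vs-regime curve; the lookup above captures only the class-position part of the scheme.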

  8. Automated classification of articular cartilage surfaces based on surface texture.

    PubMed

    Stachowiak, G P; Stachowiak, G W; Podsiadlo, P

    2006-11-01

    In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (SEM) (ESEM) images of cartilage surfaces, that formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.

  9. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification, and the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method that learns stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
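The rotation-invariant LBP codes underlying the method can be computed by taking the minimum over all rotations of the thresholded neighbour ring; the sketch below shows that two rotated versions of the same local structure receive the same code:

```python
def ri_lbp_code(center, neighbors):
    """Rotation-invariant LBP: threshold the 8 neighbours at the centre value,
    then take the minimal integer value over all rotations of the bit ring."""
    bits = [1 if n >= center else 0 for n in neighbors]
    codes = []
    for r in range(8):
        rotated = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rotated)))
    return min(codes)

# Two rotated versions of the same local structure map to one pattern code.
a = ri_lbp_code(5, [9, 9, 1, 1, 1, 1, 1, 1])
b = ri_lbp_code(5, [1, 1, 9, 9, 1, 1, 1, 1])
print(a == b)  # True
```

The paper's contribution sits on top of such codes: instead of keeping only the "uniform" patterns, it ranks all rotation-invariant patterns by averaged occurrence frequency and keeps the stable ones as histogram bins.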

  10. Single-accelerometer-based daily physical activity classification.

    PubMed

    Long, Xi; Yin, Bin; Aarts, Ronald M

    2009-01-01

    In this study, a single tri-axial accelerometer placed on the waist was used to record the acceleration data for human physical activity classification. The data collection involved 24 subjects performing daily real-life activities in a naturalistic environment without researchers' intervention. For the purpose of assessing customers' daily energy expenditure, walking, running, cycling, driving, and sports were chosen as target activities for classification. This study compared a Bayesian classification with that of a Decision Tree based approach. A Bayes classifier has the advantage to be more extensible, requiring little effort in classifier retraining and software update upon further expansion or modification of the target activities. Principal components analysis was applied to remove the correlation among features and to reduce the feature vector dimension. Experiments using leave-one-subject-out and 10-fold cross validation protocols revealed a classification accuracy of approximately 80%, which was comparable with that obtained by a Decision Tree classifier.
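The Bayes classifier favoured by the study can be sketched as a Gaussian naive Bayes on toy accelerometer features; the two features and the class data below are illustrative:

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Per-class feature means/variances and priors for Gaussian naive Bayes."""
    classes = np.unique(y)
    stats = {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-9)
             for c in classes}
    priors = {c: (y == c).mean() for c in classes}
    return stats, priors

def gaussian_nb_predict(stats, priors, x):
    """Pick the class with the highest Gaussian log-likelihood plus log-prior."""
    def log_post(c):
        mu, var = stats[c]
        return np.log(priors[c]) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(stats, key=log_post)

# Toy accelerometer features [mean magnitude, variance]; 0=walking, 1=running.
X = np.array([[1.0, 0.10], [1.1, 0.12], [2.5, 0.90], [2.6, 1.00]])
y = np.array([0, 0, 1, 1])
stats, priors = gaussian_nb_fit(X, y)
pred = gaussian_nb_predict(stats, priors, np.array([1.05, 0.11]))
print(pred)  # 0
```

Extending this classifier to a new activity only requires adding that class's means and variances, which illustrates the abstract's point that a Bayes classifier needs little retraining effort when the target activities change.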

  11. Agent-based models of cellular systems.

    PubMed

    Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Tesei, Luca

    2013-01-01

    Software agents are particularly suitable for engineering models and simulations of cellular systems. In a very natural and intuitive manner, individual software components are delegated to reproduce "in silico" the behavior of individual components of living systems at a given level of resolution. Individuals' actions and interactions among individuals allow complex collective behavior to emerge. In this chapter we first introduce readers to software agents and multi-agent systems, reviewing the evolution of agent-based modeling of biomolecular systems over the last decade. We then describe the main tools, platforms, and methodologies available for programming societies of agents, including toolkits that do not require advanced programming skills.

  12. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of spike snippets, called classification based on frequency domain features (CFDF). The goals for the algorithm are high classification accuracy, a low misclassification rate, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. The algorithm makes use of the frequency domain content of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. Spike classification can then be easily performed using clustering algorithms such as k-means. In conjunction with our previously developed multiscale correlation of wavelet coefficients (MCWC) spike detection algorithm, we show that the combined MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms on artificial and real neural data. The detection and classification of neural action potentials, or neural spikes, is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing spikes, a robust classification algorithm is applied to (1) group similar waveforms into one class so they can be considered as coming from one unit, and (2) remove noise snippets that do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another; a robust, high-performance classification system like the CFDF is therefore necessary. In addition, the proposed algorithm does not require any assumptions on statistical
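
The pipeline of frequency-domain features followed by clustering can be sketched as below. This is a simplification: the cluster count k is fixed here, whereas the paper determines it by visually inspecting a SOM output map, and the binned FFT magnitudes only approximate the CFDF features.

```python
import numpy as np

def cfdf_features(snippets, n_bins=8):
    """Frequency-domain features: magnitude spectrum of each snippet,
    pooled into a few coarse bins (a simplification of CFDF)."""
    spec = np.abs(np.fft.rfft(snippets, axis=1))
    return np.column_stack([b.mean(axis=1)
                            for b in np.array_split(spec, n_bins, axis=1)])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd k-means with a fixed k (the paper picks k via a SOM)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic "units" whose spikes differ mainly in dominant frequency.
rng = np.random.default_rng(1)
t = np.arange(64)
unit_a = np.sin(2 * np.pi * 3 * t / 64) + 0.1 * rng.standard_normal((50, 64))
unit_b = np.sin(2 * np.pi * 9 * t / 64) + 0.1 * rng.standard_normal((50, 64))
labels = kmeans(cfdf_features(np.vstack([unit_a, unit_b])), k=2)
```

Working in the frequency domain makes the features insensitive to small temporal misalignments of the snippets, which is one motivation for this family of methods.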

  13. Research on Classification of Chinese Text Data Based on SVM

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is one of its key technologies. At present, there are many mature algorithms for text classification: KNN, NB, AB, SVM, decision trees, and other classification methods all show good performance. The support vector machine (SVM) is a well-regarded classifier in machine learning research. This paper studies the classification performance of the SVM method on Chinese text data, applies the support vector machine to classify Chinese text, and aims to combine academic research with practical application.
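
A minimal TF-IDF-plus-SVM Chinese text classifier in the spirit of the paper might look like this with scikit-learn. The toy corpus and the character n-gram tokenization are illustrative assumptions; production systems would typically word-segment the text first.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus: two sports ("体育") and two finance ("财经") docs.
train_docs = ["足球 比赛 进球", "篮球 比赛 得分", "股票 市场 上涨", "经济 市场 下跌"]
train_labels = ["体育", "体育", "财经", "财经"]

# Character n-grams avoid the need for a word segmenter in this sketch.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 2)),
                    LinearSVC())
clf.fit(train_docs, train_labels)
pred = clf.predict(["足球 进球 比赛"])[0]
```

The linear kernel is a common choice for high-dimensional, sparse text features, where classes are often close to linearly separable.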

  14. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  15. Is it time for brushless scrubbing with an alcohol-based agent?

    PubMed

    Gruendemann, B J; Bjerke, N B

    2001-12-01

    The practice of surgical scrubbing in perioperative settings is changing rapidly. This article presents information about eliminating the traditional scrub brush technique and using an alcohol formulation for surgical hand scrubs. Also covered are antimicrobial agents, relevant US Food and Drug Administration classifications, skin and fingernail care, and implementation of changes. The article challenges surgical team members to evaluate a new and different approach to surgical hand scrubbing.

  16. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  17. Vehicle Maneuver Detection with Accelerometer-Based Classification.

    PubMed

    Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F

    2016-09-29

    In the mobile computing era, smartphones have become instrumental tools for developing innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this context, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices by carrying out the detection process in an incremental fashion. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of an ad hoc mobile application developed for this purpose.

  18. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping, minimum Mahalanobis distance mapping, SAM, and binary encoding mapping algorithms. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and achieves higher classification accuracy.
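
One of the compared mapping algorithms, the spectral angle mapper (SAM), is simple enough to sketch directly: each pixel is assigned to the endmember whose spectrum points in the most similar direction. The endmember spectra and noise model below are fabricated for illustration, and the paper's morphological endmember extraction step is omitted.

```python
import numpy as np

def sam_classify(pixels, endmembers):
    """Spectral Angle Mapper: assign each pixel to the endmember with the
    smallest angle arccos(<p, e> / (|p| |e|)) between the two spectra."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ e.T, -1.0, 1.0))
    return angles.argmin(axis=1)

# Fabricated 4-band endmember spectra; pixels are scaled copies plus noise.
# SAM ignores the illumination scaling because it compares directions only.
rng = np.random.default_rng(0)
E = np.array([[0.8, 0.1, 0.1, 0.4],
              [0.1, 0.7, 0.5, 0.1]])
truth = rng.integers(0, 2, size=100)
scale = rng.uniform(0.5, 2.0, size=(100, 1))
X = scale * E[truth] + 0.02 * rng.standard_normal((100, 4))
pred = sam_classify(X, E)
```

Because SAM compares spectral directions rather than magnitudes, it tolerates per-pixel brightness variation, which is a common reason it is used as a baseline mapping algorithm.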

  19. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    PubMed Central

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that controls ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
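
The clustering step can be illustrated as follows, with idealized exponential drainage curves standing in for the HYDRUS-1D simulations; the curve model and Ks values are invented for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Idealized drainage curves standing in for HYDRUS-1D output: water content
# vs. time, decaying at a rate that grows with a nominal conductivity Ks.
t = np.linspace(0.0, 10.0, 50)
ks_values = np.array([0.10, 0.12, 0.15, 1.0, 1.1, 1.3, 5.0, 6.0, 7.0])
curves = np.exp(-ks_values[:, None] * t[None, :])

# Cluster soils by the shape of their hydrologic response, not by texture.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(curves)
```

Clustering the full response curves, rather than a single property such as Ks, is what lets the resulting classes reflect how the soils actually behave under a given meteorological forcing.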

  1. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    PubMed

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  2. Evaluating Water Demand Using Agent-Based Modeling

    NASA Astrophysics Data System (ADS)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems' interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) it cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different from behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage
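
A toy version of the bottom-up idea, where aggregate demand emerges from simple probabilistic agent rules, might look like this. The agent groups, base demands, and curtailment probabilities are all made-up numbers, not calibrated to the Middle Rio Grande Basin.

```python
import random

class WaterUser:
    """One agent: fixed base demand, probabilistically halved under drought."""
    def __init__(self, rng, base, curtail_prob):
        self.rng, self.base, self.curtail_prob = rng, base, curtail_prob

    def demand(self, drought):
        if drought and self.rng.random() < self.curtail_prob:
            return 0.5 * self.base  # this agent chooses to curtail its use
        return self.base

def total_demand(agents, drought):
    """Macro-level demand emerges from aggregating micro-level decisions."""
    return sum(a.demand(drought) for a in agents)

# Hypothetical residential, agricultural, and industrial agent groups.
rng = random.Random(0)
agents = ([WaterUser(rng, 1.0, 0.6) for _ in range(100)] +   # residential
          [WaterUser(rng, 20.0, 0.3) for _ in range(10)] +   # agricultural
          [WaterUser(rng, 10.0, 0.1) for _ in range(5)])     # industrial
normal = total_demand(agents, drought=False)
dry = total_demand(agents, drought=True)
```

Even with identical rules within a group, repeated runs produce a distribution of aggregate demand rather than a single mean value, which is precisely the behavior mean-based models cannot resolve.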

  3. 7 CFR 27.36 - Classification and Micronaire determinations based on official standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification and Micronaire determinations based on... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.36 Classification and Micronaire...

  4. 7 CFR 27.36 - Classification and Micronaire determinations based on official standards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification and Micronaire determinations based on... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.36 Classification and Micronaire...

  5. A technology path to tactical agent-based modeling

    NASA Astrophysics Data System (ADS)

    James, Alex; Hanratty, Timothy P.

    2017-05-01

    Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.

  6. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.

  7. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.

  8. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper extends the advantages of location-based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving object indexing algorithm is proposed to process indexing requests efficiently and to adapt to the limitations of the location-based service environment. The prominent feature of this structure is that it views a moving object's behavior as the span of a mobile agent: a unique mapping between the geographical position of each moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping provides a significant clue for the mobile agent-based index when tracking moving objects.

  9. Classification of oxidative stress based on its intensity

    PubMed Central

    Lushchak, Volodymyr I.

    2014-01-01

    In living organisms, the production of reactive oxygen species (ROS) is counterbalanced by their elimination and/or the prevention of their formation, which together typically maintain a steady-state (stationary) ROS level. However, this balance may be disturbed, leading to elevated ROS levels called oxidative stress. To the best of our knowledge, there is no broadly accepted system for classifying oxidative stress by its intensity; the system proposed here may therefore be helpful for the interpretation of experimental data. The oxidative stress field is a hot topic in biology and, to date, many details related to ROS-induced damage to cellular components, ROS-based signaling, and cellular responses and adaptation have been disclosed. However, researchers commonly experience substantial difficulties in correctly interpreting the development of oxidative stress, especially when its intensity must be characterized. Careful selection of specific biomarkers (ROS-modified targets) and some classification system may be helpful here. A classification of oxidative stress based on its intensity is proposed here. According to this classification, there are four zones of function in the relationship between “Dose/concentration of inducer” and the measured “Endpoint”: I – basal oxidative stress (BOS); II – low intensity oxidative stress (LOS); III – intermediate intensity oxidative stress (IOS); and IV – high intensity oxidative stress (HOS). The proposed classification will be helpful for describing experimental data in which oxidative stress is induced and for systematizing them by intensity, but further studies will be needed to discriminate clearly between stress of different intensities. PMID:26417312
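
The four-zone scheme can be expressed as a simple lookup once thresholds are chosen. The threshold values below are placeholders; the abstract makes clear that zone boundaries would have to be calibrated per biomarker and experimental system.

```python
def oxidative_stress_zone(endpoint, thresholds=(1.0, 2.0, 4.0)):
    """Map a measured endpoint (e.g. fold change of an ROS-modified target
    over control) to one of the four proposed intensity zones. The
    thresholds are illustrative placeholders, not values from the paper."""
    basal, low, intermediate = thresholds
    if endpoint <= basal:
        return "BOS"  # I  - basal oxidative stress
    if endpoint <= low:
        return "LOS"  # II - low intensity oxidative stress
    if endpoint <= intermediate:
        return "IOS"  # III - intermediate intensity oxidative stress
    return "HOS"      # IV - high intensity oxidative stress
```

Encoding the zones this way makes the classification reproducible across laboratories once the thresholds for a given biomarker have been agreed upon.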

  10. Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification

    NASA Astrophysics Data System (ADS)

    Wang, X. P.; Hu, Y.; Chen, J.

    2018-04-01

    Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a coupled-graph label propagation method that combines an adjacency graph with a similarity graph. We propose to construct the similarity graph using a similarity probability, which exploits the label similarity among examples. The adjacency graph is built with a common manifold learning method, which effectively improves the classification accuracy on hyperspectral data. The experiments indicate that the coupled graph Laplacian, which unites the adjacency graph and the similarity graph, produces superior classification results to other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
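
A sketch of label propagation on a combined graph, in the spirit of the coupled adjacency/similarity construction: the RBF affinities, the 5-NN adjacency rule, and the two-cluster toy data are assumptions for illustration, not the paper's exact graphs.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=200):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y, where S is the
    symmetrically normalized affinity matrix of graph W."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy data: two clusters of 20 points; one labeled sample per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W_sim = np.exp(-D2)                                       # similarity graph
W_adj = (D2 < np.sort(D2, axis=1)[:, [5]]).astype(float)  # 5-NN adjacency
W = W_sim + np.maximum(W_adj, W_adj.T)                    # coupled graph
np.fill_diagonal(W, 0.0)

Y = np.zeros((40, 2))
Y[0, 0] = Y[20, 1] = 1.0
pred = propagate_labels(W, Y)
```

Summing the two affinity matrices is only one simple way to couple the graphs; the point is that labels diffuse along both kinds of edges, so each graph can compensate for gaps in the other.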

  11. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not solely rely on the single pixel value, but also on shape, texture, and pixel spatial continuity. The object-based classification is a knowledge-based process where an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  12. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    PubMed

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework, comparing two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and the support vector machine (SVM) method is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based methods, with over 90% classification accuracy, sensitivity, specificity, and precision.

  13. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis shows that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  14. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.

  15. New approaches in agent-based modeling of complex financial systems

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique for understanding the collective behavior and microscopic interactions in complex financial systems. Recently, the concept of determining the key parameters of agent-based models from empirical data, instead of setting them artificially, was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origin of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.

  16. Hierarchical trie packet classification algorithm based on expectation-maximization clustering

    PubMed Central

    Bi, Xia-an; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are in urgent need. Among the existing algorithms, research on packet classification algorithms based on the hierarchical trie has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm. PMID:28704476
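For readers unfamiliar with the underlying data structure, here is a minimal one-dimensional binary prefix trie with longest-prefix-match lookup. It illustrates only the trie mechanics that HTEMC builds on, not the HTEMC algorithm itself; the rules and bit strings are made up.

```python
# Minimal binary prefix trie for rule lookup (illustrates the trie
# structure only, not HTEMC; rules and prefixes are hypothetical).

class TrieNode:
    def __init__(self):
        self.children = {}  # bit char -> TrieNode
        self.rule = None    # rule id stored at this prefix, if any

def insert(root, prefix_bits, rule_id):
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, TrieNode())
    node.rule = rule_id

def longest_prefix_match(root, address_bits):
    node, best = root, None
    for bit in address_bits:
        if bit not in node.children:
            break
        node = node.children[bit]
        if node.rule is not None:
            best = node.rule  # remember the deepest matching rule
    return best

root = TrieNode()
insert(root, "10", "R1")     # rule for prefix 10*
insert(root, "1011", "R2")   # more specific rule for prefix 1011*
print(longest_prefix_match(root, "101100"))  # -> R2
print(longest_prefix_match(root, "100111"))  # -> R1
```

Because the best match so far is remembered during the single downward walk, no backtracking is needed here; the multi-dimensional hierarchical trie in the paper needs extra machinery (path compression, clustering) to get the same property.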

  17. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are in urgent need. Among the existing algorithms, research on packet classification algorithms based on the hierarchical trie has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.

  18. Analysis of composition-based metagenomic classification.

    PubMed

    Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro

    2012-01-01

    An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values for n result in sparse frequency vectors that represent genuinely similar metagenomic fragments differently, also leading to low configuration scores. Regarding the similarity measure, in
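The encoding step described above is straightforward to sketch: a sequence becomes a normalized n-mer frequency vector, and two vectors are compared with a similarity measure. Cosine similarity is used here as one of several possible measures; the sequences and n = 2 are arbitrary example choices.

```python
# Sketch of n-mer frequency encoding and comparison for sequence
# classification (toy sequences; n = 2 is an arbitrary example choice).
from collections import Counter
from math import sqrt

def nmer_profile(seq, n):
    """Normalized n-mer frequency vector as a dict."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine_similarity(p, q):
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

a = nmer_profile("ACGTACGTAC", 2)
b = nmer_profile("ACGTACGTGG", 2)
print(round(cosine_similarity(a, b), 3))
```

The trade-off the abstract describes is visible in this representation: small n collapses distinct sequences onto similar vectors, while large n makes the vectors so sparse that near-identical fragments barely overlap.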

  19. Group-Based Active Learning of Classification Models.

    PubMed

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.
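The core mechanism, labeling a whole group selected by a conjunctive pattern with one expert answer, can be sketched in a few lines. The attributes, pattern, and group label below are hypothetical, and a real system would choose the pattern to maximize expected information gain rather than fix it by hand.

```python
# Toy illustration of group-based labeling: a conjunctive pattern selects
# a group of unlabeled examples, and a single expert answer labels them
# all. Attributes, pattern, and the oracle's answer are hypothetical.

def matches(example, pattern):
    """Conjunctive pattern: every (attribute, value) pair must hold."""
    return all(example.get(attr) == val for attr, val in pattern.items())

unlabeled = [
    {"age": "young", "smoker": "no"},
    {"age": "young", "smoker": "no"},
    {"age": "young", "smoker": "yes"},
    {"age": "old",   "smoker": "no"},
]
pattern = {"age": "young", "smoker": "no"}  # user-friendly group query

group = [x for x in unlabeled if matches(x, pattern)]
group_label = "low_risk"  # one expert answer for the whole group
labeled = [(x, group_label) for x in group]
print(len(labeled))  # one query labeled this many examples
```

The saving over instance-based active learning is exactly this ratio: one query annotates every example the pattern covers.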

  20. Gadolinium-Based Contrast Agents for MR Cancer Imaging

    PubMed Central

    Zhou, Zhuxian; Lu, Zheng-Rong

    2013-01-01

    Magnetic resonance imaging (MRI) is a clinical imaging modality effective for anatomical and functional imaging of diseased soft tissues, including solid tumors. MRI contrast agents have been routinely used for detecting tumor at an early stage. Gadolinium based contrast agents are the most commonly used contrast agents in clinical MRI. There have been significant efforts to design and develop novel Gd(III) contrast agents with high relaxivity, low toxicity and specific tumor binding. The relaxivity of the Gd(III) contrast agents can be increased by proper chemical modification. The toxicity of Gd(III) contrast agents can be reduced by increasing the agents’ thermodynamic and kinetic stability, as well as optimizing their pharmacokinetic properties. The increasing knowledge in the field of cancer genomics and biology provides an opportunity for designing tumor-specific contrast agents. Various new Gd(III) chelates have been designed and evaluated in animal models for more effective cancer MRI. This review outlines the design and development, physicochemical properties, and in vivo properties of several classes of Gd(III)-based MR contrast agents for tumor imaging. PMID:23047730

  1. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  2. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected among all those which developed during this time-frame, are analyzed. In particular, the classification allows discrimination between two cyclones (which occurred in October 1996 and in March 1999) which both display a very well

  3. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Data base classification suffers from two well-known difficulties, i.e., the high dimensionality and non-stationary variations within large historic data. This paper presents a hybrid classification model integrating a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various data base applications. The model is mainly based on the idea that the historic data base can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data being classified, using the inductions of these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of our proposed model is demonstrated by experimental comparison with other approaches on different data base classification applications. The average hit rate of our proposed model is the highest among them.

  4. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class, which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
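The fusion step rests on Dempster's rule of combination, which merges two mass functions and renormalizes away conflicting evidence. The sketch below applies the rule to two illustrative mass functions over weld defect classes; the actual mass function in the paper is constructed from the defect-feature information, not assigned by hand.

```python
# Dempster's rule of combination for two mass functions over weld defect
# hypotheses (the masses below are illustrative, not the paper's).

def combine(m1, m2):
    """Combine mass functions on frozenset hypotheses, normalizing conflict."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b  # intersection of the two hypothesis sets
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb  # mass assigned to the empty set
    k = 1.0 - conflict  # normalization factor
    return {h: v / k for h, v in combined.items()}

CRACK, PORE = frozenset({"crack"}), frozenset({"pore"})
BOTH = CRACK | PORE  # ignorance: "crack or pore"

m_feature1 = {CRACK: 0.6, BOTH: 0.4}            # evidence from one feature
m_feature2 = {CRACK: 0.5, PORE: 0.2, BOTH: 0.3}  # evidence from another
fused = combine(m_feature1, m_feature2)
print({tuple(sorted(h)): round(v, 3) for h, v in fused.items()})
```

Note how the combined mass on "crack" exceeds either input's: agreeing evidence reinforces, which is exactly why the rule suits multi-feature defect classification with uncertain individual features.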

  5. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  6. Genome-Based Taxonomic Classification of Bacteroidetes

    PubMed Central

    Hahnke, Richard L.; Meier-Kolthoff, Jan P.; García-López, Marina; Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia N.; Woyke, Tanja; Kyrpides, Nikos C.; Klenk, Hans-Peter; Göker, Markus

    2016-01-01

    The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved. PMID:28066339
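The abstract's point about G+C content is easy to make concrete: the value can be computed directly from a genome sequence rather than quoted imprecisely in a species description. A minimal version, on a made-up toy sequence:

```python
# Computing G+C content directly from a sequence (toy sequence; real use
# would read a genome FASTA file instead).

def gc_content(seq):
    """Percent of bases that are G or C."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

print(gc_content("ATGCGCGCATTA"))  # percent G+C of the toy sequence
```

Run over a full assembly, this one-pass count gives the "directly calculated" values the authors compare against published descriptions.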

  7. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
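The marker-selection step can be sketched simply: after pixelwise classification, keep only pixels whose winning class probability clears a confidence threshold, and carry their class labels into the constrained segmentation. The probabilities and threshold below are invented for the example; M-HSEG's actual selection rule may differ.

```python
# Sketch of marker selection from a pixelwise classification (invented
# probabilities and threshold; not the exact M-HSEG selection rule).

def select_markers(prob_map, threshold=0.9):
    """prob_map: {pixel: {class: prob}} -> {pixel: class} for confident pixels."""
    markers = {}
    for pixel, probs in prob_map.items():
        cls, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            markers[pixel] = cls  # reliably classified pixel becomes a marker
    return markers

prob_map = {
    (0, 0): {"water": 0.97, "soil": 0.03},
    (0, 1): {"water": 0.55, "soil": 0.45},   # ambiguous: not a marker
    (1, 0): {"soil": 0.92, "water": 0.08},
}
print(select_markers(prob_map))
```

Only the confident pixels constrain the segmentation; the ambiguous ones are left for the spectral-spatial stage to resolve.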

  8. The groningen laryngomalacia classification system--based on systematic review and dynamic airway changes.

    PubMed

    van der Heijden, Martijn; Dikkers, Frederik G; Halmos, Gyorgy B

    2015-12-01

    Laryngomalacia is the most common cause of dyspnea and stridor in newborn infants. Laryngomalacia is a dynamic change of the upper airway based on abnormally pliable supraglottic structures, which causes upper airway obstruction. In the past, different classification systems have been introduced, but to date no classification system has been widely accepted and applied. Our goal is to provide a simple and complete classification system based on a systematic literature search and our experience. Retrospective cohort study with literature review. All patients with laryngomalacia under the age of 5 at the time of diagnosis were included. Photo and video documentation was used to confirm the diagnosis and the characteristics of dynamic airway change. Outcomes were compared with available classification systems in the literature. Eighty-five patients were included. In contrast to other classification systems, only three distinct dynamic changes were identified in our series. Two existing classification systems covered 100% of our findings, but there was unnecessary overlap between different types in most of the systems. Based on our findings, we propose a new classification system for laryngomalacia, which is based purely on dynamic airway changes. The Groningen laryngomalacia classification is a new, simplified classification system with three types, based purely on dynamic laryngeal changes, tested in a tertiary referral center: Type 1: inward collapse of the arytenoid cartilages; Type 2: medial displacement of the aryepiglottic folds; and Type 3: posterocaudal displacement of the epiglottis against the posterior pharyngeal wall. © 2015 Wiley Periodicals, Inc.

  9. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.

  10. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  11. Method and apparatus for enhanced detection of toxic agents

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Wu, Jie Jayne; Qi, Hairong

    2013-10-01

    Biosensor-based detection of toxins includes enhancing a fluorescence signal by concentrating a plurality of photosynthetic organisms in a fluid into a concentrated region using biased AC electro-osmosis. A measured photosynthetic activity of the photosynthetic organisms is obtained in the concentrated region, where chemical, biological or radiological agents reduce the nominal photosynthetic activity of the photosynthetic organisms. The presence of the chemical, biological and/or radiological agents, or precursors thereof, is determined in the fluid based on the measured photosynthetic activity of the concentrated plurality of photosynthetic organisms. A lab-on-a-chip system is used for the concentrating step. The presence of agents is determined from feature vectors, obtained by processing a time-dependent signal using amplitude statistics and/or time-frequency analysis, relative to a control signal. A linear discriminant method including support vector machine (SVM) classification is used to identify the agents.
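The amplitude-statistics step can be sketched as turning a time-dependent signal into a small feature vector for a classifier. The statistics chosen here (mean, variance, skewness, kurtosis) are a common amplitude-statistics set, not necessarily the patent's exact list, and the fluorescence traces are invented.

```python
# Illustrative amplitude-statistics feature vector from a time series
# (statistic set and the toy fluorescence traces are assumptions).
from statistics import mean

def amplitude_features(signal):
    """[mean, variance, skewness, kurtosis] of a signal's amplitudes."""
    mu = mean(signal)
    centered = [s - mu for s in signal]
    var = mean(c * c for c in centered)
    std = var ** 0.5
    skew = mean(c ** 3 for c in centered) / std ** 3 if std else 0.0
    kurt = mean(c ** 4 for c in centered) / var ** 2 if var else 0.0
    return [mu, var, skew, kurt]

control = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]   # stable fluorescence
exposed = [1.0, 0.8, 0.6, 0.5, 0.45, 0.4]    # declining fluorescence
print(amplitude_features(control))
print(amplitude_features(exposed))
```

A discriminant classifier such as an SVM would then separate the two feature vectors, flagging the declining trace relative to the control.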

  12. InterLymph hierarchical classification of lymphoid neoplasms for epidemiologic research based on the WHO classification (2008): update and future directions

    PubMed Central

    Morton, Lindsay M.; Linet, Martha S.; Clarke, Christina A.; Kadin, Marshall E.; Vajdic, Claire M.; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C.-H.; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R.; Weisenburger, Dennis D.

    2010-01-01

    After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and “in situ” lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications. PMID:20699439

  13. InterLymph hierarchical classification of lymphoid neoplasms for epidemiologic research based on the WHO classification (2008): update and future directions.

    PubMed

    Turner, Jennifer J; Morton, Lindsay M; Linet, Martha S; Clarke, Christina A; Kadin, Marshall E; Vajdic, Claire M; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C-H; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R; Weisenburger, Dennis D

    2010-11-18

    After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and "in situ" lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications.

  14. Nucleic and Amino Acid Sequences Support Structure-Based Viral Classification.

    PubMed

    Sinclair, Robert M; Ravantti, Janne J; Bamford, Dennis H

    2017-04-15

    Viral capsids ensure viral genome integrity by protecting the enclosed nucleic acids. Interactions between the genome and capsid and between individual capsid proteins (i.e., capsid architecture) are intimate and are expected to be characterized by strong evolutionary conservation. For this reason, a capsid structure-based viral classification has been proposed as a way to bring order to the viral universe. The seeming lack of sufficient sequence similarity to reproduce this classification has made it difficult to reject structural convergence as the basis for the classification. We reinvestigate whether the structure-based classification for viral coat proteins making icosahedral virus capsids is in fact supported by previously undetected sequence similarity. Since codon choices can influence nascent protein folding cotranslationally, we searched for both amino acid and nucleotide sequence similarity. To demonstrate the sensitivity of the approach, we identify a candidate gene for the pandoravirus capsid protein. We show that the structure-based classification is strongly supported by amino acid and also nucleotide sequence similarities, suggesting that the similarities are due to common descent. The correspondence between structure-based and sequence-based analyses of the same proteins shown here allow them to be used in future analyses of the relationship between linear sequence information and macromolecular function, as well as between linear sequence and protein folds. IMPORTANCE Viral capsids protect nucleic acid genomes, which in turn encode capsid proteins. This tight coupling of protein shell and nucleic acids, together with strong functional constraints on capsid protein folding and architecture, leads to the hypothesis that capsid protein-coding nucleotide sequences may retain signatures of ancient viral evolution. We have been able to show that this is indeed the case, using the major capsid proteins of viruses forming icosahedral capsids. 

  15. Nucleic and Amino Acid Sequences Support Structure-Based Viral Classification

    PubMed Central

    Sinclair, Robert M.; Ravantti, Janne J.

    2017-01-01

    ABSTRACT Viral capsids ensure viral genome integrity by protecting the enclosed nucleic acids. Interactions between the genome and capsid and between individual capsid proteins (i.e., capsid architecture) are intimate and are expected to be characterized by strong evolutionary conservation. For this reason, a capsid structure-based viral classification has been proposed as a way to bring order to the viral universe. The seeming lack of sufficient sequence similarity to reproduce this classification has made it difficult to reject structural convergence as the basis for the classification. We reinvestigate whether the structure-based classification for viral coat proteins making icosahedral virus capsids is in fact supported by previously undetected sequence similarity. Since codon choices can influence nascent protein folding cotranslationally, we searched for both amino acid and nucleotide sequence similarity. To demonstrate the sensitivity of the approach, we identify a candidate gene for the pandoravirus capsid protein. We show that the structure-based classification is strongly supported by amino acid and also nucleotide sequence similarities, suggesting that the similarities are due to common descent. The correspondence between structure-based and sequence-based analyses of the same proteins shown here allow them to be used in future analyses of the relationship between linear sequence information and macromolecular function, as well as between linear sequence and protein folds. IMPORTANCE Viral capsids protect nucleic acid genomes, which in turn encode capsid proteins. This tight coupling of protein shell and nucleic acids, together with strong functional constraints on capsid protein folding and architecture, leads to the hypothesis that capsid protein-coding nucleotide sequences may retain signatures of ancient viral evolution. We have been able to show that this is indeed the case, using the major capsid proteins of viruses forming icosahedral capsids

  16. Consentaneous Agent-Based and Stochastic Model of the Financial Markets

    PubMed Central

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for the agent-based treatment of the financial markets considering the necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding, as one of the behavior peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describe the underlying macroscopic dynamics of the agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364
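
    The paper derives stochastic differential equations from a herding model. As a minimal, hypothetical illustration of the general approach (a Kirman-type two-state herding SDE, not the authors' exact three-state equations), the dynamics can be integrated with the Euler-Maruyama scheme:

```python
import math
import random

def simulate_herding(eps1=0.1, eps2=0.1, h=1.0, x0=0.5,
                     dt=1e-3, steps=5000, seed=42):
    """Euler-Maruyama integration of a Kirman-type herding SDE:
        dx = [eps1*(1-x) - eps2*x] dt + sqrt(2*h*x*(1-x)) dW
    where x is the fraction of agents in one of two states."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        drift = eps1 * (1 - x) - eps2 * x
        diffusion = math.sqrt(max(2 * h * x * (1 - x), 0.0))
        x += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0, 1)
        x = min(max(x, 0.0), 1.0)  # keep the population fraction in [0, 1]
        path.append(x)
    return path

path = simulate_herding()
```

    The endogenous noise term vanishes at the boundaries x = 0 and x = 1, so the population fraction stays in its natural range; an exogenous noise stream would be layered on top of this path.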

  17. [Proposals for social class classification based on the Spanish National Classification of Occupations 2011 using neo-Weberian and neo-Marxist approaches].

    PubMed

    Domingo-Salvany, Antònia; Bacigalupe, Amaia; Carrasco, José Miguel; Espelt, Albert; Ferrando, Josep; Borrell, Carme

    2013-01-01

    In Spain, the new National Classification of Occupations (Clasificación Nacional de Ocupaciones [CNO-2011]) is substantially different to the 1994 edition, and requires adaptation of occupational social classes for use in studies of health inequalities. This article presents two proposals to measure social class: the new classification of occupational social class (CSO-SEE12), based on the CNO-2011 and a neo-Weberian perspective, and a social class classification based on a neo-Marxist approach. The CSO-SEE12 is the result of a detailed review of the CNO-2011 codes. In contrast, the neo-Marxist classification is derived from variables related to capital and organizational and skill assets. The proposed CSO-SEE12 consists of seven classes that can be grouped into a smaller number of categories according to study needs. The neo-Marxist classification consists of 12 categories in which home owners are divided into three categories based on capital goods and employed persons are grouped into nine categories composed of organizational and skill assets. These proposals are complemented by a proposed classification of educational level that integrates the various curricula in Spain and provides correspondences with the International Standard Classification of Education. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.

  18. Modelling of robotic work cells using agent-based approach

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In the case of modern manufacturing systems, the requirements, regarding both the scope and the characteristics of technical procedures, change dynamically. As a result, the organization of a production system is unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge so that, in a self-organizing agent environment, they represent the components of a real system. The agent-based system for modelling a robotic work cell should be designed taking into consideration the many limitations connected with the characteristics of this production unit. It is possible to distinguish several groups of structural components that constitute such a system. This confirms the structural complexity of a work cell as a specific production system, so it is necessary to develop agents depicting the various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.
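
    As a sketch of the structural idea, the agent groups enumerated above can be mirrored by a small class hierarchy with a work-cell registry; all names here are illustrative, not taken from the paper:

```python
class Agent:
    """Minimal agent: a named domain of procedures and knowledge."""
    def __init__(self, name):
        self.name = name
        self.knowledge = {}

    def perceive(self, fact, value):
        self.knowledge[fact] = value

# One subclass per structural group of the work cell (illustrative names).
class MachineToolAgent(Agent): pass
class RobotAgent(Agent): pass
class TransportAgent(Agent): pass
class KnowledgeBaseAgent(Agent): pass

class WorkCell:
    """Holds the agent population of one robotic work cell."""
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def by_type(self, cls):
        return [a for a in self.agents if isinstance(a, cls)]

cell = WorkCell()
cell.register(MachineToolAgent("lathe-1"))
cell.register(RobotAgent("robot-1"))
cell.register(TransportAgent("agv-1"))
```

    A holarchy would then be expressed by letting a `WorkCell` itself act as an agent inside a larger production-system container.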

  19. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as speed of classification, precision, and recall rate. Block-based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. Then a frequency transform, such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block classification based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides an improvement in recall rate and precision rate of approximately 2.3% over DCT-based segmentation, at the cost of an increase in block classification time, for both smooth and complex background images.
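
    A minimal sketch of the block-classification step described above, using only the mean/standard-deviation features (the DCT/DWT transform stage is omitted for brevity); the threshold and sample blocks are hypothetical:

```python
import math

def block_features(block):
    """Mean and standard deviation of one 8x8 block of grey values."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return mean, math.sqrt(var)

def classify_block(block, std_threshold=20.0):
    """High local contrast suggests text/graphics; low contrast
    suggests a picture/background block."""
    _, std = block_features(block)
    return "text/graphics" if std > std_threshold else "picture/background"

flat_block = [[128] * 8 for _ in range(8)]     # uniform background
text_block = [[0, 255] * 4 for _ in range(8)]  # sharp black/white strokes
```

    In the transform-domain versions, the same mean/std features would be computed over DCT or DWT coefficients of each block instead of raw pixels.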

  20. Agent-based services for B2B electronic commerce

    NASA Astrophysics Data System (ADS)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST- sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  1. Internet-enabled collaborative agent-based supply chains

    NASA Astrophysics Data System (ADS)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially being proposed as a general architecture for Internet based collaborative agent systems (particularly complex industrial collaborative agent systems), the proposed architecture is very suitable for managing the Internet enabled complex supply chain for a large manufacturing enterprise. The general collaborative agent system architecture with the basic communication and cooperation services, domain independent components, prototypes and mechanisms are described. Benefits of implementing Internet enabled supply chains with the proposed infrastructure are discussed. A case study on Internet enabled supply chain management is presented.

  2. Agent-Based Models in Social Physics

    NASA Astrophysics Data System (ADS)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABM) in social physics, including econophysics. An ABM consists of agents, the system space, and the external environment. An agent is autonomous and decides its behavior by interacting with its neighbors or the external environment according to rules of behavior. Agents are irrational because they have only limited information when they make decisions. They adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergence phenomena. Social-complexity ABMs describe human behavioral characteristics. Among ABMs of econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs cover crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. Finally, we discuss the features, advantages, and disadvantages of NetLogo, Repast, Swarm, and Mason, which are representative platforms for implementing ABMs.
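
    Of the game-theoretic ABMs mentioned, the minority game is compact enough to sketch. This is the standard formulation with random lookup-table strategies scored against the minority outcome; parameters are illustrative, not tied to any specific model in the review:

```python
import random

def minority_game(n_agents=101, n_rounds=200, memory=3,
                  n_strategies=2, seed=1):
    """Each agent holds `n_strategies` random lookup tables over the last
    `memory` binary outcomes and plays its best-scoring one; agents on
    the minority side win the round."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    strategies = [[[rng.choice((0, 1)) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = 0            # last `memory` outcomes packed into an integer
    attendance = []        # number of agents choosing side 1 each round
    for _ in range(n_rounds):
        choices = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            choices.append(strategies[a][best][history])
        ones = sum(choices)
        minority = 1 if ones < n_agents - ones else 0
        for a in range(n_agents):          # reward strategies that would
            for s in range(n_strategies):  # have picked the minority side
                if strategies[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_hist
        attendance.append(ones)
    return attendance

att = minority_game()
```

    With an odd number of agents there is never a tie, and the attendance series fluctuates around half the population, the classic minority-game signature.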

  3. Do Low Molecular Weight Agents Cause More Severe Asthma than High Molecular Weight Agents?

    PubMed

    Meca, Olga; Cruz, María-Jesús; Sánchez-Ortiz, Mónica; González-Barcala, Francisco-Javier; Ojanguren, Iñigo; Munoz, Xavier

    2016-01-01

    The aim of this study was to analyse whether patients with occupational asthma (OA) caused by low molecular weight (LMW) agents differed from patients with OA caused by high molecular weight (HMW) with regard to risk factors, asthma presentation and severity, and response to various diagnostic tests. Seventy-eight patients with OA diagnosed by positive specific inhalation challenge (SIC) were included. Anthropometric characteristics, atopic status, occupation, latency periods, asthma severity according to the Global Initiative for Asthma (GINA) control classification, lung function tests and SIC results were analysed. OA was induced by an HMW agent in 23 patients (29%) and by an LMW agent in 55 (71%). A logistic regression analysis confirmed that patients with OA caused by LMW agents had a significantly higher risk of severity according to the GINA classification after adjusting for potential confounders (OR = 3.579, 95% CI 1.136-11.280; p = 0.029). During the SIC, most patients with OA caused by HMW agents presented an early reaction (82%), while in patients with OA caused by LMW agents the response was mainly late (73%) (p = 0.0001). Similarly, patients with OA caused by LMW agents experienced a greater degree of bronchial hyperresponsiveness, measured as the difference in the methacholine dose-response ratio (DRR) before and after SIC (1.77, range 0-16), compared with patients with OA caused by HMW agents (0.87, range 0-72), (p = 0.024). OA caused by LMW agents may be more severe than that caused by HMW agents. The severity of the condition may be determined by the different mechanisms of action of these agents.

  4. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    NASA Astrophysics Data System (ADS)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality of the data are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets respectively.
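
    The merging of spectral and spatial graph Laplacians can be sketched as a standard Laplacian-regularized labeling problem. The 4-pixel toy graph below is hypothetical and illustrative of the mechanism, not the authors' implementation:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def propagate(W_spec, W_spat, y, labeled, alpha=0.5, reg=1.0):
    """Merge the spectral and spatial Laplacians with weight alpha and
    solve the Laplacian-regularized least-squares labeling
        (diag(labeled) + reg * L) f = labeled * y.
    y holds +1/-1 for labeled pixels and 0 for unlabeled ones."""
    L = alpha * laplacian(W_spec) + (1 - alpha) * laplacian(W_spat)
    M = np.diag(labeled.astype(float)) + reg * L
    f = np.linalg.solve(M, labeled * y)
    return np.sign(f)

# Toy example: 4 pixels forming two clusters, connected in both graphs.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], float)
y = np.array([1.0, 0.0, -1.0, 0.0])   # one labeled pixel per cluster
labeled = np.array([1, 0, 1, 0])
pred = propagate(W, W, y, labeled)
```

    The unlabeled pixels inherit the label of the cluster they are connected to, which is exactly the smoothness assumption the joint graph encodes.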

  5. Simulating cancer growth with multiscale agent-based modeling.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S

    2015-02-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. Copyright © 2014 Elsevier Ltd. All rights reserved.
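
    A minimal agent-based tumor-growth sketch on a 2-D lattice illustrates the general approach surveyed in the review; the division rule and parameters are hypothetical:

```python
import random

def grow_tumor(size=21, steps=15, p_divide=0.3, seed=7):
    """2-D lattice ABM: each occupied cell may divide into a random
    empty von Neumann neighbour with probability p_divide per step."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    grid[size // 2][size // 2] = 1          # single initial tumor cell
    for _ in range(steps):
        occupied = [(r, c) for r in range(size)
                    for c in range(size) if grid[r][c]]
        for r, c in occupied:
            if rng.random() < p_divide:
                nbrs = [(r + dr, c + dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < size and 0 <= c + dc < size
                        and not grid[r + dr][c + dc]]
                if nbrs:
                    nr, nc = rng.choice(nbrs)
                    grid[nr][nc] = 1        # daughter cell occupies site
    return sum(map(sum, grid))

n_cells = grow_tumor()
```

    Multiscale models layer further rules onto agents like these, e.g. nutrient-dependent division probabilities or phenotype switching, which is where the hypotheses discussed in the review come from.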

  6. Simulating Cancer Growth with Multiscale Agent-Based Modeling

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.

    2014-01-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698

  7. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  8. Key-phrase based classification of public health web pages.

    PubMed

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extendible, both in terms of new classes and new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public health related web pages created by double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.
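
    The key-phrase matching idea can be sketched as follows; the classes, phrases, and threshold are hypothetical, not taken from the paper:

```python
def classify_page(text, key_phrases, threshold=2):
    """Score each class by counting its key phrases in the page text;
    return the best class only if its score reaches the threshold."""
    text = text.lower()
    scores = {cls: sum(text.count(p) for p in phrases)
              for cls, phrases in key_phrases.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

key_phrases = {                      # hypothetical class vocabularies
    "nutrition": ["diet", "vitamin", "calorie"],
    "vaccination": ["vaccine", "immunization", "dose"],
}
page = "A balanced diet rich in vitamin C reduces calorie-dense cravings."
label = classify_page(page, key_phrases)
```

    Raising the threshold trades recall for precision, which is the tuning knob the evaluation in the paper turns.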

  9. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    The accuracy of remote sensing (RS) classification based on SVM, which is developed from statistical learning theory, is high even with a small number of training samples, which makes SVM methods a satisfactory choice for RS classification. The traditional RS classification method combines visual interpretation with computer classification. The accuracy of RS classification, however, is improved considerably by the SVM-based method, because it saves much of the labor and time otherwise spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The proposed approach uses an improved compound kernel function and therefore achieves a higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
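
    A compound kernel can be sketched as a convex combination of standard kernels: a non-negative weighted sum of Mercer kernels is again a Mercer kernel, so the combination is valid for SVM training. This generic construction is illustrative, not the paper's specific kernel:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2, c=1.0):
    """Inhomogeneous polynomial kernel."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def compound_kernel(x, y, w=0.7):
    """Convex combination of an RBF and a polynomial kernel; the mix
    weight w balances local (RBF) against global (polynomial) behavior."""
    return w * rbf(x, y) + (1 - w) * poly(x, y)

k = compound_kernel([1.0, 0.0], [1.0, 0.0])
```

    The Gram matrix built from `compound_kernel` can be handed to any SVM solver that accepts precomputed kernels.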

  10. Novel Strength Test Battery to Permit Evidence-Based Paralympic Classification

    PubMed Central

    Beckman, Emma M.; Newcombe, Peter; Vanlandewijck, Yves; Connick, Mark J.; Tweedy, Sean M.

    2014-01-01

    Abstract Ordinal-scale strength assessment methods currently used in Paralympic athletics classification prevent the development of evidence-based classification systems. This study evaluated a battery of 7 ratio-scale isometric tests with the aim of facilitating the development of evidence-based methods of classification. This study aimed to report sex-specific normal performance ranges, evaluate test–retest reliability, and evaluate the relationship between the measures and body mass. Body mass and strength measures were obtained from 118 participants (63 males and 55 females) aged 23.2 ± 3.7 years (mean ± SD). Seventeen participants completed the battery twice to evaluate test–retest reliability. The body mass–strength relationship was evaluated using Pearson correlations and allometric exponents. Conventional patterns of force production were observed. Reliability was acceptable (mean intraclass correlation = 0.85). Eight measures had moderate significant correlations with body size (r = 0.30–0.61). Allometric exponents were higher in males than in females (mean 0.99 vs 0.30). Results indicate that this comprehensive and parsimonious battery is an important methodological advance because it has psychometric properties critical for the development of evidence-based classification. Measures were interrelated with body size, indicating further research is required to determine whether raw measures require normalization in order to be validly applied in classification. PMID:25068950
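
    The allometric analysis mentioned above fits strength = a * mass^b, where the exponent b is the slope of a log-log least-squares regression. A sketch with synthetic data (the exponent 0.67 and the masses are illustrative, not values from the study):

```python
import math

def allometric_exponent(masses, strengths):
    """Least-squares slope of log(strength) on log(mass): the exponent b
    in the allometric model strength = a * mass**b."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(s) for s in strengths]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data generated from an exact power law with b = 0.67.
masses = [50.0, 60.0, 70.0, 80.0, 90.0]
strengths = [m ** 0.67 for m in masses]
b = allometric_exponent(masses, strengths)
```

    Dividing raw strength by mass**b is one way such a normalization for body size could be applied in classification.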

  11. Genome-Based Taxonomic Classification of Bacteroidetes

    DOE PAGES

    Hahnke, Richard L.; Meier-Kolthoff, Jan P.; García-López, Marina; ...

    2016-12-20

    The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.

  12. Genome-Based Taxonomic Classification of Bacteroidetes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahnke, Richard L.; Meier-Kolthoff, Jan P.; García-López, Marina

    The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.

  13. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of sequence data accessible, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools. Now, methods based on counting K-mers by sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other both in data usage and modelling strategies. We have based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as fragments of typical read-length. The differences in classification error between the methods seemed small, but they were stable for both data sets tested. The Preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data is needed to improve classification further. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal, and more robust training data set is crucial.
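
    The multinomial naive Bayes over K-mer counts discussed above can be sketched as follows; the training sequences and the choice K = 3 are illustrative:

```python
import math
from collections import Counter

def kmers(seq, k=3):
    """All overlapping K-mers of a sequence (sliding window)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def train(labeled_seqs, k=3):
    """Per-class K-mer counts."""
    counts = {}
    for label, seq in labeled_seqs:
        counts.setdefault(label, Counter()).update(kmers(seq, k))
    return counts

def classify(seq, counts, k=3):
    """Multinomial naive Bayes with add-one smoothing over K-mer counts."""
    vocab = set().union(*counts.values())
    best, best_lp = None, -math.inf
    for label, cnt in counts.items():
        total = sum(cnt.values()) + len(vocab)
        lp = sum(math.log((cnt[w] + 1) / total) for w in kmers(seq, k))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train_data = [("A", "ACGTACGTACGT"), ("B", "GGGCCCGGGCCC")]
model = train(train_data)
```

    The same machinery applies to both full-length sequences and short fragments; only the number of K-mer observations per query changes, which is one reason fragment classification is harder.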

  14. Mammographic mass classification based on possibility theory

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2017-03-01

    Shape and margin features are very important for differentiating between benign and malignant masses in mammographic images. In fact, benign masses are usually round or oval and have smooth contours, whereas malignant tumors generally have irregular shapes and appear lobulated or spiculated at the margins. This knowledge suffers from imprecision and ambiguity. Therefore, this paper deals with the problem of mass classification using shape and margin features while taking into account the uncertainty linked to the degree of truth of the available information and the imprecision related to its content. Thus, in this work, we propose a novel mass classification approach which provides a possibility-based representation of the extracted shape features and builds a possibilistic knowledge base in order to evaluate the possibility degree of malignancy and benignity for each mass. For experimentation, the MIAS database was used, and the classification results show the strong performance of our approach despite using simple features.
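
    A possibilistic classification step can be sketched with triangular possibility distributions combined conjunctively (by minimum); the features, cores, and supports below are hypothetical, not the paper's learned knowledge base:

```python
def possibility(value, core, support):
    """Triangular possibility distribution: 1 at the core value,
    falling linearly to 0 at the edges of the support interval."""
    lo, hi = support
    if not lo <= value <= hi:
        return 0.0
    if value <= core:
        return (value - lo) / (core - lo) if core > lo else 1.0
    return (hi - value) / (hi - core) if hi > core else 1.0

def classify_mass(features, knowledge):
    """Degree of each class = min over features of the possibility that
    the observed value fits that class (conjunctive combination)."""
    degrees = {cls: min(possibility(features[f], *spec[f]) for f in spec)
               for cls, spec in knowledge.items()}
    return max(degrees, key=degrees.get), degrees

# Hypothetical shape/margin features scaled to [0, 1]:
knowledge = {
    "benign":    {"circularity": (0.9, (0.5, 1.0)), "spiculation": (0.1, (0.0, 0.5))},
    "malignant": {"circularity": (0.3, (0.0, 0.7)), "spiculation": (0.8, (0.4, 1.0))},
}
label, degrees = classify_mass({"circularity": 0.85, "spiculation": 0.15}, knowledge)
```

    Unlike a probabilistic posterior, the two possibility degrees need not sum to one; a mass can be fully possible for both classes, which is how the approach represents ambiguity.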

  15. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in logic or set theory. The experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
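
    For reference, the classical (non-quantum) rough-set approximations that the paper generalizes can be computed directly from an indiscernibility partition; the universe and target concept below are hypothetical:

```python
def approximations(partition, target):
    """Classical rough-set lower/upper approximations of `target` with
    respect to a partition of the universe into equivalence classes.
    Lower: classes entirely inside the target (certainly members).
    Upper: classes intersecting the target (possibly members)."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

partition = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of the universe
X = {1, 2, 3}                          # concept to approximate
lower, upper = approximations(partition, X)
```

    The boundary region `upper - lower` is where classification is undecided; the quantum model replaces the crisp equivalence relation with a quantum similarity relation to shrink this region.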

  16. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the promptly growing popularity of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.

  17. An AERONET-Based Aerosol Classification Using the Mahalanobis Distance

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-01-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on point sources (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.
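
    Classification by smallest Mahalanobis distance to reference clusters can be sketched as follows; the 2-D clusters here are hypothetical stand-ins for the paper's five-dimensional AERONET reference clusters:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of x from a cluster with the given mean
    and inverse covariance matrix."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify_aerosol(x, clusters):
    """Assign x to the reference cluster with the smallest Mahalanobis
    distance; clusters maps type -> (mean, inverse covariance)."""
    return min(clusters, key=lambda t: mahalanobis(x, *clusters[t]))

# Hypothetical 2-D reference clusters (the real ones are 5-dimensional):
clusters = {
    "Dust":     (np.array([0.0, 0.0]),
                 np.linalg.inv(np.array([[1.0, 0.0], [0.0, 1.0]]))),
    "Maritime": (np.array([4.0, 4.0]),
                 np.linalg.inv(np.array([[2.0, 0.0], [0.0, 2.0]]))),
}
label = classify_aerosol(np.array([0.5, 0.2]), clusters)
```

    Using the inverse covariance rather than plain Euclidean distance lets elongated clusters claim observations along their own spread, and a distance cutoff (the paper uses 2 when building reference clusters) rejects outliers.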

  18. Pathological Bases for a Robust Application of Cancer Molecular Classification

    PubMed Central

    Diaz-Cano, Salvador J.

    2015-01-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed on through mechanisms of intercellular transference of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next-generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411

  19. Brain tumor segmentation based on local independent projection-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin

    2014-10-01

    Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem, using the local independent projection-based classification (LIPC) method to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC; locality also makes local anchor embedding more applicable than other coding methods for solving the linear projection weights. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to those of other state-of-the-art methods.

  20. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification-based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  1. A proposed classification scheme for Ada-based software products

    NASA Technical Reports Server (NTRS)

    Cernosek, Gary J.

    1986-01-01

    As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing the potential for varying levels of quality to result in Ada programs, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A 5-level classification scheme is proposed that attempts to decompose this potentially broad spectrum of quality which Ada programs may possess. The number of classes and their corresponding names are not as important as the mere fact that there needs to be some set of criteria from which to evaluate programs existing in Ada. An exact criteria for each class is not presented, nor are any detailed suggestions of how to effectively implement this quality assessment. The idea of Ada-based software classification is introduced and a set of requirements from which to base further research and development is suggested.

  2. A review of classification algorithms for EEG-based brain-computer interfaces.

    PubMed

    Lotte, F; Congedo, M; Lécuyer, A; Lamarche, F; Arnaldi, B

    2007-06-01

    In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.

  3. Network-based high level data classification.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-06-01

    Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning and it has facility in identifying patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the class configuration's complexity increases, such as the mixture among different classes, a larger portion of the high level term is required to get correct classification. This feature confirms that the high level classification has a special importance in complex situations of classification. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it supplies an improvement in the overall pattern recognition rate.
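
    A minimal sketch of combining the two terms, assuming a simple convex mixture of the low-level and high-level membership scores (the scores and the weighting below are invented for illustration):

```python
def hybrid_membership(low, high, lam):
    """Convex mixture of low-level (physical-feature) and high-level
    (pattern-conformance) class memberships; a larger lam gives the high
    level term more weight, as needed for heavily mixed class configurations."""
    return {c: (1 - lam) * low[c] + lam * high[c] for c in low}

# Invented membership scores for a single test instance.
low  = {"A": 0.7, "B": 0.3}   # e.g., distance-based similarity to each class
high = {"A": 0.2, "B": 0.8}   # compliance with each class's pattern formation
scores = hybrid_membership(low, high, lam=0.6)
print(max(scores, key=scores.get))  # -> B
```

    With lam=0.6 the pattern-conformance term overrides the purely physical evidence, which is the behavior the paper reports for complex class mixtures.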

  4. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
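
    The minimum-approximation-error rule can be illustrated with a heavily simplified sketch that uses one prototype atom per class instead of sparse coding over the full training dictionary (all vectors are invented):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def residual(x, atom):
    """Error left after approximating x as the best scalar multiple of one atom."""
    c = dot(x, atom) / dot(atom, atom)   # least-squares coefficient
    return sum((xi - c * ai) ** 2 for xi, ai in zip(x, atom)) ** 0.5

# Hypothetical one-atom-per-class "dictionary" standing in for the full
# training set; real SR coding solves a joint sparse problem over all atoms.
prototypes = {"water": [0.9, 0.1, 0.1], "vegetation": [0.1, 0.8, 0.4]}

def classify(pixel):
    """Pick the class whose atoms approximate the test pixel best."""
    return min(prototypes, key=lambda c: residual(pixel, prototypes[c]))

print(classify([0.85, 0.15, 0.12]))  # -> water
```

    In the paper the projection step (LPP) runs before this coding step, so both the atoms and the test pixels live in the reduced space.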

  5. PEGylated Peptide-Based Imaging Agents for Targeted Molecular Imaging.

    PubMed

    Wu, Huizi; Huang, Jiaguo

    2016-01-01

    Molecular imaging is able to directly visualize targets and characterize cellular pathways with a high signal/background ratio, which requires a sufficient amount of agent to be taken up and accumulate in the imaging area. The design and development of peptide-based agents for imaging and diagnosis is a hot and promising research topic that is booming in the field of molecular imaging. To date, selected peptides have been increasingly developed as agents by coupling them with different imaging moieties (such as radiometals and fluorophores) with the help of sophisticated chemical techniques. Although a few successes have been achieved, most of these agents have failed, mainly because of their fast renal clearance and therefore low tumor uptake, which limits effective tumor retention. Besides, several peptide agents based on nanoparticles have also been developed for medical diagnostics. However, a great majority of those agents have shown long circulation times and accumulation over time in the reticuloendothelial system (RES; including spleen, liver, lymph nodes and bone marrow) after systemic administration; such long-term accumulation raises the likelihood of toxicity and potentially induces health hazards. Recently reported design criteria aim not only to enhance binding affinity in the tumor region with long retention, but also to improve clearance from the body in a reasonable amount of time. PEGylation has been considered one of the most successful modification methods for prolonging tumor retention and improving the pharmacokinetic and pharmacodynamic properties of peptide-based imaging agents. This review summarizes PEGylated peptide imaging agents based on different imaging moieties, including radioisotopes, fluorophores, and nanoparticles. The unique concepts and applications of various PEGylated peptide-based imaging agents are introduced for each of several imaging moieties. Effects of PEGylation on

  6. The selection of adhesive systems for resin-based luting agents.

    PubMed

    Carville, Rebecca; Quinn, Frank

    2008-01-01

    The use of resin-based luting agents is ever expanding with the development of adhesive dentistry. A multitude of different adhesive systems are used with resin-based luting agents, and new products are introduced to the market frequently. Traditional adhesives generally required a multiple step bonding procedure prior to cementing with active resin-based luting materials; however, combined agents offer a simple application procedure. Self-etching 'all-in-one' systems claim that there is no need for the use of a separate adhesive process. The following review addresses the advantages and disadvantages of the available adhesive systems used with resin-based luting agents.

  7. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors to characterize the distribution of natural vegetation. Topographic contour is particularly influential on the living conditions of plants such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results found that the object-based classification is more effective to produce a vegetation map than the pixel-based classification.

  8. Macromolecular and Dendrimer Based Magnetic Resonance Contrast Agents

    PubMed Central

    Bumb, Ambika; Brechbiel, Martin W.; Choyke, Peter

    2010-01-01

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20–25 years, a number of gadolinium based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution and targeting of dendrimer-based MR contrast agents are also discussed. PMID:20590365

  9. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency, and one potential drawback of these methods is that they treat features separately. We therefore first design a similarity measure between context information that takes word cooccurrences and phrase chunks around the features into account. We then introduce this context similarity into the importance measure of the features, substituting it for document and term frequency, and thereby obtain new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
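
    The core idea of ranking features by context similarity instead of raw frequency can be sketched as follows; the co-occurrence vectors and seed term are invented, and the paper's actual measure also incorporates phrase chunks around each feature:

```python
import math

def cosine(u, v):
    """Cosine similarity between two co-occurrence context vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# Hypothetical co-occurrence context vectors for candidate features, and a
# seed context known to signal protein-protein interaction mentions.
contexts = {
    "binds":     [4, 1, 0, 3],
    "interacts": [5, 0, 0, 2],
    "however":   [1, 4, 5, 0],
}
seed = [5, 1, 0, 2]

# Rank candidate features by context similarity; low-ranked features such
# as the function word "however" would be dropped during selection.
ranked = sorted(contexts, key=lambda f: cosine(contexts[f], seed), reverse=True)
print(ranked)
```
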

  10. Polarimetric SAR image classification based on discriminative dictionary learning model

    NASA Astrophysics Data System (ADS)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving it. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and very suitable for PolSAR classification.

  11. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  12. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  13. Cheese Classification, Characterization, and Categorization: A Global Perspective.

    PubMed

    Almena-Aliste, Montserrat; Mietton, Bernard

    2014-02-01

    Cheese is one of the most fascinating, complex, and diverse foods enjoyed today. Three elements constitute the cheese ecosystem: ripening agents, consisting of enzymes and microorganisms; the composition of the fresh cheese; and the environmental conditions during aging. These factors determine and define not only the sensory quality of the final cheese product but also the vast diversity of cheeses produced worldwide. How we define and categorize cheese is a complicated matter. There are various approaches to cheese classification, and a global approach for classification and characterization is needed. We review current cheese classification schemes and the limitations inherent in each of the schemes described. While some classification schemes are based on microbiological criteria, others rely on descriptions of the technologies used for cheese production. The goal of this review is to present an overview of comprehensive and practical integrative classification models in order to better describe cheese diversity and the fundamental differences within cheeses, as well as to connect fundamental technological, microbiological, chemical, and sensory characteristics to contribute to an overall characterization of the main families of cheese, including the expanding world of American artisanal cheeses.

  14. Agent-based real-time signal coordination in congested networks.

    DOT National Transportation Integrated Search

    2014-01-01

    This study is the continuation of a previous NEXTRANS study on agent-based reinforcement learning methods for signal coordination in congested networks. In the previous study, the formulation of a real-time agent-based traffic signal control in o...

  15. [Determinant-based classification of acute pancreatitis severity. International multidisciplinary classification of acute pancreatitis severity: the 2013 German edition].

    PubMed

    Layer, P; Dellinger, E P; Forsmark, C E; Lévy, P; Maraví-Poma, E; Shimosegawa, T; Siriwardena, A K; Uomo, G; Whitcomb, D C; Windsor, J A; Petrov, M S

    2013-06-01

    The aim of this study was to develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric descriptions of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensive medicine specialists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organised to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than descriptions of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another, such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure has a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in 4 categories of severity - mild, moderate, severe, and critical.
This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America
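
    The two-determinant logic maps directly onto the four categories; the following sketch is one straightforward reading of the principles stated above:

```python
def severity(necrosis, organ_failure):
    """Combine the local and systemic determinants into one of four
    categories (a direct reading of the classification's stated principles).
    necrosis:      None, "sterile", or "infected"
    organ_failure: None, "transient", or "persistent"
    """
    if necrosis == "infected" and organ_failure == "persistent":
        return "critical"   # both worst-case determinants present
    if necrosis == "infected" or organ_failure == "persistent":
        return "severe"
    if necrosis == "sterile" or organ_failure == "transient":
        return "moderate"
    return "mild"           # no necrosis, no organ failure

print(severity(None, None))                # -> mild
print(severity("sterile", None))           # -> moderate
print(severity("infected", "persistent"))  # -> critical
```
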

  16. Lidar-based individual tree species classification using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, tree species are still specified manually by skilled workers. Previous work on automatic tree species classification mainly focused on aerial or satellite images, and few works have been reported on classification techniques using ground-based sensor data. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Among these candidates, we use terrestrial lidar because it can obtain high-resolution point clouds even in dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change its appearance under seasonal variation and age-related deterioration. In this paper, we propose a new method for automatic individual tree species classification from terrestrial lidar data using a convolutional neural network (CNN). The key component is the creation of a depth image that describes well the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.

  17. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and the architecture of the ASISG is advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent system comprises User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, Task Execution Agents and Monitor Agents. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of Data Access Agents, Resource Agents and Geo-Agents, encapsulates the data of spatial information systems and exhibits a conceptual interface to the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agents, Monitor Agents and Data Analysis Agents, uses a hybrid method to manage all resources registered in the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens users' burden because they need not search for information site by site manually. The application layer, which is composed of User Agents, Geo-Agents and Task Execution Agents, can present a corresponding interface to a domain user. 
The functions that ASISG should provide are: 1) it integrates different spatial information systems on the semantic grid

  18. [Gadolinium-based contrast agents for magnetic resonance imaging].

    PubMed

    Carrasco Muñoz, S; Calles Blanco, C; Marcin, Javier; Fernández Álvarez, C; Lafuente Martínez, J

    2014-06-01

    Gadolinium-based contrast agents are increasingly being used in magnetic resonance imaging. These agents can improve the contrast in images and provide information about function and metabolism, increasing both sensitivity and specificity. We describe the gadolinium-based contrast agents that have been approved for clinical use, detailing their main characteristics based on their chemical structure, stability, and safety. In general terms, these compounds are safe. Nevertheless, adverse reactions, the possibility of nephrotoxicity from these compounds, and the possibility of developing nephrogenic systemic fibrosis will be covered in this article. Lastly, the article will discuss the current guidelines, recommendations, and contraindications for their clinical use, including the management of pregnant and breast-feeding patients. Copyright © 2014 SERAM. Published by Elsevier Espana. All rights reserved.

  19. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based "mouse pup syllable classification calculator".

    PubMed

    Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J

    2012-01-01

    Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
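
    A toy decision rule with the same shape as the four clusters (two types with abrupt frequency transitions, two continuous types split by frequency band); the band boundary and feature names are invented for illustration, not values from the study:

```python
def classify_syllable(has_freq_jump, jump_up, peak_freq_khz):
    """Rule-based syllable typing mirroring the four clusters: abrupt-
    transition syllables are split by jump direction, continuous-structure
    syllables by frequency band. The 75 kHz boundary is an invented value."""
    if has_freq_jump:
        return "step-up" if jump_up else "step-down"
    return "high-frequency" if peak_freq_khz >= 75 else "low-frequency"

print(classify_syllable(False, False, 68.0))  # -> low-frequency
print(classify_syllable(True, True, 80.0))    # -> step-up
```

    An Excel implementation of such a rule table is what makes the "syllable classification calculator" practical for rapid, bias-free categorization.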

  20. A machine-learned computational functional genomics-based approach to drug classification.

    PubMed

    Lötsch, Jörn; Ultsch, Alfred

    2016-12-01

    The public accessibility of "big data" on the molecular targets of drugs and the biological functions of genes allows novel data-science-based approaches to pharmacology that link drugs directly with their effects on pathophysiologic processes. This provides a phenotypic path to drug discovery and repurposing. This paper compares the performance of a functional genomics-based criterion with the traditional drug target-based classification. Knowledge discovery in the DrugBank and Gene Ontology databases allowed the construction of a "drug target versus biological process" matrix as a combination of "drug versus genes" and "genes versus biological processes" matrices. As a canonical example, such matrices were constructed for classical analgesic drugs. These matrices were projected onto a toroid grid of 50 × 82 artificial neurons using a self-organizing map (SOM). The distance and cluster structure of the high-dimensional feature space of the matrices was visualized on top of this SOM using a U-matrix. The cluster structure emerging on the U-matrix provided a correct classification of the analgesics into the two main classes of opioid and non-opioid analgesics. The classification was flawless with both the functional genomics criterion and the traditional target-based criterion. The functional genomics approach inherently included the drugs' modulatory effects on biological processes. The main pharmacological actions known from pharmacological science were captured, e.g., actions on lipid signaling for the non-opioid analgesics, which comprised many NSAIDs, and actions on neuronal signal transmission for the opioid analgesics. Using machine-learning techniques for computational drug classification in a comparative assessment, a functional genomics-based criterion was found to be similarly suitable for drug classification as the traditional target-based criterion. This supports the utility of functional genomics-based approaches to computational system pharmacology for drug
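
    The combination step is a matrix product: "drug versus biological process" = "drug versus genes" × "genes versus biological processes". A toy sketch with invented one-target-per-drug entries (real DrugBank/Gene Ontology rows are far larger and weighted):

```python
# Hypothetical toy incidence matrices: D is drugs x genes (1 = known target),
# G is genes x biological processes (1 = gene annotated to the process).
drugs = ["ibuprofen", "morphine"]
genes = ["PTGS2", "OPRM1"]
processes = ["lipid signaling", "neuronal transmission"]

D = [[1, 0],   # ibuprofen targets PTGS2
     [0, 1]]   # morphine targets OPRM1
G = [[1, 0],   # PTGS2 -> lipid signaling
     [0, 1]]   # OPRM1 -> neuronal transmission

# "drug versus biological process" matrix DP = D x G
DP = [[sum(D[i][k] * G[k][j] for k in range(len(genes)))
       for j in range(len(processes))] for i in range(len(drugs))]

for drug, row in zip(drugs, DP):
    hits = [p for p, v in zip(processes, row) if v]
    print(drug, "->", hits)
```

    Rows of DP (one feature vector per drug) are what gets projected onto the SOM for clustering.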

  1. The development of a classification schema for arts-based approaches to knowledge translation.

    PubMed

    Archibald, Mandy M; Caine, Vera; Scott, Shannon D

    2014-10-01

    Arts-based approaches to knowledge translation are emerging as powerful interprofessional strategies with potential to facilitate evidence uptake, communication, knowledge, attitude, and behavior change across healthcare provider and consumer groups. These strategies are in the early stages of development. To date, no classification system for arts-based knowledge translation exists, which limits development and understanding of effectiveness in evidence syntheses. We developed a classification schema of arts-based knowledge translation strategies based on two mechanisms by which these approaches function: (a) the degree of precision in key message delivery, and (b) the degree of end-user participation. We demonstrate how this classification is necessary to explore how context, time, and location shape arts-based knowledge translation strategies. Classifying arts-based knowledge translation strategies according to their core attributes extends understanding of the appropriateness of these approaches for various healthcare settings and provider groups. The classification schema developed may enhance understanding of how, where, and for whom arts-based knowledge translation approaches are effective, and enable theorizing of essential knowledge translation constructs, such as the influence of context, time, and location on utilization strategies. The classification schema developed may encourage systematic inquiry into the effectiveness of these approaches in diverse interprofessional contexts. © 2014 Sigma Theta Tau International.

  2. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, an important direction in remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission, which keeps the final classification accuracy low. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compared three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) and selected the best fused image for the classification experiment. In the classification process, we chose four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine, and ISODATA) for comparative experiments. We used overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria and analysed the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
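
    The overall classification precision and Kappa coefficient used as evaluation criteria above can be computed from a confusion matrix. The sketch below is the standard textbook calculation, not code from the study, and the example matrix values are invented.

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(k)) / n          # observed agreement
    row_tot = [sum(r) for r in confusion]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    pe = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)  # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical two-class confusion matrix
acc, kappa = accuracy_and_kappa([[50, 2], [5, 43]])
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside raw precision.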

  3. Malay sentiment analysis based on combined classification approaches and Senti-lexicon algorithm.

    PubMed

    Al-Saffar, Ahmed; Awang, Suryanti; Tao, Hai; Omar, Nazlia; Al-Saiagh, Wafaa; Al-Bared, Mohammed

    2018-01-01

    Sentiment analysis techniques are increasingly exploited to categorize opinion text into one or more predefined sentiment classes for the creation and automated maintenance of review-aggregation websites. In this paper, a Malay sentiment analysis classification model is proposed to improve classification performance based on semantic orientation and machine learning approaches. First, a total of 2,478 Malay sentiment-lexicon phrases and words are assigned synonyms and stored with the help of more than one Malay native speaker, and each entry's polarity is manually assigned a score. In addition, supervised machine learning approaches and the lexicon knowledge method are combined for Malay sentiment classification, evaluating thirteen features. Finally, three individual classifiers and a combined classifier are used to evaluate classification accuracy. In the experiments, a wide range of comparative evaluations is conducted on a Malay Reviews Corpus (MRC), demonstrating that feature extraction improves the performance of Malay sentiment analysis based on the combined classification. However, the results depend on three factors: the features, the number of features, and the classification approach.
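
    As a rough illustration of the lexicon half of such a combined approach, the sketch below scores tokens against a tiny polarity dictionary with naive negation flipping. The lexicon here is a made-up English toy standing in for the paper's 2,478-entry Malay lexicon, and the negation rule is an assumption for illustration only.

```python
# Toy polarity lexicon (illustrative; the paper uses 2,478 Malay entries)
LEXICON = {"good": 2, "excellent": 3, "bad": -2, "terrible": -3}

def lexicon_score(tokens):
    """Sum word polarities; a preceding negator flips the next word's sign."""
    score, flip = 0, 1
    for tok in tokens:
        if tok == "not":
            flip = -1
            continue
        score += flip * LEXICON.get(tok, 0)
        flip = 1
    return score

def classify(sentence):
    s = lexicon_score(sentence.lower().split())
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"
```

    In the combined model, a score like this would become one feature among others fed to the supervised classifiers.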

  4. Malay sentiment analysis based on combined classification approaches and Senti-lexicon algorithm

    PubMed Central

    Awang, Suryanti; Tao, Hai; Omar, Nazlia; Al-Saiagh, Wafaa; Al-bared, Mohammed

    2018-01-01

    Sentiment analysis techniques are increasingly exploited to categorize opinion text into one or more predefined sentiment classes for the creation and automated maintenance of review-aggregation websites. In this paper, a Malay sentiment analysis classification model is proposed to improve classification performance based on semantic orientation and machine learning approaches. First, a total of 2,478 Malay sentiment-lexicon phrases and words are assigned synonyms and stored with the help of more than one Malay native speaker, and each entry's polarity is manually assigned a score. In addition, supervised machine learning approaches and the lexicon knowledge method are combined for Malay sentiment classification, evaluating thirteen features. Finally, three individual classifiers and a combined classifier are used to evaluate classification accuracy. In the experiments, a wide range of comparative evaluations is conducted on a Malay Reviews Corpus (MRC), demonstrating that feature extraction improves the performance of Malay sentiment analysis based on the combined classification. However, the results depend on three factors: the features, the number of features, and the classification approach. PMID:29684036

  5. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
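
    For reference, the plain (non-spatial) mutual information between two quantized bands can be estimated from joint and marginal histograms as below. The paper's spatial variant additionally accounts for dependency between neighbouring pixels, which this sketch does not; the example values are invented.

```python
import math
from collections import Counter

def mutual_information(a, b, base=2.0):
    """Histogram-based MI between two quantized bands (paired pixel values)."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # MI = sum over observed pairs of p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)), base)
               for (x, y), c in pab.items())
```

    A greedy selector could then prefer bands with high MI against the reference classes but low MI against already-selected bands, to limit redundancy.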

  6. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  7. Classification of arterial and venous cerebral vasculature based on wavelet postprocessing of CT perfusion data.

    PubMed

    Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf

    2016-02-01

    The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate voxelwise the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A, T, and W [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.

  8. Biocompatible blood pool MRI contrast agents based on hyaluronan

    PubMed Central

    Zhu, Wenlian; Artemov, Dmitri

    2010-01-01

    Biocompatible gadolinium blood pool contrast agents based on a biopolymer, hyaluronan, were investigated for magnetic resonance angiography application. Hyaluronan, a non-sulfated linear glucosaminoglycan composed of 2000–25,000 repeating disaccharide subunits of D-glucuronic acid and N-acetylglucosamine with molecular weight up to 20 MDa, is a major component of the extracellular matrix. Two gadolinium contrast agents based on 16 and 74 kDa hyaluronan were synthesized, both with R1 relaxivity around 5 mM−1 s−1 per gadolinium at 9.4 T at 25°C. These two hyaluronan based agents show significant enhancement of the vasculature for an extended period of time. Initial excretion was primarily through the renal system. Later uptake was observed in the stomach and lower gastrointestinal tract. Macromolecular hyaluronan-based gadolinium agents have a high clinical translation potential as hyaluronan is already approved by FDA for a variety of medical applications. PMID:21504061

  9. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problem is how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. On the other hand, the VABOW model combines shape, color and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
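
    The saliency-weighted counting idea can be sketched as a histogram in which each visual word votes with its saliency weight instead of a unit count. The function below is an illustrative assumption, not the VABOW implementation; it presumes descriptors have already been quantized to word ids and that a per-word saliency value is available from the saliency map.

```python
def weighted_bow(word_ids, saliency, vocab_size):
    """Normalized bag-of-words histogram where each visual word votes
    with its saliency weight rather than a unit count."""
    hist = [0.0] * vocab_size
    for w, s in zip(word_ids, saliency):
        hist[w] += s
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

    With all saliency weights equal to 1 this reduces to the ordinary bag-of-words histogram, which is the design point of the weighting.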

  10. Classification of right-hand grasp movement based on EMOTIV Epoc+

    NASA Astrophysics Data System (ADS)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCT elements for right-hand grasp movement have been obtained, together with the average values of their classification accuracy. The aim of this study is to find the combination giving the best classification accuracy for right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classes: grasping hand, relax, and opening hand. The classification exploits the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from one another. The elements combined are Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average classification accuracy is ±83% for training and ±57% for testing. To better assess the signal quality recorded by EMOTIV Epoc+, the classification accuracy for a left- or right-hand grasping movement EEG signal (provided by PhysioNet) is also given, i.e., ±85% for training and ±70% for testing. A comparison of accuracy values across combinations, experimental conditions, and the external EEG data is provided for the analysis of classification accuracy.

  11. Determinant-based classification of acute pancreatitis severity: an international multidisciplinary consultation.

    PubMed

    Dellinger, E Patchen; Forsmark, Christopher E; Layer, Peter; Lévy, Philippe; Maraví-Poma, Enrique; Petrov, Maxim S; Shimosegawa, Tooru; Siriwardena, Ajith K; Uomo, Generoso; Whitcomb, David C; Windsor, John A

    2012-12-01

    To develop a new international classification of acute pancreatitis severity on the basis of a sound conceptual framework, comprehensive review of published evidence, and worldwide consultation. The Atlanta definitions of acute pancreatitis severity are ingrained in the lexicon of pancreatologists but suboptimal because these definitions are based on empiric description of occurrences that are merely associated with severity. A personal invitation to contribute to the development of a new international classification of acute pancreatitis severity was sent to all surgeons, gastroenterologists, internists, intensivists, and radiologists who are currently active in clinical research on acute pancreatitis. The invitation was not limited to members of certain associations or residents of certain countries. A global Web-based survey was conducted and a dedicated international symposium was organized to bring contributors from different disciplines together and discuss the concept and definitions. The new international classification is based on the actual local and systemic determinants of severity, rather than description of events that are correlated with severity. The local determinant relates to whether there is (peri)pancreatic necrosis or not, and if present, whether it is sterile or infected. The systemic determinant relates to whether there is organ failure or not, and if present, whether it is transient or persistent. The presence of one determinant can modify the effect of another such that the presence of both infected (peri)pancreatic necrosis and persistent organ failure have a greater effect on severity than either determinant alone. The derivation of a classification based on the above principles results in 4 categories of severity-mild, moderate, severe, and critical. This classification is the result of a consultative process amongst pancreatologists from 49 countries spanning North America, South America, Europe, Asia, Oceania, and Africa. 

  12. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling marine oily wastewater treatment processes. It begins by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter to determine its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
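
    The probability-governed reactions can be illustrated with a Bernoulli-firing loop, together with the root-mean-square error used to compare modeled and observed rates. Reaction names and probabilities below are hypothetical placeholders, not the paper's 11 calibrated reactions.

```python
import random

def simulate(reactions, steps, seed=42):
    """Each step, every reaction fires independently with its probability;
    returns per-reaction firing counts (schematic stand-in for the paper's
    8-agent-type, 11-reaction model)."""
    rng = random.Random(seed)
    counts = {name: 0 for name in reactions}
    for _ in range(steps):
        for name, p in reactions.items():
            if rng.random() < p:       # Bernoulli trial governs occurrence
                counts[name] += 1
    return counts

def rmse(modeled, observed):
    """Root mean square error between modeled and observed series."""
    return (sum((m - o) ** 2 for m, o in zip(modeled, observed))
            / len(modeled)) ** 0.5
```

    In a calibration loop, a genetic algorithm would adjust the per-reaction probabilities to minimize exactly this RMSE.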

  13. Tissue classification for laparoscopic image understanding based on multispectral texture analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena

    2016-03-01

    Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

  14. Hazard Classification of Household Chemical Products in Korea according to the Globally Harmonized System of Classification and Labeling of Chemicals.

    PubMed

    Kim, Kyung-Hee; Song, Dae-Jong; Yu, Myeong-Hyun; Park, Yuon-Shin; Noh, Hye-Ran; Kim, Hae-Joon; Choi, Jae-Wook

    2013-07-16

    This study was conducted to review the validity of the need for the application of the Globally Harmonized System of Classification and Labeling of Chemicals (GHS) to household chemical products in Korea. The study also aimed to assess the severity of health and environmental hazards of household chemical products using the GHS. 135 products were classified as 'cleaning agents and polishing agents' and 98 products were classified as 'bleaches, disinfectants, and germicides.' The current status of GHS carcinogenicity classification was examined for 272 chemical substances contained in household chemical products by selecting the top 11 products for each of the product categories. In addition, the degree of toxicity was assessed through analysis of whether the standard of the Republic of Korea's regulations on household chemical products had been exceeded or not. According to GHS health and environmental hazards, "acute toxicity (oral)" was found to be the highest for the two product groups, 'cleaning agents and polishing agents' and 'bleaches, disinfectants, and germicides' (result of classification of 233 household chemical products), at 37.8% and 52.0%, respectively. In an analysis of carcinogenicity assuming a threshold of IARC 2B for the substances in household chemical products, we found 'cleaning agents and polishing agents' to contain 12 chemical substances and 'bleaches, disinfectants, and germicides' 11 chemical substances. Some of the household chemical products were found to have a high hazard level, including acute toxicity, germ cell mutagenicity, carcinogenicity, and reproductive toxicity. Establishing a hazard information delivery system, including the application of GHS to household chemical products in Korea, is therefore urgent.

  15. Stability of subsystem solutions in agent-based models

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2018-01-01

    The fact that relatively simple entities, such as particles or neurons, or even ants or bees or humans, give rise to fascinatingly complex behaviour when interacting in large numbers is the hallmark of complex systems science. Agent-based models are frequently employed for modelling and obtaining a predictive understanding of complex systems. Since the sheer number of equations that describe the behaviour of an entire agent-based model often makes it impossible to solve such models exactly, Monte Carlo simulation methods must be used for the analysis. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among agents that describe systems in biology, sociology or the humanities often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. This begets the question: when can we be certain that an observed simulation outcome of an agent-based model is actually stable and valid in the large system-size limit? The latter is key for the correct determination of phase transitions between different stable solutions, and for the understanding of the underlying microscopic processes that led to these phase transitions. We show that a satisfactory answer can only be obtained by means of a complete stability analysis of subsystem solutions. A subsystem solution can be formed by any subset of all possible agent states. The winner between two subsystem solutions can be determined by the average moving direction of the invasion front that separates them, yet it is crucial that the competing subsystem solutions are characterised by a proper composition and spatiotemporal structure before the competition starts. We use the spatial public goods game with diverse tolerance as an example, but the approach has relevance for a wide variety of agent-based models.

  16. CW-SSIM kernel based random forest for image classification

    NASA Astrophysics Data System (ADS)

    Fan, Guangzhe; Wang, Zhou; Wang, Jiheng

    2010-07-01

    Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compare our approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that, even without a feature selection procedure, the proposed approach outperforms traditional methods.

  17. Object-Based Random Forest Classification of Land Cover from Remotely Sensed Imagery for Industrial and Mining Reclamation

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.

    2018-04-01

    The RF method based on grid-search parameter optimization achieved a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. Combining the RF method with an object-based analysis approach improved the classification accuracy further. The multiresolution segmentation approach, with ESP-based scale-parameter optimization, was used to obtain six scales for image segmentation; at a segmentation scale of 49, the classification accuracy reached its highest value of 89.58 %. The accuracy of object-based RF classification was thus 1.42 % higher than that of pixel-based classification (88.16 %). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, interpretation of remotely sensed imagery using the proposed method can provide technical support and a theoretical reference for remote sensing-based monitoring of land reclamation.

  18. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  19. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.

  20. Research on monocentric model of urbanization by agent-based simulation

    NASA Astrophysics Data System (ADS)

    Xue, Ling; Yang, Kaizhong

    2008-10-01

    Over the past years, GIS has been widely used for modeling urbanization from a variety of perspectives, such as digital terrain representation and overlay analysis using a cell-based data platform. Similarly, simulation of urban dynamics has been achieved with the use of Cellular Automata. In contrast to these approaches, agent-based simulation provides a much more powerful set of tools, allowing researchers to set up a computational counterpart of real environmental and urban systems for experimentation and scenario analysis. This paper reviews research on the economic mechanisms of urbanization, and an agent-based monocentric model is set up to further understand the urbanization process and its mechanisms in China. We build an endogenous growth model with dynamic interactions between spatial agglomeration and urban development using agent-based simulation. It simulates the migration decisions of two main types of agents, rural and urban households, between rural and urban areas. The model contains multiple economic interactions that are crucial to understanding urbanization and industrialization in China. The adaptive agents adjust their supply and demand according to the market situation via a learning algorithm. The simulation results show that this agent-based urban model is able to reproduce observed patterns and to produce plausible projections of reality.

  1. Agent-based simulation of a financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
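
    A minimal sketch of such a market, assuming pairwise matching of random orders at the current price and a price drift proportional to order imbalance (the paper's actual trading mechanism is more realistic): because there is no money-creation process, total cash and total shares are conserved.

```python
import random

def simulate_market(n_agents=50, steps=200, cash0=1000.0, shares0=100,
                    p0=10.0, seed=1):
    """Toy closed artificial market: agents place random buy/sell orders
    constrained by their cash and holdings, matched pairwise at the
    current price, which then drifts with the order imbalance."""
    rng = random.Random(seed)
    agents = [{"cash": cash0, "shares": shares0} for _ in range(n_agents)]
    price, history = p0, []
    for _ in range(steps):
        buyers = [i for i in range(n_agents)
                  if rng.random() < 0.5 and agents[i]["cash"] >= price]
        bset = set(buyers)
        sellers = [i for i in range(n_agents) if i not in bset
                   and rng.random() < 0.5 and agents[i]["shares"] > 0]
        for b, s in zip(buyers, sellers):   # one share per matched pair
            agents[b]["cash"] -= price; agents[b]["shares"] += 1
            agents[s]["cash"] += price; agents[s]["shares"] -= 1
        price *= 1 + 0.001 * (len(buyers) - len(sellers))  # excess demand
        history.append(price)
    return agents, history

agents, history = simulate_market()
```

    Reproducing stylized facts such as leptokurtic return distributions and volatility clustering requires the richer order-book and herding mechanisms of the paper; this sketch only shows the resource-constrained, conservation-respecting core.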

  2. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    NASA Astrophysics Data System (ADS)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
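
    The coupling loop, in which agents observe the hydrological state and the groundwater model then advances under the resulting pumping stresses, can be sketched with stand-in classes. These classes and all parameter values are invented simplifications for illustration; they are not the pyNetLogo or FloPy APIs.

```python
class CropAgents:
    """Stand-in for a NetLogo agent model: each agent pumps groundwater
    only while the head stays above that agent's personal cutoff."""
    def __init__(self, cutoffs, rate=1.0):
        self.cutoffs, self.rate = cutoffs, rate

    def decide(self, head):
        # Total pumping demand given the current groundwater head
        return sum(self.rate for c in self.cutoffs if head > c)

class Aquifer:
    """Stand-in for a MODFLOW/SEAWAT groundwater model: head declines
    with pumping and partially recovers through recharge each step."""
    def __init__(self, head=10.0, recharge=0.5, response=0.2):
        self.head, self.recharge, self.response = head, recharge, response

    def step(self, pumping):
        self.head += self.recharge - self.response * pumping
        return self.head

def run_coupled(agents, aquifer, steps):
    """One feedback loop per step: agents observe hydrology, hydrology
    responds to the agents' aggregate decision."""
    heads = []
    for _ in range(steps):
        pumping = agents.decide(aquifer.head)
        heads.append(aquifer.step(pumping))
    return heads
```

    In the real architecture the `decide` call would be a batch of NetLogo commands/reporters issued through pyNetLogo, and `step` would advance a MODFLOW/SEAWAT model via FloPy; the loop structure is the part that carries over.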

  3. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithm improvements alone. To this end, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software developed at Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, feature knowledge and related spectral indices are derived from the hyperspectral imagery; second, an nDSM (Normalized DSM, Normalized Digital Surface Model) is generated from the Lidar data to obtain elevation information; finally, the image feature knowledge, spectral indices, and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, performing especially well on building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration and application of multi-source spatial data knowledge.
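    The nDSM step subtracts bare-earth elevation from the surface model, isolating object heights (buildings, trees) as an elevation feature. A minimal sketch on toy grids:

    ```python
    def ndsm(dsm, dtm):
        """Normalized DSM: per-cell height above ground (DSM minus terrain)."""
        return [[s - t for s, t in zip(srow, trow)]
                for srow, trow in zip(dsm, dtm)]

    dsm = [[12.0, 15.0], [10.5, 18.0]]   # surface elevations (with objects)
    dtm = [[10.0, 10.0], [10.0, 10.0]]   # bare-earth elevations
    print(ndsm(dsm, dtm))                # → [[2.0, 5.0], [0.5, 8.0]]
    ```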

  4. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    PubMed Central

    Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong

    2018-01-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm because of the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by model complexity, which is closely tied to the number of free parameters and can lead to heavy overfitting. To decrease complexity and improve generalization, we present a novel l1-norm-based approach that directly combines the decision values obtained from each EEG channel. By extracting information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, reducing overfitting. Moreover, an effective and efficient solution for minimizing the optimization objective is proposed. Experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of motor imagery (MI) EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems. PMID:29867307
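    As a rough illustration of l1-regularized combination of per-channel decision values (not the paper's actual FB/CSP pipeline), a proximal-gradient lasso on invented synthetic decisions might look like:

    ```python
    def soft_threshold(z, t):
        """Proximal operator of the l1 penalty: shrink z toward zero by t."""
        return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

    def l1_combine(decisions, labels, lam=0.1, lr=0.01, iters=2000):
        """Learn sparse weights w minimizing sum_i (w.x_i - y_i)^2 + lam*||w||_1
        by proximal gradient descent; sparsity drops uninformative channels."""
        n_ch = len(decisions[0])
        w = [0.0] * n_ch
        for _ in range(iters):
            grad = [0.0] * n_ch
            for x, y in zip(decisions, labels):
                err = sum(wj * xj for wj, xj in zip(w, x)) - y
                for j in range(n_ch):
                    grad[j] += 2.0 * err * x[j]
            w = [soft_threshold(wj - lr * gj, lr * lam)
                 for wj, gj in zip(w, grad)]
        return w

    # invented data: channel 0 carries the label, channel 1 is noise,
    # channel 2 is constant
    X = [[1.0, 0.3, 1.0], [-1.0, 0.2, 1.0], [1.0, -0.4, 1.0], [-1.0, 0.1, 1.0]]
    y = [1.0, -1.0, 1.0, -1.0]
    w = l1_combine(X, y)   # weight on channel 0 dominates
    ```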

  5. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video-based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue in this task is developing an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classifier for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.

  6. Demeter, persephone, and the search for emergence in agent-based models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Howe, T. R.; Collier, N. T.

    2006-01-01

    In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.

  7. A classification model of Hyperion image based on SAM combined with a decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space; as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable. Traditional classification algorithms therefore struggle with hyperspectral images, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra: it determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space whose dimensionality equals the number of bands. The key difficulty is that the SAM threshold must be defined manually, and classification precision depends on how reasonably that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model that combines SAM with a decision tree; it automatically chooses an appropriate SAM threshold, based on analysis of field spectra, to improve SAM's classification precision. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil, and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, thereby improving classification precision compared with maximum likelihood classification validated against field survey data.
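    The SAM rule described above computes the angle arccos(a·b / (‖a‖‖b‖)) between a pixel spectrum and each reference spectrum, then assigns the pixel to the closest reference within a threshold. A minimal sketch with invented toy spectra:

    ```python
    import math

    def spectral_angle(a, b):
        """Spectral Angle Mapper: angle (radians) between two spectra."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

    def classify(pixel, references, threshold):
        """Assign pixel to the nearest reference class if within threshold."""
        angles = {name: spectral_angle(pixel, ref)
                  for name, ref in references.items()}
        best = min(angles, key=angles.get)
        return best if angles[best] <= threshold else "unclassified"

    refs = {"soil": [0.2, 0.3, 0.4], "forest": [0.05, 0.3, 0.1]}  # toy spectra
    print(classify([0.21, 0.28, 0.41], refs, threshold=0.1))      # → soil
    ```

    The paper's contribution is choosing `threshold` automatically via a decision tree rather than fixing it by hand as done here.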

  8. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph, which makes the algorithm useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems; an extended derivation of the algorithm enables it to solve general shortest path problems.

  9. The fractional volatility model: An agent-based interpretation

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and/or properties of the financial institutions might be responsible for the features of the fractional volatility model.

  10. Natural Language Processing Based Instrument for Classification of Free Text Medical Records

    PubMed Central

    2016-01-01

    According to the Ministry of Labor, Health and Social Affairs of Georgia, a new health management system is to be introduced in the near future. In this context arises the problem of structuring and classifying documents containing the full history of medical services provided. The present work introduces an instrument for the classification of Georgian-language medical records; it is the first attempt at such classification for records in Georgian. In total, 24,855 examination records were studied. The documents were classified into three main groups (ultrasonography, endoscopy, and X-ray) and 13 subgroups using two well-known methods: Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). The results obtained demonstrated that both machine learning methods performed successfully, with a slight advantage for SVM. In the classification process a "shrink" method, based on feature selection, was introduced and applied. At the first stage of classification the results of the "shrink" case were better; however, at the second stage of classification into subclasses, 23% of all documents could not be linked to a single definite subclass (liver or biliary system) because of common features characterizing these subclasses. The overall results of the study were successful. PMID:27668260
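    As a rough illustration of the KNN side of such a pipeline, a bag-of-words cosine-similarity KNN is sketched below with invented English training snippets; the Georgian-language preprocessing, the "shrink" feature selection, and the SVM are all omitted.

    ```python
    import math
    from collections import Counter

    def vectorize(text):
        """Bag-of-words term counts."""
        return Counter(text.lower().split())

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u if t in v)
        return dot / (math.sqrt(sum(c * c for c in u.values())) *
                      math.sqrt(sum(c * c for c in v.values())))

    def knn_classify(doc, labeled_docs, k=3):
        """Majority label among the k most similar training documents."""
        vec = vectorize(doc)
        sims = sorted(((cosine(vec, vectorize(d)), lab)
                       for d, lab in labeled_docs), reverse=True)
        top = [lab for _, lab in sims[:k]]
        return Counter(top).most_common(1)[0][0]

    train = [("liver ultrasound scan normal echo", "ultrasonography"),
             ("gastric endoscopy biopsy taken", "endoscopy"),
             ("chest x-ray no infiltrate", "x-ray"),
             ("abdominal ultrasound gallbladder echo", "ultrasonography")]
    print(knn_classify("ultrasound of liver with normal echo", train, k=3))
    # → ultrasonography
    ```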

  11. Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification

    PubMed Central

    Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.

    2016-01-01

    Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
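    The paper trains a decision fusion model and uses EM to locate discriminative patches; the sketch below replaces both with a much simpler confidence-filtered majority vote, just to illustrate the aggregation step (class names and probabilities are invented):

    ```python
    from collections import Counter

    def fuse_patch_predictions(patch_probs, threshold=0.5):
        """Toy decision fusion: keep patches whose top-class confidence
        exceeds a threshold (a crude stand-in for discriminative-patch
        selection), then majority-vote over the surviving patch labels."""
        votes = []
        for probs in patch_probs:
            label = max(probs, key=probs.get)
            if probs[label] >= threshold:      # discard ambiguous patches
                votes.append(label)
        if not votes:                          # fall back to all patches
            votes = [max(p, key=p.get) for p in patch_probs]
        return Counter(votes).most_common(1)[0][0]

    patches = [{"glioma": 0.9, "lung": 0.1},
               {"glioma": 0.52, "lung": 0.48},   # weakly discriminative
               {"glioma": 0.2, "lung": 0.8},
               {"glioma": 0.85, "lung": 0.15}]
    print(fuse_patch_predictions(patches))   # → glioma
    ```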

  12. An immunity-based anomaly detection system with sensor agents.

    PubMed

    Okamoto, Takeshi; Ishida, Yoshiteru

    2009-01-01

    This paper proposes an immunity-based anomaly detection system with sensor agents based on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user, and multiple diverse agents decide whether that behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, whereas the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system that is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in users' command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems.

  13. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires continual updates of the Mental Health Ontology. In this paper, we describe how ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four types of agents: Information agents, Data Warehouse agents, Data Mining agents, and Ontology agents. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various collaborative mental health research experiments, which are greatly needed in times of increasing mental distress and illness.

  14. Caries-removal effectiveness of a papain-based chemo-mechanical agent: A quantitative micro-CT study.

    PubMed

    Neves, Aline A; Lourenço, Roseane A; Alves, Haimon D; Lopes, Ricardo T; Primo, Laura G

    2015-01-01

    The aim of this study was to assess the effectiveness and specificity of a papain-based chemo-mechanical caries-removal agent in leaving minimal residual caries after cavity preparation. To this end, extracted carious molars were selected and scanned in a micro-CT before and after caries-removal procedures with the papain-based gel, using similar acquisition and reconstruction parameters between scans. After classifying the dentin substrate based on mineral density intervals and establishing a carious tissue threshold, volumetric parameters related to effectiveness (mineral density of the removed dentin volume and of residual dentin tissue) and specificity (the relation between carious dentin in the removed volume and the initial caries) of the caries-removal agent were obtained. In general, the removed dentin volume was similar to or higher than the initial carious volume, indicating that the method effectively removed dentin tissue. Samples with almost perfect accuracy in carious dentin removal also showed increased removal of caries-affected tissue; conversely, less or no affected dentin was removed in samples where some carious tissue was left in residual dentin. Mineral density values in residual dentin were always higher than or similar to the threshold for carious dentin. In conclusion, the papain-based gel was effective in removing carious dentin up to a conservative in vitro threshold. Lesion characteristics, such as activity and the morphology of the enamel lesion, may also influence the caries-removal properties of the method. © Wiley Periodicals, Inc.

  15. The addition of entropy-based regularity parameters improves sleep stage classification based on heart rate variability.

    PubMed

    Aktaruzzaman, M; Migliorini, M; Tenhunen, M; Himanen, S L; Bianchi, A M; Sassi, R

    2015-05-01

    The work considers automatic sleep stage classification based on heart rate variability (HRV) analysis, with a focus on distinguishing wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99% of the data variance in each classification problem, and both contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4%, 2 epochs; 83.8%, 10 epochs) was higher than for WAKE versus SLEEP (67.6%, 2 epochs; 71.3%, 10 epochs), and the reliability parameter (Cohen's kappa) was also higher (0.68 and 0.45, respectively). Sleep staging based on HRV was still less precise than other staging methods, which employ a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.
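    The abstract does not name the regularity parameters used; sample entropy is one common entropy-based regularity measure for RR series, sketched here with conventional (assumed) values of m and r. Lower values indicate a more regular signal.

    ```python
    import math

    def sample_entropy(x, m=2, r=0.2):
        """Naive O(n^2) sample entropy: -ln(A/B), where B counts pairs of
        length-m templates within tolerance r (Chebyshev distance) and A
        counts pairs of length-(m+1) templates."""
        n_templates = len(x) - m          # same template count for m and m+1
        def count_matches(length):
            count = 0
            for i in range(n_templates):
                for j in range(i + 1, n_templates):
                    if max(abs(a - b) for a, b in
                           zip(x[i:i + length], x[j:j + length])) <= r:
                        count += 1
            return count
        b = count_matches(m)
        a = count_matches(m + 1)
        if a == 0 or b == 0:
            return float("inf")           # undefined for too-irregular input
        return -math.log(a / b)

    regular = [0.0, 1.0] * 20
    print(sample_entropy(regular))        # ≈ 0 for a perfectly regular series
    ```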

  16. Oscillatory neural network for pattern recognition: trajectory based classification and supervised learning.

    PubMed

    Miller, Vonda H; Jansen, Ben H

    2008-12-01

    Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme in its ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning in an oscillatory neural network model, where classification is accomplished using a trajectory-based distance metric for category discrimination. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.

  17. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  18. Polarization-based material classification technique using passive millimeter-wave polarimetric imagery.

    PubMed

    Hu, Fei; Cheng, Yayun; Gui, Liangqi; Wu, Liang; Zhang, Xinyi; Peng, Xiaohui; Su, Jinlong

    2016-11-01

    The polarization properties of thermal millimeter-wave emission capture inherent information about objects, e.g., material composition, shape, and surface features. In this paper, a polarization-based material-classification technique using passive millimeter-wave polarimetric imagery is presented. The linear polarization ratio (LPR) is introduced as a new feature discriminator that is sensitive to material type and removes the effect of reflected ambient radiation. The LPR characteristics of several common natural and artificial materials are investigated through theoretical and experimental analysis. Based on a priori information about LPR characteristics, the optimal range of incident angles and the classification criterion are discussed. Simulation and measurement results indicate that the presented classification technique is effective for distinguishing between metals and dielectrics, suggesting possible applications for outdoor metal-target detection in open scenes.

  19. Towards an agent-oriented programming language based on Scala

    NASA Astrophysics Data System (ADS)

    Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran

    2012-09-01

    Scala and its actor-based multi-threaded model represent an excellent framework for developing purely reactive agents. This paper presents early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced, BDI agent architectures. The main advantage of the new language over many other existing solutions for programming BDI agents is a natural and straightforward integration of imperative and declarative programming constructs, fitted under a single development framework.

  20. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  1. Chinese wine classification system based on micrograph using combination of shape and structure features

    NASA Astrophysics Data System (ADS)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded by their micrographs. Micrographs of Chinese wines show floccules, sticks, and granules of varying shape and size; because different wines have distinct microstructures and micrographs, we study the classification of Chinese wines based on micrographs. The shape and structure of the wine particles in the microstructure are the most important features for recognition and classification, so we introduce a feature extraction method that efficiently describes the structure and region shape of a micrograph. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in this paper, based on area, perimeter, and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using the combined shape and structure features and a BP neural network, is presented. We compare recognition results for different choices of features (traditional shape features versus the proposed features), and the experimental results show that a better classification rate is achieved using the combined features proposed in this paper.
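    The paper uses a modified Otsu's method based on the Rayleigh distribution; for reference, the sketch below is the standard Otsu threshold, which picks the gray level maximizing between-class variance (the toy histogram is invented):

    ```python
    def otsu_threshold(pixels, levels=256):
        """Standard Otsu: the gray level maximizing between-class variance."""
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total = len(pixels)
        total_sum = sum(i * h for i, h in enumerate(hist))
        best_t, best_var = 0, -1.0
        w0 = cum = 0.0
        for t in range(levels - 1):
            w0 += hist[t]
            cum += t * hist[t]
            if w0 == 0 or w0 == total:
                continue
            mu0 = cum / w0                           # mean of the dark class
            mu1 = (total_sum - cum) / (total - w0)   # mean of the bright class
            var = w0 * (total - w0) * (mu0 - mu1) ** 2
            if var > best_var:
                best_var, best_t = var, t
        return best_t        # gray levels <= best_t form one class

    # toy bimodal image: dark cluster {0, 1}, bright cluster {10, 11}
    print(otsu_threshold([0]*5 + [1]*5 + [10]*5 + [11]*5))   # → 1
    ```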

  2. Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)

    NASA Astrophysics Data System (ADS)

    Tazeh, mahdi; Kalantari, Saeideh

    2013-04-01

    This research focuses on plains. Several methods and classifications have been presented for plains. One natural-resource-based classification, widely used in Iran, divides plains into three types: erosional pediment, denudation pediment, and aggradational piedmont; qualitative and quantitative factors are used to differentiate them from each other. In this study, geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted from a digital elevation model using mathematical equations and the corresponding relations. The parameters used in this study included percent of slope, plan curvature, profile curvature, minimum curvature, maximum curvature, cross-sectional curvature, longitudinal curvature, and Gaussian curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification are percent of slope, minimum curvature, profile curvature, and longitudinal curvature. Key words: plain, geomorphometry, classification, biophysical, Yazd Khezarabad.
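    Percent of slope, the first parameter listed, can be extracted from a DEM with central differences; a minimal sketch (the DEM values and unit cell size are invented):

    ```python
    import math

    def slope_percent(dem, cell=1.0):
        """Percent slope from a DEM via central differences (interior cells;
        border cells are left at zero for simplicity)."""
        rows, cols = len(dem), len(dem[0])
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
                dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
                out[i][j] = 100.0 * math.sqrt(dzdx ** 2 + dzdy ** 2)
        return out

    # tilted plane rising 2 m per cell eastward → uniform 200% interior slope
    dem = [[2.0 * j for j in range(4)] for _ in range(4)]
    print(slope_percent(dem)[1][1])   # → 200.0
    ```

    The various curvature parameters are second-order derivatives of the same surface and can be obtained analogously from second differences.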

  3. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    PubMed

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results under experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control; the most obvious and important is noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, the inherent instability of the signal, and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction in classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and found a strong correlation (correlation coefficient > 0.9) between shifted-position data and normal-position data.

  4. Data-driven agent-based modeling, with application to rooftop solar adoption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua

    Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends and provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income households. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.
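    The data-driven ABM pattern, learning an individual adoption model offline and embedding it in simulated agents, can be sketched as follows. The logistic form, the coefficient values, and the peer-effect feature are illustrative assumptions, not the paper's learned model:

    ```python
    import math
    import random

    def adoption_prob(features, coefs, intercept):
        """Logistic adoption model; in the paper the coefficients are learned
        from adoption and property-assessment data (values here are made up)."""
        z = intercept + sum(c * f for c, f in zip(coefs, features))
        return 1.0 / (1.0 + math.exp(-z))

    def simulate(agents, coefs, intercept, n_steps=20, seed=7):
        random.seed(seed)
        adopted = [False] * len(agents)
        counts = []
        for _ in range(n_steps):
            peer = sum(adopted) / len(agents)     # simple peer-effect feature
            for i, feats in enumerate(agents):
                if not adopted[i] and random.random() < adoption_prob(
                        feats + [peer], coefs, intercept):
                    adopted[i] = True
            counts.append(sum(adopted))
        return counts                             # cumulative adoption curve

    random.seed(0)
    agents = [[random.uniform(0.0, 1.0)] for _ in range(100)]  # income proxy
    counts = simulate(agents, coefs=[1.0, 2.0], intercept=-4.0)
    ```

    A subsidy policy could be represented by shifting the intercept for targeted agents and re-running the simulation to compare aggregate adoption curves.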

  6. The DTW-based representation space for seismic pattern classification

    NASA Astrophysics Data System (ADS)

    Orozco-Alzate, Mauricio; Castro-Cabrera, Paola Alexandra; Bicego, Manuele; Londoño-Bonilla, John Makario

    2015-12-01

    Distinguishing among the different seismic volcanic patterns is still one of the most important and labor-intensive tasks for volcano monitoring. This task could be lightened and made free from subjective bias by using automatic classification techniques. In this context, a core but often overlooked issue is the choice of an appropriate representation of the data to be classified. Recently, it has been suggested that using a relative representation (i.e. proximities, namely dissimilarities on pairs of objects) instead of an absolute one (i.e. features, namely measurements on single objects) is advantageous to exploit the relational information contained in the dissimilarities to derive highly discriminant vector spaces, where any classifier can be used. According to that motivation, this paper investigates the suitability of a dynamic time warping (DTW) dissimilarity-based vector representation for the classification of seismic patterns. Results show the usefulness of such a representation in the seismic pattern classification scenario, including analyses of potential benefits from recent advances in the dissimilarity-based paradigm such as the proper selection of representation sets and the combination of different dissimilarity representations that might be available for the same data.
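    DTW itself is the core primitive of this representation; below is a standard dynamic-programming implementation, plus the dissimilarity-vector construction the paper advocates (distances of one signal to a representation set):

    ```python
    def dtw(a, b):
        """Dynamic time warping distance via the standard O(len(a)*len(b)) DP."""
        n, m = len(a), len(b)
        d = [[float("inf")] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j],       # insertion
                                     d[i][j - 1],       # deletion
                                     d[i - 1][j - 1])   # match
        return d[n][m]

    def dissimilarity_vector(x, prototypes):
        """Represent a signal by its DTW distances to a representation set;
        any standard classifier can then operate on these vectors."""
        return [dtw(x, p) for p in prototypes]

    print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))   # → 0.0 (shift absorbed)
    ```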

  7. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
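    The LINDA-style tuple space communication described above can be sketched as follows. This is a minimal in-memory illustration; the paper's tuple space is layered on a database, may be distributed, and carries an object layer on top.

```python
class TupleSpace:
    """Minimal Linda-style tuple space (in-memory sketch only)."""

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Write a tuple into the space (Linda 'out')."""
        self._tuples.append(tuple(tup))

    def _match(self, tup, pattern):
        # None acts as a wildcard field in the pattern
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(self, pattern):
        """Read, without removing, the first tuple matching the pattern (Linda 'rd')."""
        for t in self._tuples:
            if self._match(t, pattern):
                return t
        return None

    def take(self, pattern):
        """Remove and return the first matching tuple (Linda 'in')."""
        for i, t in enumerate(self._tuples):
            if self._match(t, pattern):
                return self._tuples.pop(i)
        return None
```

    Knowledge agents can then coordinate by writing facts into the space and pattern-matching on them, independently of which agent produced them.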

  8. Comparing administered and market-based water allocation systems using an agent-based modeling approach

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Cai, X.; Wang, Z.

    2009-12-01

    It has been well recognized that market-based systems can have significant advantages over administered systems for water allocation. However, there are still few successful water markets around the world, and administered systems remain common in water allocation practice. This paradox has been under discussion for decades and still calls for attention in both research and practice. This paper explores some insights into the paradox and tries to address why market systems have not been widely implemented for water allocation. Adopting the theory of agent-based systems, we develop a consistent analytical model to interpret both systems. First, we derive some theorems from the analytical model with respect to the necessary conditions for economic efficiency of water allocation. Following that, the agent-based model is used to illustrate the coherence and differences between administered and market-based systems. The two systems are compared from three aspects: 1) the driving forces acting on the system state, 2) system efficiency, and 3) equity. Regarding economic efficiency, a penalty on the violation of water use permits (or rights) under an administered system can lead to system-wide economic efficiency while remaining acceptable to some agents, following the theory of so-called rational violation. Ideal equity will be realized if the penalty equals the incentive under an administered system, and if transaction costs are zero under a market system. The performances of both the agents and the overall system are explained for an administered system and a market system, respectively. The performances of agents are subject to the different mechanisms of interaction between agents under the two systems. The system emergence (i.e., system benefit, equilibrium market price, etc.), resulting from the performance at the agent level, reflects the different mechanisms of the two systems: the “invisible hand” under the market system and administrative measures (penalty

  9. Applications of agent-based modeling to nutrient movement Lake Michigan

    EPA Science Inventory

    As part of an ongoing project aiming to provide useful information for nearshore management (harmful algal blooms, nutrient loading), we explore the value of agent-based models in Lake Michigan. Agent-based models follow many individual “agents” moving through a simul...

  10. Uav-Based Crops Classification with Joint Features from Orthoimage and Dsm Data

    NASA Astrophysics Data System (ADS)

    Liu, B.; Shi, Y.; Duan, Y.; Wu, W.

    2018-04-01

    Accurate crop classification remains a challenging task due to the phenomenon of the same crop exhibiting different spectra and different crops sharing the same spectrum. Recently, the UAV-based remote sensing approach has gained popularity not only for its high spatial and temporal resolution, but also for its ability to obtain spectral and spatial data at the same time. This paper focuses on how to take full advantage of spatial and spectral features to improve crop classification accuracy, based on a UAV platform equipped with a general digital camera. Texture and spatial features extracted from the RGB orthoimage and the digital surface model of the monitoring area are analysed and integrated within an SVM classification framework. Extensive experimental results indicate that the overall classification accuracy is drastically improved from 72.9 % to 94.5 % when the spatial features are combined, which verifies the feasibility and effectiveness of the proposed method.

  11. [Land cover classification of Four Lakes Region in Hubei Province based on MODIS and ENVISAT data].

    PubMed

    Xue, Lian; Jin, Wei-Bin; Xiong, Qin-Xue; Liu, Zhang-Yong

    2010-03-01

    Based on the differences of backscattering coefficients in ENVISAT ASAR data, a classification was made of the towns, waters, and vegetation-covered areas in the Four Lakes Region of Hubei Province. According to the local cropping systems and phenological characteristics in the region, and by using the discrepancies of the MODIS-NDVI index from late April to early May, the vegetation-covered areas were classified into croplands and non-croplands. The classification results based on the above-mentioned procedure were verified against the classification results based on the ETM data with high spatial resolution. Based on the DEM data, the non-croplands were categorized into forest land and bottomland; and based on the discrepancies of the mean NDVI index per month, the crops were identified as mid rice, late rice, and cotton, and the croplands were identified as paddy field and upland field. The land cover classification based on the MODIS data with low spatial resolution was basically consistent with that based on the ETM data with high spatial resolution, and the total error rate was about 13.15% when the classification results based on ETM data were taken as the standard. The above-mentioned procedures could enable fast tracking of regional land cover classification and mapping at large scales.

  12. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
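    The LBP texture extraction used in the first stage can be illustrated as below. This is a basic 8-neighbour sketch only; the paper's exact LBP variant (radius, uniform patterns, histogramming) is not specified here and is an assumption.

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c) of a 2-D list.

    Each neighbour contributes one bit: 1 if its value is >= the centre pixel.
    """
    center = img[r][c]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

    A histogram of such codes over an image patch is the usual LBP texture descriptor that would then be stacked with the spectral features.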

  13. Optical beam classification using deep learning: a comparison with rule- and feature-based classification

    NASA Astrophysics Data System (ADS)

    Alom, Md. Zahangir; Awwal, Abdul A. S.; Lowe-Webb, Roger; Taha, Tarek M.

    2017-08-01

    Vector Machine (SVM). The experimental results show around 96% classification accuracy using CNN; the CNN approach also provides recognition results comparable to the present feature-based off-normal detection. The feature-based solution was developed to capture the expertise of a human expert in classifying the images. The misclassified results are further studied to explain the differences and discover any discrepancies or inconsistencies in the current classification.

  14. Classification of forensic autopsy reports through conceptual graph-based document representation model.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2018-06-01

    Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation. In document representation, a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent, rather than discriminative, features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD. In addition, the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques.
Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results.

  15. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. For this task, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
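    Wavelet-based texture features of the kind described can be sketched via a one-level Haar transform and subband energies. This is an illustrative simplification: the authors' 2-D wavelet choice, decomposition depth, and gray-level quantization are not reproduced here.

```python
import math

def haar_step(x):
    """One level of the 1-D Haar wavelet transform.

    Returns (approximation, detail) coefficients; len(x) must be even.
    """
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def subband_energy(coeffs):
    """Energy of a subband: a common wavelet texture feature."""
    return sum(c * c for c in coeffs)
```

    The orthonormal Haar transform preserves total energy, so the subband energies partition the signal energy into scale-dependent texture descriptors.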

  16. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms must be considered in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
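    The retrieval-style classification idea can be sketched as below. This is a toy illustration under an assumed overlap-based score; the paper's token-level memory, sampling strategy, and scoring are more involved.

```python
import random

def srs_classify(doc_tokens, labeled_docs, sample_size, seed=0):
    """Classify a document by token overlap against a simple random sample
    of the labeled collection.

    doc_tokens: set of tokens in the new document.
    labeled_docs: list of (token_set, label) pairs.
    """
    rng = random.Random(seed)
    sample = rng.sample(labeled_docs, min(sample_size, len(labeled_docs)))
    scores = {}
    for tokens, label in sample:
        # retrieval-style score: shared tokens vote for the stored label
        overlap = len(doc_tokens & tokens)
        scores[label] = scores.get(label, 0) + overlap
    return max(scores, key=scores.get)
```

    Sampling a fixed-size subset instead of scanning the whole collection is what bounds the space-time overhead as the labeled collection grows.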

  17. A Characteristics-Based Approach to Radioactive Waste Classification in Advanced Nuclear Fuel Cycles

    NASA Astrophysics Data System (ADS)

    Djokic, Denia

    The radioactive waste classification system currently used in the United States primarily relies on a source-based framework. This has led to numerous issues, such as wastes that are not categorized by their intrinsic risk, or wastes that do not fall under a category within the framework and are therefore without a legal imperative for responsible management. Furthermore, in the possible case that advanced fuel cycles were to be deployed in the United States, the shortcomings of the source-based classification system would be exacerbated: advanced fuel cycles implement processes such as the separation of used nuclear fuel, which introduce new waste streams of varying characteristics. To manage and dispose of these potential new wastes properly, development of a classification system that would assign an appropriate level of management to each type of waste based on its physical properties is imperative. This dissertation explores how characteristics of wastes generated from potential future nuclear fuel cycles could be coupled with a characteristics-based classification framework. A static mass flow model developed under the Department of Energy's Fuel Cycle Research & Development program, called the Fuel-cycle Integration and Tradeoffs (FIT) model, was used to calculate the composition of waste streams resulting from different nuclear fuel cycle choices: two modified open fuel cycle cases (recycle in MOX reactor) and two different continuous-recycle fast reactor recycle cases (oxide and metal fuel fast reactors). This analysis focuses on the impact of waste heat load on waste classification practices, although future work could involve coupling waste heat load with metrics of radiotoxicity and longevity. The value of separating heat-generating fission products and actinides in different fuel cycles, and how it could inform long- and short-term disposal management, is discussed. It is shown that the benefits of reducing the short-term fission

  18. B-tree search reinforcement learning for model based intelligent agent

    NASA Astrophysics Data System (ADS)

    Bhuvaneswari, S.; Vignashwaran, R.

    2013-03-01

    Agents trained by learning techniques provide a powerful approximation of active solutions for naive approaches. In this study, B-tree search with reinforcement learning is used to moderate the data search for information retrieval, achieving accuracy with minimum search time. The impact of the variables and tactics applied in training is determined using reinforcement learning. Agents based on these techniques perform satisfactorily against the baseline and act as finite agents based on the predetermined model against competitors from the course.

  19. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The attribute of an apple according to its geographical origin is often recognized and appreciated by consumers, and is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were applied to discriminate apples according to geographical origin. Hyperspectral images of 207 Fuji apple samples were collected by a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main efficient wavelength images, and then characteristic variables were extracted by texture analysis based on the gray level co-occurrence matrix (GLCM) from the dominant waveband image. All characteristic variables were obtained by fusing the data of images in the efficient spectra. A support vector machine (SVM) was used to construct the classification model, which showed excellent classification performance. The classification accuracy reached 92.75% in the training set and 89.86% in the prediction set, respectively. The overall results demonstrate that the hyperspectral imaging technique coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples according to geographical origin.
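    The GLCM texture variables mentioned above can be illustrated with a minimal co-occurrence computation. Only one pixel offset and one Haralick feature (contrast) are shown; the paper's choice of offsets, gray levels, and feature set is an assumption here.

```python
def glcm(img, levels, dr=0, dc=1):
    """Gray-level co-occurrence matrix of a 2-D integer image for one
    offset (default: horizontal right-neighbour)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    return m

def contrast(m):
    """GLCM contrast, one of the classic Haralick texture features."""
    total = sum(sum(row) for row in m) or 1
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total
```

    Features such as contrast, computed per waveband image, would then be fused into the characteristic-variable vector fed to the SVM.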

  20. Feature-based classification of amino acid substitutions outside conserved functional protein domains.

    PubMed

    Gemovic, Branislava; Perovic, Vladimir; Glisic, Sanja; Veljkovic, Nevena

    2013-01-01

    There are more than 500 amino acid substitutions in each human genome, and bioinformatics tools irreplaceably contribute to the determination of their functional effects. We have developed a feature-based algorithm for the detection of mutations outside conserved functional domains (CFDs) and compared its classification efficacy with the most commonly used phylogeny-based tools, PolyPhen-2 and SIFT. The new algorithm is based on the informational spectrum method (ISM), a feature-based technique, and statistical analysis. Our dataset contained neutral polymorphisms and mutations associated with myeloid malignancies from the epigenetic regulators ASXL1, DNMT3A, EZH2, and TET2. PolyPhen-2 and SIFT had significantly lower accuracies in predicting the effects of amino acid substitutions outside CFDs than expected, with especially low sensitivity. On the other hand, only the ISM algorithm showed statistically significant classification of these sequences. It outperformed PolyPhen-2 and SIFT by 15% and 13%, respectively. These results suggest that feature-based methods, like ISM, are more suitable for the classification of amino acid substitutions outside CFDs than phylogeny-based tools.

  1. Expected energy-based restricted Boltzmann machine for classification.

    PubMed

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
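    The distinction between the free-energy and expected-energy outputs can be written out under the standard binary RBM energy. The notation below is generic and may differ from the paper's; it is a sketch of the two quantities being compared, not the authors' exact formulation.

```latex
% Standard RBM energy over the joint input--class visible vector v and hidden vector h:
E(\mathbf{v},\mathbf{h}) = -\mathbf{v}^{\top} W \mathbf{h}
                           - \mathbf{b}^{\top}\mathbf{v}
                           - \mathbf{c}^{\top}\mathbf{h}

% FE-RBM output: the negative free energy of the visible configuration
-F(\mathbf{v}) = \log \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}

% EE-RBM output: the negative expected energy under the hidden posterior
-\langle E \rangle(\mathbf{v}) = -\sum_{\mathbf{h}} p(\mathbf{h} \mid \mathbf{v})\, E(\mathbf{v},\mathbf{h})
```

    Both are scalar scores of a (input, class) visible configuration; classification picks the class vector yielding the highest score.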

  2. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm that can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.

  3. Atmospheric circulation classification comparison based on wildfires in Portugal

    NASA Astrophysics Data System (ADS)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, promoting atmospheric circulation classification into one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because the daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved to be quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  4. Adaptive video-based vehicle classification technique for monitoring traffic.

    DOT National Transportation Integrated Search

    2015-08-01

    This report presents a methodology for extracting two vehicle features, vehicle length and number of axles, in order to classify vehicles from video, based on the Federal Highway Administration (FHWA)'s recommended vehicle classification scheme....

  5. Feature selection for elderly faller classification based on wearable sensors.

    PubMed

    Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-05-30

    Wearable sensors can be used to derive numerous gait pattern features for elderly fall risk and faller classification; however, an appropriate feature set is required to avoid high computational costs and the inclusion of irrelevant features. The objectives of this study were to identify and evaluate smaller feature sets for faller classification from large feature sets derived from wearable accelerometer and pressure-sensing insole gait data. A convenience sample of 100 older adults (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, left and right shanks. Feature selection was performed using correlation-based feature selection (CFS), fast correlation based filter (FCBF), and Relief-F algorithms. Faller classification was performed using multi-layer perceptron neural network, naïve Bayesian, and support vector machine classifiers, with 75:25 single stratified holdout and repeated random sampling. The best performing model was a support vector machine with 78% accuracy, 26% sensitivity, 95% specificity, 0.36 F1 score, and 0.31 MCC and one posterior pelvis accelerometer input feature (left acceleration standard deviation). The second best model achieved better sensitivity (44%) and used a support vector machine with 74% accuracy, 83% specificity, 0.44 F1 score, and 0.29 MCC. This model had ten input features: maximum, mean and standard deviation posterior acceleration; maximum, mean and standard deviation anterior acceleration; mean superior acceleration; and three impulse features. The best multi-sensor model sensitivity (56%) was achieved using posterior pelvis and both shank accelerometers and a naïve Bayesian classifier. The best single-sensor model sensitivity (41%) was achieved using the posterior pelvis accelerometer and a naïve Bayesian classifier. Feature selection provided models with smaller feature

  6. Structural MRI-based detection of Alzheimer's disease using feature ranking and classification error.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Farokhian, Farnaz; Yang, Chunlan; Matsuda, Hiroshi

    2016-12-01

    This paper presents an automatic computer-aided diagnosis (CAD) system based on feature ranking for detection of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) data. The proposed CAD system is composed of four systematic stages. First, global and local differences in the gray matter (GM) of AD patients compared to the GM of healthy controls (HCs) are analyzed using a voxel-based morphometry technique. The aim is to identify significant local differences in the volume of GM as volumes of interest (VOIs). Second, the voxel intensity values of the VOIs are extracted as raw features. Third, the raw features are ranked using seven feature-ranking methods, namely, statistical dependency (SD), mutual information (MI), information gain (IG), Pearson's correlation coefficient (PCC), t-test score (TS), Fisher's criterion (FC), and the Gini index (GI). The features with higher scores are more discriminative. To determine the number of top features, the estimated classification error based on a training set made up of the AD and HC groups is calculated, with the vector size that minimizes this error selected as the number of top discriminative features. Fourth, the classification is performed using a support vector machine (SVM). In addition, a data fusion approach among the feature-ranking methods is introduced to improve the classification performance. The proposed method is evaluated using a dataset from ADNI (130 AD and 130 HC) with 10-fold cross-validation. The classification accuracy of the proposed automatic system for the diagnosis of AD is up to 92.48% using the sMRI data. An automatic CAD system for the classification of AD based on feature-ranking methods and classification errors is proposed. In this regard, seven feature-ranking methods (i.e., SD, MI, IG, PCC, TS, FC, and GI) are evaluated. The optimal number of top discriminative features is determined by the classification error estimation in the training phase. 
The experimental results indicate that
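    One of the listed ranking criteria, Pearson's correlation coefficient (PCC), can be sketched as follows. The function names are hypothetical; the CAD system's actual pipeline includes six further criteria, fusion across them, and the error-based cutoff selection described above.

```python
import math

def pearson_scores(features, labels):
    """Rank feature indices by |Pearson correlation| with the class labels.

    features: list of feature columns (one list of values per feature);
    labels: numeric class labels (e.g. 0 = HC, 1 = AD).
    Returns indices sorted from most to least discriminative.
    """
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    return sorted(range(len(features)),
                  key=lambda i: abs(corr(features[i], labels)),
                  reverse=True)
```

    The classifier would then be trained on increasing prefixes of this ranking, keeping the prefix length that minimizes the estimated training-set error.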

  7. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and transductive support vector machine (semi-supervised), are invaluable tools for the applications of content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. It is inappropriate for the traditional concatenating schema to link features of different views into a long vector, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with the existing strategies, our approach adopts the high-order distance obtained from the hypergraph to replace pairwise distance in estimating the probability matrix of data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternative optimization is designed to solve the objective functions of HD-MSL and obtain the view coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.

  8. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

    Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and the geological working conditions are very poor. However, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various geological information are mined. By setting thresholds based on hierarchical classification, eight kinds of geological information were classified and extracted. Compared with the existing geological maps, the accuracy analysis shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  9. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  11. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
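
    The routing step described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the cosine-similarity measure, and the threshold are assumptions for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def route_document(doc_vector, cluster_centroids, threshold=0.3):
    """Return the index of the best-matching cluster, or None if no
    centroid is similar enough (the caller would then create a new
    cluster agent to manage the document)."""
    scores = [cosine(doc_vector, c) for c in cluster_centroids]
    best = max(range(len(scores)), key=scores.__getitem__) if scores else None
    if best is None or scores[best] < threshold:
        return None  # signal: spawn a new cluster agent
    return best
```

    In the patent's terms, the multiplexing agent would call such a routine with the new document vector, and a `None` result corresponds to the master cluster agent creating a new cluster agent.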

  12. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches to the segmentation and classification of remotely sensed images yield more promising results than pixel-based approaches. However, developing an object-based approach presents challenges in algorithm selection and parameter tuning. Subjective methods are often used but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications such as earthquake damage assessment. Herein, we used a systematic approach to evaluate object-based image segmentation and machine learning algorithms for classifying earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery of the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of segmentation and classification for different classes and compare different levels of multistep image segmentation. Our classifier is compared against recent pixel-based and object-based classification studies of post-event earthquake damage imagery. Our results show an improvement over both pixel-based and object-based methods for classifying earthquake damage in high-resolution post-event imagery.
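
    A toy stand-in for such an object-based pipeline can make the idea concrete: a crude grid "segmentation" and nearest-centroid labelling substitute for the segmentation and machine learning algorithms actually evaluated in the study, and all names are illustrative.

```python
import numpy as np

def segment_features(image, block=4):
    """Split a 2-D image into block x block 'objects' and compute
    simple per-object features (mean, std) -- a crude stand-in for
    a real segmentation step."""
    h, w = image.shape
    feats, coords = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block]
            feats.append([patch.mean(), patch.std()])
            coords.append((i, j))
    return np.array(feats), coords

def nearest_centroid(feats, centroids):
    # Label each object by its closest class centroid (Euclidean).
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```

    A real object-based workflow would replace the grid with a multistep segmentation and the centroids with a trained classifier, which is exactly where the study's parameter-tuning challenge arises.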

  13. Image search engine with selective filtering and feature-element-based classification

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

    With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. In this paper we propose a selective filtering process and a novel feature-element-based approach to image classification in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format, using two parameters that can be obtained easily. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements better capture the visual meaning of an image according to human subjective perception. Unlike traditional image classification methods, our feature-element-based approach does not calculate distances between vectors in feature space, but instead tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  14. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a medical image classification algorithm based on multi-scale non-negative sparse coding. First, medical images are decomposed into multiple scale layers so that diverse visual details can be extracted from different scales. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. The resulting multi-scale non-negative sparse coding features are then combined into a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform the classification. The experimental results demonstrate that the proposed algorithm can effectively exploit multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance.
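
    The per-scale coding step can be illustrated with a minimal non-negative sparse coding routine. This sketch uses projected ISTA and omits the Fisher discriminative term and the multi-scale decomposition; the function names and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nn_sparse_code(x, D, lam=0.1, steps=200):
    """Non-negative sparse coding of signal x over dictionary D
    (columns are atoms): minimize 0.5*||x - D a||^2 + lam*sum(a)
    subject to a >= 0, via projected ISTA."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(steps):
        grad = D.T @ (D @ a - x)    # gradient of the quadratic term
        # Gradient step, l1 shrinkage, and projection onto a >= 0.
        a = np.maximum(0.0, a - step * (grad + lam))
    return a
```

    In the paper's pipeline, codes like `a` would be computed per scale layer and pooled into the multi-scale histogram fed to the SVM.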

  15. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high-resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications, including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high-resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River, representing three dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2, with additional intensity and encoded RGB information. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. A supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters; 15 homogeneous training areas representing each cover class were chosen, and classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented classification was carried out in eCognition software, which implements both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to

  16. Semi-supervised vibration-based classification and condition monitoring of compressors

    NASA Astrophysics Data System (ADS)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring in which prior class definitions are often not available or are difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated in a case study based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, very good classification performance can be obtained from NN trained by Bayesian regularization, and from SVM and ELM classifiers. The method can be effectively applied to the industrial condition monitoring of compressors.
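
    A minimal sketch of the front end of such a pipeline, assuming simple time-domain features and SVD-based PCA; the feature set and function names are illustrative, not the paper's.

```python
import numpy as np

def vib_features(signal):
    # Simple time-domain features of one vibration record.
    s = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(s ** 2))
    peak = np.max(np.abs(s))
    kurt = np.mean((s - s.mean()) ** 4) / (s.var() ** 2 + 1e-12)
    return np.array([rms, peak, kurt])

def pca_fit(X, k=2):
    # PCA via SVD; returns the mean and the top-k principal directions.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, comps):
    # Project feature rows onto the principal directions.
    return (X - mu) @ comps.T
```

    In the paper, the projected features would feed the statistical extraction of initial class representatives and then the DA/NN/SVM/ELM classifiers being compared.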

  17. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    NASA Astrophysics Data System (ADS)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have been made to further improve remote sensing over recent decades, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a flexible model that describes homogeneous, heterogeneous, and extremely heterogeneous regions in an image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with few parameters. To prove its feasibility, the method was tested on a fully polarized Synthetic Aperture Radar (SAR) image. First, multilook polarimetric SAR processing and speckle filtering are applied to reduce the influence of speckle on the classification result. The image is then initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm refines the classification based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.

  18. Analysis of A Drug Target-based Classification System using Molecular Descriptors.

    PubMed

    Lu, Jing; Zhang, Pin; Bi, Yi; Luo, Xiaomin

    2016-01-01

    Drug-target interaction is an important topic in drug discovery and drug repositioning. The KEGG database offers drug annotation and classification using a target-based classification system. In this study, we investigated five target-based classes: (I) G protein-coupled receptors; (II) nuclear receptors; (III) ion channels; (IV) enzymes; (V) pathogens, using molecular descriptors to represent each drug compound. Two popular feature selection methods, maximum relevance minimum redundancy (mRMR) and incremental feature selection, were adopted to extract the important descriptors. An optimal prediction model based on the nearest neighbor algorithm was then constructed, which achieved the best result in identifying drug target-based classes. Finally, some key descriptors are discussed to uncover their important roles in the identification of drug-target classes.
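
    A rough sketch of the selection-plus-classification idea, with the caveat that true mRMR uses mutual information; here absolute Pearson correlation is substituted as a cheap proxy, and the 1-NN function stands in for the paper's nearest neighbor model. All names are illustrative.

```python
import numpy as np

def rank_features(X, y, k=2):
    """Greedily select k features by relevance minus redundancy,
    using absolute Pearson correlation as a proxy for the mutual
    information used in real mRMR."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(rel.argmax())]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Mean redundancy of candidate j with already-selected features.
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = rel[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

def knn1(Xtr, ytr, x):
    # 1-nearest-neighbour prediction by Euclidean distance.
    d = np.linalg.norm(Xtr - x, axis=1)
    return ytr[int(d.argmin())]
```

    Incremental feature selection would then sweep over prefix lengths of the ranked list and keep the size that maximizes classification accuracy.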

  19. A Computerized English-Spanish Correlation Index to Five Biomedical Library Classification Schemes Based on MeSH*

    PubMed Central

    Muench, Eugene V.

    1971-01-01

    A computerized English/Spanish correlation index to five biomedical library classification schemes and computerized English/Spanish and Spanish/English listings of MeSH are described. The index was compiled by supplying the appropriate classification numbers from five classification schemes (National Library of Medicine; Library of Congress; Dewey Decimal; Cunningham; Boston Medical) to MeSH and a Spanish translation of MeSH. The data were keypunched, merged on magnetic tape, and sorted by computer alphabetically by English and Spanish subject headings and sequentially by classification number. Some benefits and uses of the index are: a complete index to classification schemes based on MeSH terms; a tool for converting classification numbers when reclassifying collections; a Spanish index and a crude Spanish translation of five classification schemes; and a data base for future applications, e.g., automatic classification. Other classification schemes, such as the UDC, and translations of MeSH into other languages can be added. PMID:5172471

  20. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  1. Treatment-Based Classification versus Usual Care for Management of Low Back Pain

    DTIC Science & Technology

    2017-10-01

    AWARD NUMBER: W81XWH-11-1-0657. TITLE: Treatment-Based Classification versus Usual Care for Management of Low Back Pain. PRINCIPAL INVESTIGATOR: MAJ Daniel Rhon (daniel_rhon@baylor.edu).

  2. Ecosystem classifications based on summer and winter conditions.

    PubMed

    Andrew, Margaret E; Nelson, Trisalyn A; Wulder, Michael A; Hobart, George W; Coops, Nicholas C; Farmer, Carson J Q

    2013-04-01

    Ecosystem classifications map an area into relatively homogenous units for environmental research, monitoring, and management. However, their effectiveness is rarely tested. Here, three classifications are (1) defined and characterized for Canada along summertime productivity (moderate-resolution imaging spectrometer fraction of absorbed photosynthetically active radiation) and wintertime snow conditions (special sensor microwave/imager snow water equivalent), independently and in combination, and (2) comparatively evaluated to determine the ability of each classification to represent the spatial and environmental patterns of alternative schemes, including the Canadian ecozone framework. All classifications depicted similar patterns across Canada, but detailed class distributions differed. Class spatial characteristics varied with environmental conditions within classifications, but were comparable between classifications. There was moderate correspondence between classifications. The strongest association was between productivity classes and ecozones. The classification along both productivity and snow balanced these two sets of variables, yielding intermediate levels of association in all pairwise comparisons. Despite relatively low spatial agreement between classifications, they successfully captured patterns of the environmental conditions underlying alternate schemes (e.g., snow classes explained variation in productivity and vice versa). The performance of ecosystem classifications and the relevance of their input variables depend on the environmental patterns and processes used for applications and evaluation. Productivity or snow regimes, as constructed here, may be desirable when summarizing patterns controlled by summer- or wintertime conditions, respectively, or of climate change responses. General purpose ecosystem classifications should include both sets of drivers. Classifications should be carefully, quantitatively, and comparatively evaluated

  3. Agent-Based Crowd Simulation Considering Emotion Contagion for Emergency Evacuation Problem

    NASA Astrophysics Data System (ADS)

    Faroqi, H.; Mesgari, M.-S.

    2015-12-01

    During emergencies, emotions greatly affect human behaviour. For more realistic multi-agent simulations of emergency evacuations, it is important to incorporate emotions and their effects on the agents. In short, emotional contagion is a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes. In this study, we simulate an emergency in an open square area with three exits, with Adult and Child agents exhibiting different behaviors. Security agents are also included to guide the Adults and Children to the exits and to calm them. Six emotion levels are considered for each agent across different scenarios and situations. The agent-based model is initialized by randomly scattering the agent populations; when an alarm occurs, each agent reacts to the situation based on its own and its neighbors' current circumstances. The main goal of each agent is first to find an exit and then to help other agents find their way. The numbers of exited and injured agents, along with their emotion levels, are compared across scenarios with different initializations in order to evaluate the simulated model. NetLogo 5.2 is used as the multi-agent simulation framework, with R as the development language.
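
    The contagion rule can be sketched in a few lines. The study used NetLogo, so the Python below is only an illustrative analogue; the neighbourhood radius, contagion rate, and agent attributes are assumptions for the sketch.

```python
import random

class Agent:
    def __init__(self, x, y, kind):
        self.x, self.y, self.kind = x, y, kind
        # Emotion level 0 (calm) .. 5 (panicked); children start more anxious.
        self.emotion = random.randint(2, 5) if kind == "child" else random.randint(0, 3)

def step(agents, radius=2.0, rate=0.5):
    """One contagion step: each agent's emotion moves toward the mean
    emotion of its neighbours; security agents stay calm, pulling
    nearby agents' emotions down."""
    new_levels = []
    for a in agents:
        nbrs = [b for b in agents if b is not a
                and (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= radius ** 2]
        if a.kind == "security" or not nbrs:
            new_levels.append(0 if a.kind == "security" else a.emotion)
            continue
        target = sum(b.emotion for b in nbrs) / len(nbrs)
        new_levels.append(a.emotion + rate * (target - a.emotion))
    # Update synchronously so all agents react to the same old state.
    for a, e in zip(agents, new_levels):
        a.emotion = min(5.0, max(0.0, e))
```

    A full model would add movement toward exits and the alarm trigger; this fragment shows only how a calm Security agent damps a neighbour's panic over successive steps.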

  4. LiDAR point classification based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Nan; Pfeifer, Norbert; Liu, Chun

    2017-04-01

    To combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. SRC needs only a few training samples from each class and still achieves good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point; the LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes, each called a mode: three spatial modes in directions X, Y, Z and one feature mode. The sparsity algorithm seeks the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To explore the sparsity of the LiDAR tensor, Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode; these matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structure dictionary along each mode is built; the overall structure dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by the tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode.
    It is expected that the original tensor should be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with
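
    The Tucker decomposition this abstract relies on can be sketched as a generic higher-order SVD (HOSVD); this is not the paper's dictionary-learning variant, and the function names are illustrative.

```python
import numpy as np

def mode_product(T, M, mode):
    # n-mode product: multiply matrix M into tensor T along `mode`.
    Tm = np.moveaxis(T, mode, 0)
    shape = Tm.shape
    out = (M @ Tm.reshape(shape[0], -1)).reshape((M.shape[0],) + shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Tucker decomposition via higher-order SVD: returns the core
    tensor and one factor matrix per mode (columns are the leading
    left singular vectors of the mode-n unfolding)."""
    factors = []
    for n, r in enumerate(ranks):
        unfold = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for n, U in enumerate(factors):
        core = mode_product(core, U.T, n)
    return core, factors

def reconstruct(core, factors):
    # Multiply the core tensor back by each factor matrix.
    T = core
    for n, U in enumerate(factors):
        T = mode_product(T, U, n)
    return T
```

    With full ranks the reconstruction is exact; truncating the ranks, or replacing the factor matrices with learned class-specific sub-dictionaries, gives the kind of sparse approximation the paper builds on.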

  5. Context-based automated defect classification system using multiple morphological masks

    DOEpatents

    Gleason, Shaun S.; Hunt, Martin A.; Sari-Sarraf, Hamed

    2002-01-01

    Automatic detection of defects during the fabrication of semiconductor wafers is largely automated, but the classification of those defects is still performed manually by technicians. This invention includes novel digital image analysis techniques that generate unique feature vector descriptions of semiconductor defects as well as classifiers that use these descriptions to automatically categorize the defects into one of a set of pre-defined classes. Feature extraction techniques based on multiple-focus images, multiple-defect mask images, and segmented semiconductor wafer images are used to create unique feature-based descriptions of the semiconductor defects. These feature-based defect descriptions are subsequently classified by a defect classifier into categories that depend on defect characteristics and defect contextual information, that is, the semiconductor process layer(s) with which the defect comes in contact. At the heart of the system is a knowledge database that stores and distributes historical semiconductor wafer and defect data to guide the feature extraction and classification processes. In summary, this invention takes as its input a set of images containing semiconductor defect information, and generates as its output a classification for the defect that describes not only the defect itself, but also the location of that defect with respect to the semiconductor process layers.

  6. Agent-based models in translational systems biology

    PubMed Central

    An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram

    2013-01-01

    Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989

  7. CATS-based Air Traffic Controller Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision-making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, this research seeks to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision-making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support the evaluation of new air traffic control (ATC) methods supporting new Air Traffic Management (ATM) concepts.
Current approaches that use large, human

  8. Research on Production Scheduling System with Bottleneck Based on Multi-agent

    NASA Astrophysics Data System (ADS)

    Zhenqiang, Bao; Weiye, Wang; Peng, Wang; Pan, Quanke

    To address the imbalance of resource capacity in production scheduling systems, this paper uses a previously constructed multi-agent production scheduling system and exploits the dynamic, autonomous nature of agents to solve the bottleneck problem in scheduling dynamically. First, a Bottleneck Resource Agent is used to find the bottleneck resource in the production line, the inherent mechanism of the bottleneck is analysed, and the production scheduling process based on the bottleneck resource is described. A Bottleneck Decomposition Agent then harmonizes the relationship between job arrival times and transfer times across the Bottleneck Resource Agent and the Non-Bottleneck Resource Agents, so the dynamic scheduling problem is simplified to single-machine scheduling for each resource taking part in the scheduling. In this way, the dynamic real-time scheduling problem is solved effectively in the production scheduling system.
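
    The bottleneck-identification idea can be sketched as follows. The load-based bottleneck test and the shortest-processing-time sequencing are illustrative assumptions for the sketch, not the agents' actual negotiation protocol.

```python
def find_bottleneck(jobs):
    """jobs: list of {resource: processing_time} dicts, one per job.
    Treat the resource with the largest total load as the bottleneck."""
    load = {}
    for ops in jobs:
        for res, t in ops.items():
            load[res] = load.get(res, 0) + t
    return max(load, key=load.get), load

def schedule_bottleneck(jobs, bottleneck):
    """Sequence the bottleneck operations by shortest processing time
    (a common single-machine heuristic) and return (job, start, end)
    triples; non-bottleneck resources would be scheduled around this."""
    ops = sorted((ops[bottleneck], i) for i, ops in enumerate(jobs)
                 if bottleneck in ops)
    t, plan = 0, []
    for dur, i in ops:
        plan.append((i, t, t + dur))
        t += dur
    return plan
```

    In the paper's architecture, the first function corresponds roughly to the Bottleneck Resource Agent's role and the second to the single-machine subproblem each resource solves after the Bottleneck Decomposition Agent fixes arrival and transfer times.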

  9. Chronic Heart Failure Follow-up Management Based on Agent Technology.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza

    2015-10-01

    Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest areas of artificial intelligence; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. The proposed multi-agent system has the ability to learn and thus improve itself. Implementing this model with more and varied interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making.

  10. Chronic Heart Failure Follow-up Management Based on Agent Technology

    PubMed Central

    Safdari, Reza

    2015-01-01

    Objectives Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest areas of artificial intelligence; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. Methods This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Results Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. Conclusions The proposed multi-agent system has the ability to learn and thus improve itself. Implementing this model with more and varied interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making. PMID:26618038

  11. Agent-based modelling in synthetic biology.

    PubMed

    Gorochowski, Thomas E

    2016-11-30

    Biological systems exhibit complex behaviours that emerge at many different levels of organization. These span the regulation of gene expression within single cells to the use of quorum sensing to co-ordinate the action of entire bacterial colonies. Synthetic biology aims to make the engineering of biology easier, offering an opportunity to control natural systems and develop new synthetic systems with useful prescribed behaviours. However, in many cases, it is not understood how individual cells should be programmed to ensure the emergence of a required collective behaviour. Agent-based modelling aims to tackle this problem, offering a framework in which to simulate such systems and explore cellular design rules. In this article, I review the use of agent-based models in synthetic biology, outline the available computational tools, and provide details on recently engineered biological systems that are amenable to this approach. I further highlight the challenges facing this methodology and some of the potential future directions. © 2016 The Author(s).

  12. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

    With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problem of polysemous words and synonyms, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. In this paper we carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model. The method enables the accuracy of bag-of-words-based classification to be further improved. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.

  13. Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara

    2016-08-01

    In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on Support Vector Machines (SVM), a robust machine learning method that has found widespread use, is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia, using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
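The record gives no implementation details, but the core of training a margin-based linear classifier on labelled TEC-derived feature vectors can be sketched in a few lines. The toy data and the Pegasos-style sub-gradient solver below are illustrative assumptions, not the authors' actual pipeline (which likely uses a library kernel SVM):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent for a linear SVM.
    X: list of feature vectors, y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # hinge loss active: step toward the point
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:           # hinge loss inactive: only shrink w (regularization)
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical features: "quiet" days cluster near (1, 1), "disturbed" near (3, 3).
X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8], [3.0, 3.1], [2.9, 3.2], [3.1, 2.8]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
```

A production system would use an established SVM library with cross-validated hyper-parameters; the point here is only the hinge-loss update at the heart of SVM training.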

  14. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  15. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are processed with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types: over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.

  16. Cost-effectiveness of a classification-based system for sub-acute and chronic low back pain.

    PubMed

    Apeldoorn, Adri T; Bosmans, Judith E; Ostelo, Raymond W; de Vet, Henrica C W; van Tulder, Maurits W

    2012-07-01

    Identifying relevant subgroups in patients with low back pain (LBP) is considered important to guide physical therapy practice and to improve outcomes. The aim of the present study was to assess the cost-effectiveness of a modified version of Delitto's classification-based treatment approach compared with usual physical therapy care in patients with sub-acute and chronic LBP, with 1-year follow-up. All patients were classified using the modified version of Delitto's classification-based system and then randomly assigned to receive either classification-based treatment or usual physical therapy care. The main clinical outcomes measured were: global perceived effect, intensity of pain, functional disability and quality of life. Costs were measured from a societal perspective. Multiple imputation was used for missing data. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios was estimated using bootstrapping. Cost-effectiveness planes and cost-effectiveness acceptability curves were estimated. In total, 156 patients were included. The outcome analyses showed a significantly better outcome on global perceived effect favoring the classification-based approach, and no differences between the groups on pain, disability and quality-adjusted life-years. Mean total societal costs were 2,287 for the classification-based group and 2,020 for the usual physical therapy care group. The difference was 266 (95% CI -720 to 1,612) and not statistically significant. Cost-effectiveness analyses showed that the classification-based approach was not cost-effective in comparison with usual physical therapy care for any clinical outcome measure. The classification-based treatment approach as used in this study was not cost-effective in comparison with usual physical therapy care in a population of patients with sub-acute and chronic LBP.

  17. HYDROLOGIC REGIME CLASSIFICATION OF LAKE MICHIGAN COASTAL RIVERINE WETLANDS BASED ON WATERSHED CHARACTERISTICS

    EPA Science Inventory

    Classification of wetlands systems is needed not only to establish reference condition, but also to predict the relative sensitivity of different wetland classes. In the current study, we examined the potential for ecoregion- versus flow-based classification strategies to explain...

  18. Agent-Based Model Approach to Complex Phenomena in Real Economy

    NASA Astrophysics Data System (ADS)

    Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.

    An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. Dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit with possible risks in market. Infinite growth of a firm directed by the ``profit maximization" principle is suppressed by a concept of ``going concern". Possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decision. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.

  19. Agent-Based Learning Environments as a Research Tool for Investigating Teaching and Learning.

    ERIC Educational Resources Information Center

    Baylor, Amy L.

    2002-01-01

    Discusses intelligent learning environments for computer-based learning, such as agent-based learning environments, and their advantages over human-based instruction. Considers the effects of multiple agents; agents and research design; the use of Multiple Intelligent Mentors Instructing Collaboratively (MIMIC) for instructional design for…

  20. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and the correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools in integration can provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We illustrate the framework by investigating the effects of various resampling methods on classification accuracy and find that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  1. Pain expressiveness and altruistic behavior: an exploration using agent-based modeling.

    PubMed

    de C Williams, Amanda C; Gallagher, Elizabeth; Fidalgo, Antonio R; Bentley, Peter J

    2016-03-01

    Predictions which invoke evolutionary mechanisms are hard to test. Agent-based modeling in artificial life offers a way to simulate behaviors and interactions in specific physical or social environments over many generations. The outcomes have implications for understanding the adaptive value of behaviors in context. Pain-related behavior in animals is communicated to other animals that might protect or help, or might exploit or predate. An agent-based model simulated the effects of displaying or not displaying pain (expresser/nonexpresser strategies) when injured and of helping, ignoring, or exploiting another in pain (altruistic/nonaltruistic/selfish strategies). Agents modeled in MATLAB interacted at random while foraging (gaining energy); random injury interrupted foraging for a fixed time unless help from an altruistic agent, who paid an energy cost, speeded recovery. Environmental and social conditions also varied, and each model ran for 10,000 iterations. Findings were meaningful in that, in general, contingencies that are evident from experimental work with a variety of mammals over a few interactions were replicated in the agent-based model after selection pressure over many generations. More energy-demanding expression of pain reduced its frequency in successive generations, and increasing injury frequency resulted in fewer expressers and altruists. Allowing exploitation of injured agents decreased expression of pain to near zero, but altruists remained. Decreasing costs or increasing benefits of helping hardly changed its frequency, whereas increasing the interaction rate between injured agents and helpers diminished the benefits to both. Agent-based modeling allows simulation of complex behaviors and environmental pressures over evolutionary time.
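As a toy illustration of the kind of simulation described (not the authors' MATLAB model), the sketch below pairs agents at random, injures them stochastically, and lets altruists pay an energy cost to speed an expresser's recovery. The parameter values, and the omission of generational selection, are my own simplifications:

```python
import random

def run_pain_model(n_agents=100, steps=2000, expr_cost=0.2, help_cost=1.0,
                   help_benefit=3.0, injury_p=0.02, seed=1):
    """Toy expresser/altruist simulation: agents forage for energy, are
    randomly injured, and recover faster when a paired altruist responds
    to an expressed pain display."""
    rng = random.Random(seed)
    agents = [{"expresser": rng.random() < 0.5,
               "altruist": rng.random() < 0.5,
               "energy": 10.0, "injured": 0} for _ in range(n_agents)]
    for _ in range(steps):
        rng.shuffle(agents)
        for a, partner in zip(agents[::2], agents[1::2]):  # random pairing
            for me, other in ((a, partner), (partner, a)):
                if me["injured"] > 0:
                    if me["expresser"]:
                        me["energy"] -= expr_cost          # display is costly
                        if other["altruist"]:
                            other["energy"] -= help_cost   # helper pays...
                            me["energy"] += help_benefit   # ...injured recovers
                            me["injured"] = 0
                    me["injured"] = max(0, me["injured"] - 1)
                else:
                    me["energy"] += 1.0                    # foraging gain
                    if rng.random() < injury_p:
                        me["injured"] = 5                  # out of action 5 ticks
    return agents

agents = run_pain_model()
mean_energy = sum(a["energy"] for a in agents) / len(agents)
```

A faithful replication would add reproduction proportional to accumulated energy so that strategy frequencies change across generations, which is where the paper's selection-pressure results come from.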

  2. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model is used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.

  3. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents the classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for the classification process. The result of feature extraction is a set of feature vectors. Feature reduction used 100 energy values per feature and 10 energy values per feature. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for the classification of brain cancer.
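The pipeline described (wavelet decomposition followed by per-band energy features) can be illustrated with the simplest wavelet, the Haar transform. The 1-D setting and the sample values are illustrative assumptions, since the paper works on 2-D images and does not specify the wavelet family:

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Decompose a signal and return the energy of each detail band plus the
    final approximation band -- the kind of per-band energy values used as
    a feature vector for a classifier."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(d * d for d in detail))
    feats.append(sum(a * a for a in approx))   # energy of final approximation
    return feats

row = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # e.g. one image row
features = wavelet_energy_features(row, levels=3)   # [3.0, 10.0, 1.0, 49.0]
```

Such energy vectors would then be fed to the ANMBP network; the signal length must be divisible by 2 at every level for this simple sketch.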

  4. 8 CFR 204.306 - Classification as an immediate relative based on a Convention adoption.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Classification as an immediate relative....306 Classification as an immediate relative based on a Convention adoption. (a) Unless 8 CFR 204.309 requires the denial of a Form I-800A or Form I-800, a child is eligible for classification as an immediate...

  5. An automatic graph-based approach for artery/vein classification in retinal images.

    PubMed

    Dashtbozorg, Behdad; Mendonça, Ana Maria; Campilho, Aurélio

    2014-03-01

    The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.

  6. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable to disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a more robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.

  7. Rock classification based on resistivity patterns in electrical borehole wall images

    NASA Astrophysics Data System (ADS)

    Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph

    2007-06-01

    Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on studying the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. The supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassification for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by the application of wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification based on Haralick features and wavelet transformation improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
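The Haralick features named in this record (contrast, energy, entropy, homogeneity) are simple sums over a normalised grey-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset, with a made-up 4-level toy image standing in for a borehole image patch:

```python
import math

def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel offset, normalised so
    the entries form a joint probability distribution over grey-level pairs."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def haralick(p):
    """Contrast, energy, entropy and homogeneity of a normalised GLCM p."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log(v) for row in p for v in row if v > 0)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return {"contrast": contrast, "energy": energy,
            "entropy": entropy, "homogeneity": homogeneity}

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
feats = haralick(glcm(img))
```

In practice these features are computed per window and offset direction and then fed to the supervised classifier; the wavelet-filtered images described in the abstract would simply be additional inputs to the same feature computation.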

  8. Agent-Based Framework for Personalized Service Provisioning in Converged IP Networks

    NASA Astrophysics Data System (ADS)

    Podobnik, Vedran; Matijasevic, Maja; Lovrek, Ignac; Skorin-Kapov, Lea; Desic, Sasa

    In a global multi-service and multi-provider market, the Internet Service Providers will increasingly need to differentiate in the service quality they offer and base their operation on new, consumer-centric business models. In this paper, we propose an agent-based framework for the Business-to-Consumer (B2C) electronic market, comprising the Consumer Agents, Broker Agents and Content Agents, which enable Internet consumers to select a content provider in an automated manner. We also discuss how to dynamically allocate network resources to provide end-to-end Quality of Service (QoS) for a given consumer and content provider.

  9. Genetics-Based Classification of Filoviruses Calls for Expanded Sampling of Genomic Sequences

    PubMed Central

    Lauber, Chris; Gorbalenya, Alexander E.

    2012-01-01

    We have recently developed a computational approach for hierarchical, genome-based classification of viruses of a family (DEmARC). In DEmARC, virus clusters are delimited objectively by devising a universal family-wide threshold on intra-cluster genetic divergence of viruses that is specific for each level of the classification. Here, we apply DEmARC to a set of 56 filoviruses with complete genome sequences and compare the resulting classification to the ICTV taxonomy of the family Filoviridae. We find in total six candidate taxon levels two of which correspond to the species and genus ranks of the family. At these two levels, the six filovirus species and two genera officially recognized by ICTV, as well as a seventh tentative species for Lloviu virus and prototyping a third genus, are reproduced. DEmARC lends the highest possible support for these two as well as the four other levels, implying that the actual number of valid taxon levels remains uncertain and the choice of levels for filovirus species and genera is arbitrary. Based on our experience with other virus families, we conclude that the current sampling of filovirus genomic sequences needs to be considerably expanded in order to resolve these uncertainties in the framework of genetics-based classification. PMID:23170166
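DEmARC's core operation, delimiting clusters by a family-wide divergence threshold, amounts to single-linkage grouping of genomes whose pairwise distances fall below that threshold. The sketch below uses invented divergence values; the real method also scans for optimal thresholds across classification levels, which is omitted here:

```python
def cluster_by_threshold(dist, threshold):
    """Group items whose pairwise divergence is below a family-wide threshold
    (single linkage, via union-find). dist: symmetric divergence matrix."""
    n = len(dist)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < threshold:
                parent[find(i)] = find(j)   # merge the two clusters
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

# Hypothetical divergences between five genomes: indices 0-1 and 2-4 fall
# below a 0.10 threshold within groups, and well above it across groups.
D = [[0.00, 0.05, 0.40, 0.45, 0.42],
     [0.05, 0.00, 0.41, 0.44, 0.43],
     [0.40, 0.41, 0.00, 0.06, 0.07],
     [0.45, 0.44, 0.06, 0.00, 0.08],
     [0.42, 0.43, 0.07, 0.08, 0.00]]
species = cluster_by_threshold(D, 0.10)   # two candidate taxa
```

Running the same clustering at several thresholds yields the nested candidate taxon levels the abstract refers to.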

  10. Genetics-based classification of filoviruses calls for expanded sampling of genomic sequences.

    PubMed

    Lauber, Chris; Gorbalenya, Alexander E

    2012-09-01

    We have recently developed a computational approach for hierarchical, genome-based classification of viruses of a family (DEmARC). In DEmARC, virus clusters are delimited objectively by devising a universal family-wide threshold on intra-cluster genetic divergence of viruses that is specific for each level of the classification. Here, we apply DEmARC to a set of 56 filoviruses with complete genome sequences and compare the resulting classification to the ICTV taxonomy of the family Filoviridae. We find in total six candidate taxon levels two of which correspond to the species and genus ranks of the family. At these two levels, the six filovirus species and two genera officially recognized by ICTV, as well as a seventh tentative species for Lloviu virus and prototyping a third genus, are reproduced. DEmARC lends the highest possible support for these two as well as the four other levels, implying that the actual number of valid taxon levels remains uncertain and the choice of levels for filovirus species and genera is arbitrary. Based on our experience with other virus families, we conclude that the current sampling of filovirus genomic sequences needs to be considerably expanded in order to resolve these uncertainties in the framework of genetics-based classification.

  11. Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Li, R.; Zhang, T.; Geng, R.; Wang, L.

    2018-04-01

    In order to classify high spatial resolution images more accurately, in this research, a hierarchical rule-based object-based classification framework was developed based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed to conduct the whole process. In detail, firstly, the FBSP optimizer (Fuzzy-based Segmentation Parameter) is used to obtain the optimal scale parameters for different land cover types. Then, using the segmented regions as basic units, the classification rules for various land cover types are established according to the spectral, morphological and texture features extracted from the optical images, and the height feature from LiDAR respectively. Thirdly, the object classification results are evaluated by using the confusion matrix, overall accuracy and Kappa coefficients. As a result, a method using the combination of an aerial image and the airborne Lidar data shows higher accuracy.

  12. Situation awareness-based agent transparency for human-autonomy teaming effectiveness

    NASA Astrophysics Data System (ADS)

    Chen, Jessie Y. C.; Barnes, Michael J.; Wright, Julia L.; Stowers, Kimberly; Lakhmani, Shan G.

    2017-05-01

    We developed the Situation awareness-based Agent Transparency (SAT) model to support human operators' situation awareness of the mission environment through teaming with intelligent agents. The model includes the agent's current actions and plans (Level 1), its reasoning process (Level 2), and its projection of future outcomes (Level 3). Human-in-the-loop simulation experiments have been conducted (Autonomous Squad Member and IMPACT) to illustrate the utility of the model for human-autonomy team interface designs. Across studies, the results consistently showed that human operators' task performance improved as the agents became more transparent. They also perceived transparent agents as more trustworthy.

  13. Vector-based navigation using grid-like representations in artificial agents.

    PubMed

    Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan

    2018-05-01

    Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to

  14. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, the accuracy of a system depends upon its classification results, and classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Besides its simplicity, ease of use, and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used for computation. At present, it is hard to find the optimal value of "k" using any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample space reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from any initial selection of predefined variables and neighbor selection as found in the KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimental results based on the confusion-matrix accuracy parameter show a higher accuracy value with the AVNM rule. The proposed AVNM rule is based on a sample space reduction mechanism for identification of an optimal number of nearest neighbor selections. AVNM results in better classification accuracy and a minimum error rate compared with the state-of-the-art algorithm, KNN, and its variants. The proposed rule automates the selection of nearest neighbors and improves the classification rate for the UCI datasets and the manually created dataset. Copyright © 2016 Elsevier.
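For contrast with the AVNM rule, the baseline it targets, plain KNN with a hand-picked "k", takes only a few lines. The toy 2-D points are illustrative; the abstract's datasets are higher-dimensional:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote. The awkward free parameter
    that AVNM is designed to eliminate is k itself."""
    # Sort all training points by Euclidean distance to the query point.
    dists = sorted((math.dist(p, x), label) for p, label in zip(train_X, train_y))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated toy classes.
train_X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
train_y = ["a", "a", "a", "b", "b", "b"]
```

AVNM replaces the fixed `k` and uniform vote with sample space reduction and distance-based weighting; the sketch above is only the conventional baseline it is compared against.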

  15. 78 FR 18252 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System... 2007 North American Industry Classification System (NAICS) codes currently used in Federal Wage System... (OPM) issued a final rule (73 FR 45853) to update the 2002 North American Industry Classification...

  16. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, and the six-minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head-sensor-based models performed best among the single-sensor models for single-task gait assessment, and single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should therefore use support vector machines or neural networks with a multi-sensor, single-task gait assessment.
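    The reported figures (accuracy, F1 score, MCC) are all functions of a 2x2 confusion matrix. A short sketch of how they are computed; the confusion counts below are hypothetical, chosen only to match a 100-person cohort, not the study's actual results:

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, F1, and Matthews correlation coefficient from
    the four cells of a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return acc, f1, mcc

# hypothetical split of 100 participants (24 fallers, 76 non-fallers)
acc, f1, mcc = binary_metrics(tp=15, fp=7, fn=9, tn=69)
print(round(acc, 2), round(f1, 3), round(mcc, 3))  # → 0.84 0.652 0.549
```

    MCC is often preferred over accuracy for imbalanced cohorts such as this one (24% fallers), since a trivial "never falls" classifier already scores 76% accuracy.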

  17. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video database. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded on the artistic film theories. A unique sieving-structured neural network is proposed to be the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of 97.8% correct classification rate, measured against the collected human-judges, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.

  18. Classification Framework for ICT-Based Learning Technologies for Disabled People

    ERIC Educational Resources Information Center

    Hersh, Marion

    2017-01-01

    The paper presents the first systematic approach to the classification of inclusive information and communication technologies (ICT)-based learning technologies and ICT-based learning technologies for disabled people which covers both assistive and general learning technologies, is valid for all disabled people and considers the full range of…

  19. Biomedical literature classification using encyclopedic knowledge: a Wikipedia-based bag-of-concepts approach.

    PubMed

    Mouriño García, Marcos Antonio; Pérez Rodríguez, Roberto; Anido Rifón, Luis E

    2015-01-01

    Automatic classification of text documents into a set of categories has many applications, and among them the automatic classification of biomedical literature stands out as particularly important. Biomedical staff and researchers have to deal with a large body of literature in their daily activities, so a system that allows documents of interest to be accessed simply and effectively would be useful; thus, these documents must be sorted based on some criteria, that is to say, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm: features are words in the text (thus suffering from synonymy and polysemy) and their weights are based only on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge (concretely, Wikipedia) to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus.
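    The BoW-versus-BoC distinction can be made concrete with a toy example. The sketch below uses a tiny hand-made word-to-concept map standing in for Wikipedia-derived knowledge; the mapping, concept ids, and function names are all hypothetical:

```python
# hypothetical mini "encyclopedia": surface words mapped to concept ids
CONCEPTS = {"heart": "C1", "cardiac": "C1",
            "attack": "C2", "infarction": "C2"}

def bag_of_words(text):
    """Classic BoW: feature = surface word, weight = raw count."""
    counts = {}
    for w in text.lower().split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def bag_of_concepts(text):
    """BoC: map each word to its concept before counting, so
    synonyms collapse onto the same feature."""
    counts = {}
    for w in text.lower().split():
        c = CONCEPTS.get(w, w)  # unknown words fall back to themselves
        counts[c] = counts.get(c, 0) + 1
    return counts

print(bag_of_concepts("heart attack") ==
      bag_of_concepts("cardiac infarction"))  # → True
```

    Two synonymous phrases get different BoW vectors but identical BoC vectors, which is exactly how the concept representation tackles synonymy.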

  20. Biomedical literature classification using encyclopedic knowledge: a Wikipedia-based bag-of-concepts approach

    PubMed Central

    Pérez Rodríguez, Roberto; Anido Rifón, Luis E.

    2015-01-01

    Automatic classification of text documents into a set of categories has many applications, and among them the automatic classification of biomedical literature stands out as particularly important. Biomedical staff and researchers have to deal with a large body of literature in their daily activities, so a system that allows documents of interest to be accessed simply and effectively would be useful; thus, these documents must be sorted based on some criteria, that is to say, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm: features are words in the text (thus suffering from synonymy and polysemy) and their weights are based only on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge (concretely, Wikipedia) to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus. PMID:26468436

  1. Attribute-based Decision Graphs: A framework for multiclass data classification.

    PubMed

    Bertini, João Roberto; Nicoletti, Maria do Carmo; Zhao, Liang

    2017-01-01

    Graph-based algorithms have been successfully applied in machine learning and data mining tasks. A simple but widely used approach to building graphs from vector-based data is to consider each data instance as a vertex and connect pairs of vertices using a similarity measure. Although this abstraction has advantages, such as representing the arbitrary shape of the original data, it also has drawbacks: it depends on the choice of a pre-defined distance metric and is biased by the local information among data instances. Aiming to explore alternative ways of building graphs from data, this paper proposes an algorithm for constructing a new type of graph, called an Attribute-based Decision Graph (AbDG). Given a vector-based data set, an AbDG is built by partitioning each data attribute's range into disjoint intervals and representing each interval as a vertex. Edges are then established between vertices of different attributes according to a pre-defined pattern. Classification is performed through a matching process between the attribute values of the new instance and the AbDG. Moreover, the AbDG provides an inner mechanism for handling missing attribute values, which expands its applicability. Results on classification tasks show that AbDG is competitive with well-known multiclass algorithms. The main contribution of the proposed framework is the combination of the advantages of attribute-based and graph-based techniques to perform robust pattern-matching classification, while permitting analysis of the input data using only a subset of its attributes. Copyright © 2016 Elsevier Ltd. All rights reserved.
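    A much-simplified flavor of the AbDG idea (partition each attribute's range into interval vertices, then classify a new instance by matching it against those intervals) can be sketched as follows. This omits the inter-attribute edges and the missing-value mechanism, and all names and data are illustrative:

```python
from collections import defaultdict

def build_intervals(data, labels, bins=3):
    """For each attribute, split its range into equal-width intervals
    (the 'vertices') and record per-interval class counts."""
    model = []
    for a in range(len(data[0])):
        vals = [row[a] for row in data]
        lo, width = min(vals), (max(vals) - min(vals)) / bins or 1.0
        counts = defaultdict(lambda: defaultdict(int))
        for row, y in zip(data, labels):
            b = min(int((row[a] - lo) / width), bins - 1)
            counts[b][y] += 1
        model.append((lo, width, counts))
    return model

def classify(model, x, bins=3):
    """Match the new instance against each attribute's intervals and
    sum the class evidence found there."""
    votes = defaultdict(int)
    for a, (lo, width, counts) in enumerate(model):
        b = min(max(int((x[a] - lo) / width), 0), bins - 1)
        for y, c in counts[b].items():
            votes[y] += c
    return max(votes, key=votes.get)

data = [(1.0, 10.0), (1.2, 11.0), (5.0, 2.0), (5.5, 1.5)]
labels = ["low", "low", "high", "high"]
model = build_intervals(data, labels)
print(classify(model, (1.1, 10.5)))  # → low
```

    Because each attribute is matched independently, an instance with a missing attribute can simply skip that attribute's vote, which hints at how the full AbDG handles missing values.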

  2. Agent-based model for rural-urban migration: A dynamic consideration

    NASA Astrophysics Data System (ADS)

    Cai, Ning; Ma, Hai-Ying; Khan, M. Junaid

    2015-10-01

    This paper develops a dynamic agent-based model for rural-urban migration, building on previous relevant works. The model conforms to the typical dynamic linear multi-agent systems model studied extensively in systems science, in which the communication network is formulated as a digraph. Simulations reveal that consensus of certain variables can be harmful to overall stability and should be avoided.
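    The linear consensus dynamics on a digraph that the abstract refers to can be illustrated with the standard protocol on a small directed ring; this is a generic sketch, not the paper's migration model, and all values are illustrative:

```python
def consensus_step(x, neighbors, eps=0.2):
    """One step of the standard linear consensus protocol on a digraph:
    each agent i moves toward the states of the agents it listens to."""
    return [
        xi + eps * sum(x[j] - xi for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

x = [0.0, 1.0, 2.0, 3.0]
neighbors = {0: [1], 1: [2], 2: [3], 3: [0]}  # directed ring
for _ in range(200):
    x = consensus_step(x, neighbors)
print(max(x) - min(x) < 1e-6)  # → True: states converge to a common value
```

    On a strongly connected digraph like this ring, all states converge to a common value; the paper's point is that for some migration variables such agreement is undesirable.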

  3. Image Analysis and Classification Based on Soil Strength

    DTIC Science & Technology

    2016-08-01

    Satellite imagery classification is useful for a variety of commonly used applications, such as land use classification, agriculture, wetland... required use of a coincident digital elevation model (DEM) and a high-resolution orthophotograph collected by the National Agriculture Imagery Program...

  4. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on a time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part of the diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of the EEG signal is used to obtain the TFI. The TFI is segmented according to the frequency bands of the EEG rhythms, and features derived from the histogram of the segmented TFI are used as the input feature set to multiclass least squares support vector machines (MC-LS-SVM), together with radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions, for automatic classification of sleep stages from EEG signals. Experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
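    The band-wise segmentation and histogram step can be sketched independently of the SPWVD computation. Assuming the TFI is already available as a frequency-by-time matrix, a minimal Python version follows; the band edges are the conventional EEG rhythm ranges, while the bin count and function names are illustrative:

```python
# conventional EEG rhythm bands, in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_histograms(tfi, freqs, n_bins=4):
    """Segment a time-frequency image by frequency band and return a
    small intensity histogram per band as the feature vector.
    tfi: list of rows (one per frequency); freqs: frequency of each row."""
    features = {}
    for name, (lo, hi) in BANDS.items():
        vals = [v for f, row in zip(freqs, tfi) if lo <= f < hi for v in row]
        hist = [0] * n_bins
        if vals:
            vmin, width = min(vals), (max(vals) - min(vals)) / n_bins or 1.0
            for v in vals:
                hist[min(int((v - vmin) / width), n_bins - 1)] += 1
        features[name] = hist
    return features

tfi = [[1.0, 2.0], [3.0, 4.0]]   # toy 2-frequency, 2-time image
print(band_histograms(tfi, freqs=[2.0, 10.0])["delta"])  # → [1, 0, 0, 1]
```

    Concatenating the per-band histograms gives a fixed-length feature vector suitable as input to a multiclass SVM.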

  5. Classification of driver fatigue in an electroencephalography-based countermeasure system with source separation module.

    PubMed

    Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T

    2015-08-01

    An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. Independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis, and classification accuracy with and without the source separator is compared. Classification performance based on 43 participants without the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. These results improved after the inclusion of a source separator module, to an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).

  6. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  7. Intelligent judgements over health risks in a spatial agent-based model.

    PubMed

    Abdulkareem, Shaheen A; Augustijn, Ellen-Wien; Mustafa, Yaseen T; Filatova, Tatiana

    2018-03-20

    Millions of people worldwide are exposed to deadly infectious diseases on a regular basis; breaking news of the Zika outbreak, for instance, made the main media titles internationally. Perceiving disease risks motivates people to adapt their behavior toward a safer and more protective lifestyle. Computational science is instrumental in exploring patterns of disease spread that emerge from many individual decisions and interactions among agents and their environment, by means of agent-based models. Yet current disease models rarely simulate dynamics in risk perception and its impact on adaptive protective behavior. Social sciences offer insights into individual risk perception and corresponding protective actions, while machine learning provides algorithms and methods to capture these learning processes. This article presents an innovative approach to extend agent-based disease models by capturing behavioral aspects of decision-making in a risky context using machine learning techniques. We illustrate it with a case of cholera in Kumasi, Ghana, accounting for spatial and social risk factors that affect intelligent behavior and corresponding disease incidents. The results of computational experiments comparing intelligent with zero-intelligent representations of agents in a spatial disease agent-based model are discussed. We present a spatial disease agent-based model (ABM) with agents' behavior grounded in Protection Motivation Theory. Spatial and temporal patterns of disease diffusion among zero-intelligent agents are compared to those produced by a population of intelligent agents. Two Bayesian Networks (BNs) are designed and coded using R and integrated with the NetLogo-based cholera ABM. The first is a one-tier BN1 (risk perception only); the second is a two-tier BN2 (risk and coping behavior). We run three experiments (zero-intelligent agents, BN1 intelligence and BN2 intelligence) and report the results per experiment in terms of
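    The two-tier decision structure described above (first perceive risk, then choose a coping action) can be mimicked with a toy probabilistic agent. The probabilities, location labels, and function names below are invented for illustration and are not from the Kumasi model:

```python
import random

# hypothetical conditional tables for a two-tier decision:
# tier 1: perceived risk given location; tier 2: protecting given risk
P_RISK_HIGH = {"near_outbreak": 0.8, "far": 0.2}
P_PROTECT = {True: 0.7, False: 0.1}

def agent_protects(location, rng):
    """Sample one agent's decision: perceive risk, then cope."""
    high_risk = rng.random() < P_RISK_HIGH[location]
    return rng.random() < P_PROTECT[high_risk]

rng = random.Random(0)
rate = sum(agent_protects("near_outbreak", rng) for _ in range(10000)) / 10000
print(0.5 < rate < 0.7)  # → True (rate is near 0.8*0.7 + 0.2*0.1 = 0.58)
```

    Replacing these fixed tables with conditional distributions learned from data is, in spirit, what integrating a Bayesian Network into the ABM accomplishes.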

  8. Efficient Agent-Based Models for Non-Genomic Evolution

    NASA Technical Reports Server (NTRS)

    Gupta, Nachi; Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Modeling dynamical systems composed of aggregations of primitive proteins is critical to the field of astrobiological science involving early evolutionary structures and the origins of life. Unfortunately, traditional non-multi-agent methods either require oversimplified models or are slow to converge to adequate solutions. This paper shows how to address these deficiencies by modeling the protein aggregations through a utility-based multi-agent system. In this method each agent controls the properties of a set of proteins assigned to that agent. Some of these properties determine the dynamics of the system, such as the ability of some proteins to join or split other proteins, while additional properties determine the aggregation's fitness as a viable primitive cell. We show that over a wide range of starting conditions, there are mechanisms that allow protein aggregations to achieve high values of overall fitness. In addition, through the use of agent-specific utilities that remain aligned with the overall global utility, we are able to reach these conclusions with 50 times fewer learning steps.

  9. Abuse-deterrent formulations: part 1 - development of a formulation-based classification system.

    PubMed

    Mastropietro, David J; Omidian, Hossein

    2015-02-01

    Strategies have been implemented to decrease the large proportion of individuals misusing abusable prescription medications. Abuse-deterrent formulations (ADFs) have grown to incorporate many different technologies that still lack a systematic naming and organizational nomenclature. Without a proper classification system, it has been challenging to properly identify ADFs, study and determine their common traits or characteristics, and simplify communication within the field. This article introduces a classification system for all ADF approaches, examining the physical, chemical and pharmacological characteristics of a formulation and placing them into primary, secondary and tertiary categories. Primary approaches block tampering done directly to the product. Secondary approaches work in vivo after the product is administered. Tertiary approaches use materials that discourage abuse but do not stop tampering. Part 2 of this article discusses proprietary technologies, patents and products utilizing primary approaches. Drug products using opioid antagonists and aversive agents have been seen over the past few decades, primarily to discourage overuse and injection. However, innovation in formulation development has introduced products capable of deterring multiple forms of tampering and abuse. Often, this is accomplished using known excipients and manufacturing methods that are repurposed to prevent crushing, extraction and syringeability.

  10. Agent-Based Modeling in Public Health: Current Applications and Future Directions.

    PubMed

    Tracy, Melissa; Cerdá, Magdalena; Keyes, Katherine M

    2018-04-01

    Agent-based modeling is a computational approach in which agents with a specified set of characteristics interact with each other and with their environment according to predefined rules. We review key areas in public health where agent-based modeling has been adopted, including both communicable and noncommunicable disease, health behaviors, and social epidemiology. We also describe the main strengths and limitations of this approach for questions with public health relevance. Finally, we describe both methodologic and substantive future directions that we believe will enhance the value of agent-based modeling for public health. In particular, advances in model validation, comparisons with other causal modeling procedures, and the expansion of the models to consider comorbidity and joint influences more systematically will improve the utility of this approach to inform public health research, practice, and policy.

  11. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
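    The equation-based side of such a comparison is typically a compartmental model. A minimal Euler-integrated SIR sketch in Python follows; the parameter values are illustrative, not the paper's 1918 flu calibration:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """One Euler step of the classic SIR equations
    (s, i, r are population fractions; beta = contact rate,
    gamma = recovery rate)."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

s, i, r = 0.99, 0.01, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r)
print(round(s + i + r, 6))  # → 1.0 (total population is conserved)
```

    An agent-based counterpart replaces these three aggregate equations with per-individual infection and recovery events, which is where the two paradigms diverge for validation purposes.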

  12. Reverse engineering a social agent-based hidden Markov model--visage.

    PubMed

    Chen, Hung-Ching Justin; Goldberg, Mark; Magdon-Ismail, Malik; Wallace, William A

    2008-12-01

    We present a machine learning approach to discover the agent dynamics that drives the evolution of the social groups in a community. We set up the problem by introducing an agent-based hidden Markov model for the agent dynamics: an agent's actions are determined by micro-laws. We then learn the agent dynamics from the observed communications, without knowing the state transitions. Our approach is to identify the appropriate micro-laws, which corresponds to identifying the appropriate parameters in the model. The model identification problem is then formulated as a mixed optimization problem. To solve it, we develop a multistage learning process that determines the group structure, the group evolution, and the micro-laws of a community from the observed set of communications among actors, without knowing the semantic contents. Finally, to test the quality of our approximations and the feasibility of the approach, we present the results of extensive experiments on synthetic data as well as on real communities, such as Enron email and Movie newsgroups. Insight into agent dynamics helps us understand the driving forces behind social evolution.
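    In any hidden Markov model, the likelihood of an observed sequence given candidate parameters is computed with the forward algorithm, which is the building block such a model-identification loop evaluates repeatedly. A minimal discrete-HMM version (generic, not the paper's agent-based variant):

```python
def forward_likelihood(obs, pi, A, B):
    """Probability of an observation sequence under a discrete HMM
    via the forward algorithm. pi[i]: initial state probability,
    A[i][j]: transition probability, B[i][o]: emission probability."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
    return sum(alpha)

# deterministic toy HMM: state 0 emits symbol 0, state 1 emits symbol 1,
# and the chain alternates states, so observing 0,1,0 is certain
pi = [1.0, 0.0]
A = [[0.0, 1.0], [1.0, 0.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
print(forward_likelihood([0, 1, 0], pi, A, B))  # → 1.0
```

    Model identification then amounts to searching over the parameters (here pi, A, B; in the paper, the micro-laws) for those that maximize this likelihood on the observed communications.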

  13. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data, which lacks such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored to medium-resolution remote sensing data. The system is designed around the distinctive characteristics of medium-resolution data; in particular, it computes patch-based samples from multidimensional top-of-atmosphere reflectance data. On a test site from the Florida Everglades area (771 square kilometers), the proposed system outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN with the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
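    The core preprocessing step of a patch-based system, turning each pixel's neighborhood into one training sample, can be sketched in a few lines. This is a generic single-band sketch; the paper's actual sampling of multidimensional top-of-atmosphere reflectance is more involved:

```python
def extract_patches(image, size=3):
    """Return (center, patch) for every pixel whose size x size
    neighbourhood fits inside the image; each patch labels its
    center pixel and becomes one training sample."""
    h, w = len(image), len(image[0])
    r = size // 2
    patches = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = [row[x - r: x + r + 1] for row in image[y - r: y + r + 1]]
            patches.append(((y, x), patch))
    return patches

img = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(len(extract_patches(img)))  # → 4 (the interior pixels of a 4x4 image)
```

    Feeding the patch rather than the lone pixel value is what lets the CNN exploit a pixel's spatial context despite the coarse resolution.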

  14. Classification of cancerous cells based on the one-class problem approach

    NASA Astrophysics Data System (ADS)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the impact of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells, based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system built on Fuzzy ARTMAP neural networks is developed. Experiments were performed using a set of 542 patterns taken from samples of breast cancer. The results show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.

  15. A model-based test for treatment effects with probabilistic classifications.

    PubMed

    Cavagnaro, Daniel R; Davis-Stober, Clintin P

    2018-05-21

    Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
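    Fisher's exact test, the baseline the proposed method is compared against, reduces to summing hypergeometric probabilities over 2x2 tables with fixed margins. A minimal one-sided version in Python (the standard formula, not the authors' hierarchical Bayesian method):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability of a table at least as extreme (a' >= a), holding the
    row and column margins fixed (hypergeometric distribution)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for a2 in range(a, min(row1, col1) + 1):
        b2, c2 = row1 - a2, col1 - a2
        d2 = n - row1 - col1 + a2
        if b2 < 0 or c2 < 0 or d2 < 0:
            continue
        p += comb(col1, a2) * comb(n - col1, b2) / comb(n, row1)
    return p

print(round(fisher_exact_one_sided(3, 1, 1, 3), 6))  # → 0.242857
```

    The paper's observation is that when the cell counts themselves come from probabilistic classifications, this deterministic test ignores classification uncertainty, which their mixture-model Bayes factor incorporates.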

  16. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    PubMed

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-rays, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations in the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
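    A mutual-information criterion for selecting task-relevant visual words can be sketched from per-class occurrence counts alone. The sketch below computes MI between a word's presence and a class label using the standard formula; the paper's exact criterion may differ in detail:

```python
import math

def mutual_information(n11, n10, n01, n00):
    """MI (in bits) between word presence and class membership, from
    counts: n11 = in-class images containing the word, n10 = in-class
    without it, n01 = out-of-class with it, n00 = out-of-class without."""
    n = n11 + n10 + n01 + n00
    mi = 0.0
    for nij, row, col in (
        (n11, n11 + n10, n11 + n01),
        (n10, n11 + n10, n10 + n00),
        (n01, n01 + n00, n11 + n01),
        (n00, n01 + n00, n10 + n00),
    ):
        if nij:
            mi += nij / n * math.log2(n * nij / (row * col))
    return mi

print(mutual_information(1, 1, 1, 1))    # → 0.0 (word independent of class)
print(mutual_information(10, 0, 0, 10))  # → 1.0 (word determines class)
```

    Ranking the dictionary's visual words by this score and keeping the top-scoring ones yields the task-driven dictionary the abstract describes.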

  17. Argumentation Based Joint Learning: A Novel Ensemble Learning Approach

    PubMed Central

    Xu, Junyi; Yao, Li; Li, Le

    2015-01-01

    Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high quality knowledge for ensemble classifier and improve the performance of classification. PMID:25966359

  18. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated to improve Nearest Neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrate that the classification error can be reduced by 8% when the proposed method is chosen instead of the conventional method. The results show that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
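    The modified metric can be sketched directly: apply a Box-Cox power transform per feature, then take the Euclidean distance in the transformed space. A minimal nearest-neighbor version follows; the choice of lambda and the toy data are illustrative, not from the paper:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform of a positive value
    (log for lam = 0, (x^lam - 1)/lam otherwise)."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def modified_distance(u, v, lam=0.5):
    """Euclidean distance after a per-feature Box-Cox transform."""
    return math.sqrt(sum((box_cox(a, lam) - box_cox(b, lam)) ** 2
                         for a, b in zip(u, v)))

def nearest_neighbor(train, query, lam=0.5):
    """1-NN classification under the modified metric;
    train is a list of (feature vector, label)."""
    return min(train, key=lambda t: modified_distance(t[0], query, lam))[1]

train = [((1.0, 2.0), "A"), ((9.0, 8.0), "B")]
print(nearest_neighbor(train, (1.5, 2.5)))  # → A
```

    In practice lambda would be fitted to the training range profiles; the paper's point is that optimizing the metric, rather than adding samples, is enough to lower the error.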

  19. Contract Monitoring in Agent-Based Systems: Case Study

    NASA Astrophysics Data System (ADS)

    Hodík, Jiří; Vokřínek, Jiří; Jakob, Michal

    Monitoring of fulfilment of obligations defined by electronic contracts in distributed domains is presented in this paper. A two-level model of contract-based systems and the types of observations needed for contract monitoring are introduced. The observations (inter-agent communication and agents’ actions) are collected and processed by the contract observation and analysis pipeline. The presented approach has been utilized in a multi-agent system for electronic contracting in a modular certification testing domain.

  20. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing details and limited performance of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then, based on this framework, a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in restoring missing details and improving the speed of image colorization. In addition, the algorithm not only colorizes visual gray-scale images but can also be applied to other areas, such as color transfer between color images and colorizing gray fusion images and infrared images.

  1. Monsoon Forecasting based on Imbalanced Classification Techniques

    NASA Astrophysics Data System (ADS)

    Ribera, Pedro; Troncoso, Alicia; Asencio-Cortes, Gualberto; Vega, Inmaculada; Gallego, David

    2017-04-01

    Monsoonal systems are quasiperiodic processes of the climatic system that control seasonal precipitation over different regions of the world. The Western North Pacific Summer Monsoon (WNPSM) is one such monsoon, and it is known to have a great impact both on the global climate and on the total precipitation of very densely populated areas. The interannual variability of the WNPSM over the last 50-60 years has been related to different climatic indices such as El Niño, El Niño Modoki, the Indian Ocean Dipole and the Pacific Decadal Oscillation. Recently, a new and longer series characterizing the monthly evolution of the WNPSM, the WNP Directional Index (WNPDI), has been developed, extending its length from about 50 years to more than 100 years (1900-2007). Imbalanced classification techniques have been applied to the WNPDI in order to check the capability of traditional climate indices to capture and forecast the evolution of the WNPSM. The forecasting problem has been transformed into a binary classification problem, in which the positive class represents the occurrence of an extreme monsoon event. Given that the number of extreme monsoons is much lower than the number of non-extreme monsoons, the resulting classification problem is highly imbalanced. The complete dataset is composed of 1296 instances, of which only 71 (5.47%) samples correspond to extreme monsoons. Twenty predictor variables based on the cited climatic indices have been proposed, and models based on trees, black-box models such as neural networks, support vector machines and nearest neighbors, and ensemble-based techniques such as random forests have been used to forecast the occurrence of extreme monsoons. It can be concluded that the methodology proposed here reports promising results according to the quality parameters evaluated and predicts extreme monsoons for a temporal horizon of a month with high accuracy.
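    With only 71 extreme monsoons among 1296 instances, some imbalance handling is essential; one common remedy, inverse-frequency class weighting, can be sketched as follows (the abstract does not name the paper's exact technique, so this is an assumed illustration):

    ```python
    def class_weights(labels):
        """Weight each class inversely to its frequency, so that rare
        (extreme-monsoon) samples count as much overall as common ones."""
        counts = {}
        for y in labels:
            counts[y] = counts.get(y, 0) + 1
        n, k = len(labels), len(counts)
        return {y: n / (k * c) for y, c in counts.items()}

    # 1296 instances, 71 positive (extreme monsoon), as in the abstract:
    labels = [1] * 71 + [0] * (1296 - 71)
    w = class_weights(labels)
    # w[1] is roughly 9.13, w[0] roughly 0.53: misclassifying one extreme
    # monsoon now costs about as much as misclassifying 17 ordinary months.
    ```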

  2. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

    In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based bag-of-words (BOW) model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused by several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
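    At the kernel level, MKL fusion amounts to a weighted combination of per-modality kernel (Gram) matrices; a minimal sketch with hypothetical text and image kernels and illustrative weights (in real MKL the weights are learned, not fixed):

    ```python
    def combine_kernels(kernels, weights):
        """Element-wise weighted sum of same-sized kernel (Gram) matrices,
        e.g. one kernel from text features and one from image features."""
        n = len(kernels[0])
        return [[sum(w * K[i][j] for K, w in zip(kernels, weights))
                 for j in range(n)] for i in range(n)]

    K_text  = [[1.0, 0.2], [0.2, 1.0]]   # hypothetical text-BOW kernel
    K_image = [[1.0, 0.8], [0.8, 1.0]]   # hypothetical image-BOW kernel
    K = combine_kernels([K_text, K_image], [0.6, 0.4])
    # K[0][1] is 0.6*0.2 + 0.4*0.8, i.e. about 0.44: the fused similarity.
    ```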

  3. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.

  4. A discrete wavelet based feature extraction and hybrid classification technique for microarray data analysis.

    PubMed

    Bennet, Jaison; Ganaprakasam, Chilambuchelvan Arul; Arputharaj, Kannan

    2014-01-01

    In earlier days, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which stimulates progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique which combines the discrete wavelet transform (DWT) and the moving window technique (MWT) is used. The performance of the proposed method is compared with conventional classifiers such as the support vector machine, nearest neighbor, and naive Bayes. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. The proposed approach serves as an automated system for the classification of cancer that can be applied by doctors in real cases, a boon to the medical community, and it further reduces the misclassification of cancers, which is unacceptable in cancer detection.

  5. For whom will the Bayesian agents vote?

    NASA Astrophysics Data System (ADS)

    Caticha, Nestor; Cesar, Jonatas; Vicente, Renato

    2015-04-01

    Within an agent-based model where moral classifications are socially learned, we ask if a population of agents behaves in a way that may be compared with conservative or liberal positions in the real political spectrum. We assume that agents first experience a formative period, in which they adjust their learning style acting as supervised Bayesian adaptive learners. The formative phase is followed by a period of social influence by reinforcement learning. By comparing data generated by the agents with data from a sample of 15,000 Moral Foundations questionnaires, we found the following. 1. The number of information exchanges in the formative phase correlates positively with statistics identifying liberals in the social influence phase. This is consistent with recent evidence that connects the dopamine receptor D4-7R gene, political orientation and early-age social clique size. 2. The learning algorithms that result from the formative phase vary in the way they treat novelty and corroborative information, with more conservative-like agents treating them more equally than liberal-like agents. This is consistent with the correlation between political affiliation and the Openness personality trait reported in the literature. 3. Under the increase of a model parameter interpreted as an external pressure, the statistics of liberal agents come to resemble those of conservative agents, consistent with reports on the consequences of external threats on measures of conservatism. We also show that in the social influence phase liberal-like agents readapt much faster than conservative-like agents when subjected to changes in the relevant set of moral issues. This suggests a verifiable dynamical criterion for attaching liberal or conservative labels to groups.

  6. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    PubMed

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic algorithm for classifying the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria issued by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved over 90% prediction accuracy for two standardized versions of the severity criteria, issued in 2007 and 2011 respectively. Moreover, we also obtained the contribution ranking of the input features by analyzing the model coefficient matrix, and confirmed a certain degree of agreement between the most contributive input features and clinical diagnostic knowledge. This result demonstrates the validity of the deep belief network model. This study provides an effective solution for applying deep learning to automated diagnostic decision making.

  7. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, in some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. Software artifact generation is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  8. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D; sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher-risk-posture classifications will be discussed. The talk will conclude with a discussion of risk-based safety and mission assurance at GSFC.

  9. Microcomputer-based classification of environmental data in municipal areas

    NASA Astrophysics Data System (ADS)

    Thiergärtner, H.

    1995-10-01

    Multivariate data-processing methods used in mineral resource identification can also be used to classify urban regions. Using elements of expert systems and geographical information systems, as well as known classification and prognosis systems, it is possible to outline a single model consisting of resistant and temporary parts of a knowledge base, including graphical input and output treatment, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts are stored directly in expert systems, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions are transformed into temporary explicit decision rules by learning processes depending on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This applies to both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering part of the city of Berlin (Germany).

  10. Doppler Feature Based Classification of Wind Profiler Data

    NASA Astrophysics Data System (ADS)

    Sinha, Swati; Chandrasekhar Sarma, T. V.; Lourde. R, Mary

    2017-01-01

    Wind profilers (WP) are coherent pulsed Doppler radars in the UHF and VHF bands. They are used for vertical profiling of wind velocity and direction. This information is very useful for weather modeling, the study of climatic patterns, and weather prediction. Observations at different heights and different wind velocities are possible by changing the operating parameters of the WP. A set of Doppler power spectra is the standard form of WP data; wind velocity, direction and wind-velocity turbulence at different heights can be derived from it. Modern wind profilers operate for long durations and generate approximately 4 megabytes of data per hour. The radar data stream contains Doppler power spectra from different radar configurations with echoes from different atmospheric targets. In order to facilitate systematic study, this data needs to be segregated according to the type of target, which requires a reliable automated target classification technique. Classical techniques of radar target identification use pattern matching and minimization of mean squared error, Euclidean distance, etc. These techniques are not effective for the classification of WP echoes, as these targets do not have well-defined signatures in Doppler power spectra. This paper presents an effective target classification technique based on range-Doppler features.
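    Typical range-Doppler features for such a classifier are the low-order moments of each Doppler power spectrum: total power, mean Doppler velocity, and spectral width. A sketch, assuming a discretized velocity axis (the spectrum below is a hypothetical narrow echo):

    ```python
    import math

    def doppler_moments(velocities, power):
        """Zeroth/first/second moments of a Doppler power spectrum:
        total power, mean Doppler velocity, and spectral width."""
        p0 = sum(power)                                           # total power
        mean = sum(v * p for v, p in zip(velocities, power)) / p0
        var = sum(p * (v - mean) ** 2
                  for v, p in zip(velocities, power)) / p0
        return p0, mean, math.sqrt(var)                           # width = std dev

    # Narrow spectrum centered at 3 m/s (hypothetical clear-air echo):
    vels = [1, 2, 3, 4, 5]
    pwr  = [0.1, 0.2, 0.4, 0.2, 0.1]
    total, mean_v, width = doppler_moments(vels, pwr)
    ```

    Echo types without a sharp spectral signature still separate usefully in this (power, mean, width) feature space, which is what the range-Doppler approach exploits.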

  11. Empirical Wavelet Transform Based Features for Classification of Parkinson's Disease Severity.

    PubMed

    Oung, Qi Wei; Muthusamy, Hariharan; Basah, Shafriza Nisha; Lee, Hoileong; Vijean, Vikneswaran

    2017-12-29

    Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movement and vocal impairment. In order to develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous work was based on binary classification, with the early PD stage and the advanced ones treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) plus healthy controls. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and the empirical wavelet packet transform (EWPT), respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features are extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers: k-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach can differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM on signals from the motion and audio sensors respectively. Additionally, a classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to signals integrating both sources of information.

  12. Gadolinium-based magnetic resonance imaging contrast agents in interventional radiology.

    PubMed

    Atar, Eli

    2004-07-01

    Gadolinium-based agents are widely used in magnetic resonance imaging as contrast agents. These agents are radio-opaque enough for diagnostic imaging of the vascular tree by using digitally subtracted images as well as for imaging of the biliary system and the urinary tract. The recommended doses for gadolinium do not impair renal function or cause adverse reactions in patients with iodine sensitivity; thus patients with such conditions can safely undergo diagnostic angiography, either by MRI angiography or by catheterization using gadolinium as contrast agent, for diagnostic and therapeutic purposes.

  13. Agent-Based Modeling of Growth Processes

    ERIC Educational Resources Information Center

    Abraham, Ralph

    2014-01-01

    Growth processes abound in nature, and are frequently the target of modeling exercises in the sciences. In this article we illustrate an agent-based approach to modeling, in the case of a single example from the social sciences: bullying.

  14. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for classifying pulmonary emphysema in computed tomography (CT) images of the lungs. Unlike conventional computer-aided diagnosis (CAD) methods for pulmonary emphysema classification, in this paper, the texton dictionary is first learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms as texture representations. Finally, classification is performed by a nearest-neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and of the texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.
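    The final step compares texton histograms under a dissimilarity measure; the abstract does not name the measure, so this sketch uses the common chi-square distance as an assumed stand-in (histograms and labels below are hypothetical):

    ```python
    def chi_square(h1, h2, eps=1e-12):
        """Chi-square dissimilarity between two normalized histograms."""
        return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

    def classify_histogram(query, train_hists, train_labels):
        """Nearest-neighbor label under histogram dissimilarity."""
        dists = [chi_square(query, h) for h in train_hists]
        return train_labels[dists.index(min(dists))]

    # Hypothetical 3-bin texton histograms for two tissue classes:
    hists  = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
    labels = ["normal", "severe"]
    print(classify_histogram([0.6, 0.3, 0.1], hists, labels))  # normal
    ```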

  15. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei

    2017-07-01

    Considering that manual inspection of yarn-dyed fabric can be time-consuming and inefficient, a convolutional neural network (CNN) solution based on a modified AlexNet structure is proposed for classifying yarn-dyed fabric defects. A CNN has powerful feature-extraction and feature-fusion abilities that simulate the learning mechanism of the human brain. To enhance computational efficiency and detection accuracy, the local response normalization (LRN) layers in AlexNet are replaced by batch normalization (BN) layers. During network training, the characteristics of the image are extracted step by step through several convolution operations, and the essential features of the image can be obtained from the edge features. Max-pooling layers, dropout layers, and fully connected layers are also employed in the classification model to reduce the computational cost and acquire more precise features of fabric defects. Finally, the defect classes are predicted by the softmax function. The experimental results show the defect-classification capability of the modified AlexNet model and indicate its robustness.
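    The LRN-to-BN substitution replaces local response normalization with batch normalization, which standardizes activations over the mini-batch before a learned scale and shift; the core computation, with γ and β fixed here for illustration:

    ```python
    import math

    def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
        """Normalize a batch of scalar activations to zero mean / unit
        variance, then scale and shift (gamma, beta are learned in practice)."""
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        return [gamma * (x - mean) / math.sqrt(var + eps) + beta
                for x in batch]

    out = batch_norm([1.0, 2.0, 3.0, 4.0])
    # Output mean is ~0 and variance ~1 regardless of the input scale,
    # which stabilizes and speeds up training compared with LRN.
    ```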

  16. Multi-agent cooperation rescue algorithm based on influence degree and state prediction

    NASA Astrophysics Data System (ADS)

    Zheng, Yanbin; Ma, Guangfu; Wang, Linlin; Xi, Pengxue

    2018-04-01

    Aiming at multi-agent cooperative rescue in disasters, a multi-agent cooperative rescue algorithm based on influence degree and state prediction is proposed. First, based on the influence of scene information on the collaborative task, an influence-degree function is used to filter the information. Second, the selected information is used to predict the system state and agent behavior. Finally, according to the prediction results, the cooperative behavior of the agents is guided, improving the efficiency of individual collaboration. Simulation results show that this algorithm can effectively solve the multi-agent cooperative rescue problem and ensure efficient completion of the task.
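    The first step, filtering scene information by influence degree, can be read as thresholding observations with a scoring function; a minimal sketch with a hypothetical toy scoring rule (the paper's actual influence-degree function is not reproduced here):

    ```python
    def filter_by_influence(observations, influence, threshold):
        """Keep only observations whose influence degree on the
        cooperative task reaches the threshold."""
        return [o for o in observations if influence(o) >= threshold]

    # Hypothetical rescue scene: (victim distance, urgency) pairs, where
    # closer and more urgent victims influence task allocation more.
    scene = [(2.0, 0.9), (8.0, 0.4), (1.0, 0.2)]
    influence = lambda o: o[1] / o[0]        # assumed toy scoring rule
    print(filter_by_influence(scene, influence, 0.2))
    ```

    Downstream, only the surviving observations feed the state-prediction step, which is where the communication and computation savings come from.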

  17. Persuasion Model and Its Evaluation Based on Positive Change Degree of Agent Emotion

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Because it can accommodate negotiations among organizations that take place at different times and places, and can make the negotiation process more rational and its outcome closer to ideal, agent-based persuasion can substantially improve cooperation among organizations. Integrating emotion change into agent persuasion further exploits the artificial-intelligence advantages of agents. The emotions involved in agent persuasion are classified, and the concept of positive change degree is introduced. On this basis, a persuasion model based on the positive change degree of agent emotion is constructed and illustrated with an example. Finally, a relative evaluation method is given and verified through a calculation example.

  18. Diversity and Community: The Role of Agent-Based Modeling.

    PubMed

    Stivala, Alex

    2017-06-01

    Community psychology involves several dialectics between potentially opposing ideals, such as theory and practice, rights and needs, and respect for human diversity and sense of community. Some recent papers in the American Journal of Community Psychology have examined the diversity-community dialectic, some with the aid of agent-based modeling and concepts from network science. This paper further elucidates these concepts and suggests that research in community psychology can benefit from a useful dialectic between agent-based modeling and the real-world concerns of community psychology. © Society for Community Research and Action 2017.

  19. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate-level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. In particular, alteration of Poisson assumptions, adding heterogeneity to the decision-making processes of agents, and discrete-time formulation are investigated and their impacts illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
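    The aggregate-to-agent move can be illustrated with outflow from a workforce stock: the system dynamics model removes a deterministic fraction per step, while the agent-based version draws an independent exit per individual, so the aggregate becomes stochastic around the SD trajectory. A toy sketch (rates and sizes are illustrative, not the paper's):

    ```python
    import random

    def sd_step(stock, exit_rate):
        """System-dynamics view: deterministic fractional outflow."""
        return stock * (1.0 - exit_rate)

    def abm_step(agents, exit_rate, rng):
        """Agent-based view: each agent leaves independently, so the
        aggregate outcome fluctuates around the SD trajectory."""
        return [a for a in agents if rng.random() >= exit_rate]

    rng = random.Random(42)                 # seeded for reproducibility
    stock, agents = 1000.0, list(range(1000))
    for _ in range(10):
        stock = sd_step(stock, 0.05)
        agents = abm_step(agents, 0.05, rng)
    # After 10 steps the SD stock is 1000 * 0.95**10 (about 598.7);
    # the agent count scatters around that value from run to run.
    ```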

  20. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  2. Efficacy measures associated to a plantar pressure based classification system in diabetic foot medicine.

    PubMed

    Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip

    2016-09-01

    The concept of 'classification' has, as for many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed to determine efficacy measures of a recently published plantar pressure based classification system. Technical efficacy of the classification system was investigated by applying a high-resolution, pixel-level analysis to the normalized plantar pressure pedobarographic fields of the original experimental dataset consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since their gait analysis. Eight of the eleven ulcers developed in a region of the foot which had the highest forces. The overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.
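    The reported efficacy measures reduce to confusion-matrix counts; a sketch of how recognition rate (accuracy), sensitivity, and specificity are computed for one class (the labels below are hypothetical, not the study's data):

    ```python
    def efficacy_measures(y_true, y_pred, positive):
        """Recognition rate (accuracy), sensitivity, and specificity for
        one class of a multi-group classification."""
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        accuracy = (tp + tn) / len(y_true)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity

    # Hypothetical labels for groups A-D of the classification system:
    truth = ["A", "A", "B", "B", "C", "D"]
    pred  = ["A", "B", "B", "B", "C", "D"]
    acc, sens, spec = efficacy_measures(truth, pred, "A")
    ```

    Repeating this per group over each cross-validation subset yields exactly the per-group sensitivity/specificity tables the study reports against the 0.7 and 0.8 levels.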

  3. A remote sensing based vegetation classification logic for global land cover analysis

    USGS Publications Warehouse

    Running, Steven W.; Loveland, Thomas R.; Pierce, Lars L.; Nemani, R.R.; Hunt, E. Raymond

    1995-01-01

    This article proposes a simple new logic for classifying global vegetation. The critical features of this classification are that 1) it is based on simple, observable, unambiguous characteristics of vegetation structure that are important to ecosystem biogeochemistry and can be measured in the field for validation, 2) the structural characteristics are remotely sensible so that repeatable and efficient global reclassifications of existing vegetation will be possible, and 3) the defined vegetation classes directly translate into the biophysical parameters of interest by global climate and biogeochemical models. A first test of this logic for the continental United States is presented based on an existing 1 km AVHRR normalized difference vegetation index database. Procedures for solving critical remote sensing problems needed to implement the classification are discussed. Also, some inferences from this classification to advanced vegetation biophysical variables such as specific leaf area and photosynthetic capacity useful to global biogeochemical modeling are suggested.
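    The AVHRR database mentioned above is built on the normalized difference vegetation index (NDVI); a sketch of the index plus a toy threshold rule (the cutoffs are illustrative assumptions only; the proposed classification logic itself rests on vegetation structure, not these numbers):

    ```python
    def ndvi(nir, red):
        """Normalized difference vegetation index from near-infrared
        and red reflectance."""
        return (nir - red) / (nir + red)

    def coarse_class(nir, red):
        """Toy threshold rule on NDVI, for illustration only."""
        v = ndvi(nir, red)
        if v < 0.1:
            return "barren/water"
        if v < 0.4:
            return "sparse vegetation"
        return "dense vegetation"

    print(coarse_class(nir=0.5, red=0.1))  # dense vegetation (NDVI ~ 0.67)
    ```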

  4. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.

    PubMed

    Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-01-01

    This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature-extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method which combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level, which prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that using the AR feature extractor and the DBN classifier, an improved classification performance is achieved, with a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN (sensitivity of 80.8%, specificity of 77.8%, accuracy of 79.3%, AUROC of 0.83) and BNN classifiers (sensitivity of 84.3%, specificity of 83%, accuracy of 83.6%, AUROC of 0.87). Using the sparse-DBN classifier, the classification performance improved further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
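    The AR feature extractor fits an autoregressive model to each EEG epoch and uses the coefficients as the feature vector; one standard route is the Yule-Walker equations solved by Levinson-Durbin recursion, sketched below (the model order and test signal are illustrative, not the paper's settings):

    ```python
    def autocorr(x, max_lag):
        """Biased sample autocorrelation of a signal up to max_lag."""
        n = len(x)
        return [sum(x[t] * x[t + k] for t in range(n - k)) / n
                for k in range(max_lag + 1)]

    def levinson_durbin(r, order):
        """Solve the Yule-Walker equations for AR coefficients a[1..p]
        via Levinson-Durbin recursion; returns the coefficients and
        the residual (prediction-error) variance."""
        a = [0.0] * (order + 1)
        err = r[0]
        for i in range(1, order + 1):
            k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
            new_a = a[:]
            new_a[i] = k
            for j in range(1, i):
                new_a[j] = a[j] - k * a[i - j]
            a = new_a
            err *= (1.0 - k * k)
        return a[1:], err

    # An AR(1) process with coefficient 0.5 has autocorrelation 0.5**k;
    # the recursion should recover that coefficient (second coef ~ 0):
    coefs, err = levinson_durbin([1.0, 0.5, 0.25], 2)
    ```

    Per epoch and channel, the resulting coefficient vector (here `coefs`) is what gets fed to the ANN/BNN/DBN/sparse-DBN classifiers.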

  5. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks

    PubMed Central

    Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R.; Nguyen, Tuan N.; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T.

    2017-01-01

    This paper presents an improvement of classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level, which prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. With the AR feature extractor and the DBN classifier, the system achieves a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN classifier (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and the BNN classifier (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). Using the sparse-DBN classifier, performance improves further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8%, 9.5%, and 2.5% over the ANN, BNN, and DBN classifiers, respectively. PMID:28326009

  6. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors

    NASA Astrophysics Data System (ADS)

    Cenek, Martin; Dahl, Spencer K.

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
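    The framework's recording of agents switching from one behavior pattern to another can be illustrated with a minimal transition-probability estimator; the pattern labels and agent sequences below are hypothetical:

    ```python
    import numpy as np

    def transition_matrix(behavior_sequences, n_patterns):
        """Estimate the probability of agents changing from one behavior
        pattern to another, given per-agent sequences of pattern labels."""
        counts = np.zeros((n_patterns, n_patterns))
        for seq in behavior_sequences:
            for a, b in zip(seq, seq[1:]):      # consecutive time steps
                counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0           # avoid division by zero
        return counts / row_sums

    # Two agents observed over four time steps; patterns coded as 0, 1, 2.
    P = transition_matrix([[0, 0, 1, 2], [0, 1, 1, 2]], n_patterns=3)
    ```

    Row `i` of `P` then gives the empirical probability of moving from pattern `i` to each other pattern, which is the kind of landscape the network-based analysis operates on.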

  7. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors.

    PubMed

    Cenek, Martin; Dahl, Spencer K

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.

  8. Attribute-based classification for zero-shot visual object categorization.

    PubMed

    Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan

    2014-03-01

    We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
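    The attribute-based idea can be sketched as direct attribute prediction: prelearned attribute classifiers output probabilities, and an unseen class is scored by how well those probabilities match its attribute signature. The class-attribute matrix and probabilities below are illustrative, not taken from the Animals with Attributes data set:

    ```python
    import numpy as np

    # Hypothetical class-attribute matrix: rows are unseen classes, columns
    # are binary semantic attributes (e.g. "striped", "four-legged", "aquatic").
    class_attributes = np.array([
        [1, 1, 0],   # zebra-like class
        [0, 1, 0],   # horse-like class
        [0, 0, 1],   # dolphin-like class
    ])

    def zero_shot_predict(attribute_probs, class_attr):
        """Score each unseen class by the log-likelihood of its binary
        attribute signature under independently prelearned attribute
        probabilities, then pick the best class (simplified sketch)."""
        p = np.clip(attribute_probs, 1e-8, 1 - 1e-8)
        scores = class_attr @ np.log(p) + (1 - class_attr) @ np.log(1 - p)
        return int(np.argmax(scores))

    # Attribute classifiers report: striped=0.9, four-legged=0.8, aquatic=0.1
    assert zero_shot_predict(np.array([0.9, 0.8, 0.1]), class_attributes) == 0
    ```

    No training images of the target classes are needed: only the attribute classifiers and each class's attribute description.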

  9. Personalized E- learning System Based on Intelligent Agent

    NASA Astrophysics Data System (ADS)

    Duo, Sun; Ying, Zhou Cai

    Lack of personalization is the key shortcoming of traditional e-learning systems. This paper analyzes personal characteristics in e-learning activity. To support personalized e-learning, a personalized e-learning system based on an intelligent agent is proposed and realized. The structure of the system, its work process, and the design and realization of the intelligent agent are introduced. After trial use of the system by an online school, we found that it could improve learners' initiative and participation and provide them with personalized knowledge services. We therefore consider it a practical solution for realizing self-learning and self-promotion in the age of lifelong education.

  10. Applications of Agent Based Approaches in Business (A Three Essay Dissertation)

    ERIC Educational Resources Information Center

    Prawesh, Shankar

    2013-01-01

    The goal of this dissertation is to investigate the enabling role that agent based simulation plays in business and policy. The aforementioned issue has been addressed in this dissertation through three distinct, but related essays. The first essay is a literature review of different research applications of agent based simulation in various…

  11. A k-mer-based barcode DNA classification methodology based on spectral representation and a neural gas network.

    PubMed

    Fiannaca, Antonino; La Rosa, Massimo; Rizzo, Riccardo; Urso, Alfonso

    2015-07-01

    In this paper, an alignment-free method for DNA barcode classification that is based on both a spectral representation and a neural gas network for unsupervised clustering is proposed. In the proposed methodology, distinctive words are identified from a spectral representation of DNA sequences. A taxonomic classification of the DNA sequence is then performed using the sequence signature, i.e., the smallest set of k-mers that can assign a DNA sequence to its proper taxonomic category. Experiments were then performed to compare our method with other supervised machine learning classification algorithms, such as support vector machine, random forest, ripper, naïve Bayes, ridor, and classification tree, which also consider short DNA sequence fragments of 200 and 300 base pairs (bp). The experimental tests were conducted over 10 real barcode datasets belonging to different animal species, which were provided by the online resource "Barcode of Life Database". The experimental results showed that our k-mer-based approach is directly comparable, in terms of accuracy, recall and precision metrics, with the other classifiers when considering full-length sequences. In addition, we demonstrate the robustness of our method when the classification task is performed with a set of short DNA sequences randomly extracted from the original data. For example, the proposed method reaches an accuracy of 64.8% at the species level with 200-bp fragments. Under the same conditions, the best of the other classifiers (random forest) reaches an accuracy of 20.9%. Our results indicate a clear improvement over the other classifiers for the study of short DNA barcode sequence fragments. Copyright © 2015 Elsevier B.V. All rights reserved.
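    The spectral (k-mer) representation underlying the method can be sketched as a count vector over all 4^k possible k-mers; this is a generic version of such features, not the paper's exact pipeline:

    ```python
    from itertools import product
    import numpy as np

    def kmer_spectrum(seq, k=3):
        """Alignment-free spectral representation of a DNA sequence:
        counts of all 4**k possible k-mers (simplified feature sketch)."""
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        vec = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            km = seq[i:i + k]
            if km in index:                 # skip ambiguous bases like 'N'
                vec[index[km]] += 1
        return vec

    v = kmer_spectrum("ACGTACGT", k=3)
    assert v.sum() == 6                     # 8 - 3 + 1 sliding windows
    ```

    Vectors of this form can then be fed to a neural gas network for clustering, or to any of the supervised classifiers compared in the paper.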

  12. Agent-Based Modeling of Chronic Diseases: A Narrative Review and Future Research Directions.

    PubMed

    Li, Yan; Lawley, Mark A; Siscovick, David S; Zhang, Donglan; Pagán, José A

    2016-05-26

    The United States is experiencing an epidemic of chronic disease. As the US population ages, health care providers and policy makers urgently need decision models that provide systematic, credible prediction regarding the prevention and treatment of chronic diseases to improve population health management and medical decision-making. Agent-based modeling is a promising systems science approach that can model complex interactions and processes related to chronic health conditions, such as adaptive behaviors, feedback loops, and contextual effects. This article introduces agent-based modeling by providing a narrative review of agent-based models of chronic disease and identifying the characteristics of various chronic health conditions that must be taken into account to build effective clinical- and policy-relevant models. We also identify barriers to adopting agent-based models to study chronic diseases. Finally, we discuss future research directions of agent-based modeling applied to problems related to specific chronic health conditions.

  13. Development and verification of an agent-based model of opinion leadership.

    PubMed

    Anderson, Christine A; Titler, Marita G

    2014-09-27

    The use of opinion leaders is a strategy used to speed the process of translating research into practice. Much is still unknown about opinion leader attributes and activities and the contexts in which they are most effective. Agent-based modeling is a methodological tool that enables demonstration of the interactive and dynamic effects of individuals and their behaviors on other individuals in the environment. The purpose of this study was to develop and test an agent-based model of opinion leadership. The details of the design and verification of the model are presented. The agent-based model was developed by using a software development platform to translate an underlying conceptual model of opinion leadership into a computer model. Individual agent attributes (for example, motives and credibility) and behaviors (seeking or providing an opinion) were specified as variables in the model in the context of a fictitious patient care unit. The verification process was designed to test whether or not the agent-based model was capable of reproducing the conditions of the preliminary conceptual model. The verification methods included iterative programmatic testing ('debugging') and exploratory analysis of simulated data obtained from execution of the model. The simulation tests included a parameter sweep, in which the model input variables were adjusted systematically, followed by an individual time series experiment. Statistical analysis of model output for the 288 possible simulation scenarios in the parameter sweep revealed that the agent-based model performed consistently with the posited relationships in the underlying model. Nurse opinion leaders act on the strength of their beliefs and, as a result, become an opinion resource for their uncertain colleagues, depending on their perceived credibility. Over time, some nurses consistently act as this type of resource and have the potential to emerge as opinion leaders in a context where uncertainty exists.

  14. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely under studied, and represents an area where further research is needed to…

  15. Classification and Quality Evaluation of Tobacco Leaves Based on Image Processing and Fuzzy Comprehensive Evaluation

    PubMed Central

    Zhang, Fan; Zhang, Xinhong

    2011-01-01

    Most classification, quality evaluation, or grading of flue-cured tobacco leaves is performed manually, relying on the judgmental experience of experts, and is inevitably limited by personal, physical and environmental factors. The classification and quality evaluation are therefore subjective and experience-based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy sets theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the features of tobacco leaves in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show that the accuracy rate of classification is about 94% for the trained tobacco leaves, and about 72% for the non-trained tobacco leaves. We believe that fuzzy comprehensive evaluation is a viable approach for the automatic classification and quality evaluation of tobacco leaves. PMID:22163744
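    The two-level fuzzy comprehensive evaluation can be sketched as two rounds of weighted aggregation of membership matrices; the feature memberships and weights below are hypothetical, not from the paper:

    ```python
    import numpy as np

    def fce(R, w):
        """One FCE level: feature weights w times membership matrix R
        (features x quality grades), normalized to a grade membership vector."""
        b = np.asarray(w) @ np.asarray(R)
        return b / b.sum()

    # First level: hypothetical memberships of color and size sub-features
    # in three quality grades (good / medium / poor).
    R_color = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]   # hue, saturation
    R_size  = [[0.5, 0.4, 0.1], [0.4, 0.4, 0.2]]   # length, width
    b_color = fce(R_color, [0.5, 0.5])
    b_size  = fce(R_size,  [0.6, 0.4])

    # Second level combines the first-level results with factor weights.
    grade = fce(np.vstack([b_color, b_size]), [0.7, 0.3])
    assert grade.argmax() == 0                      # this leaf grades "good"
    ```

    In the paper's system, the membership matrices themselves would come from a neural network estimating the feature membership functions.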

  16. Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen

    2017-08-29

    In recent years, sparse representation based classification (SRC) has been one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are simultaneously learned with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals are minimized in the projected low-dimensional space. Thus, the resulting representations fit SRC well and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained via alternating optimization. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
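    The SRC decision rule that OCPD-SRC builds on can be sketched as assigning a sample to the class whose sub-dictionary reconstructs it with the smallest residual; in this simplified sketch, ordinary least squares stands in for the sparse coding step:

    ```python
    import numpy as np

    def src_classify(x, dictionaries):
        """SRC decision rule: pick the class whose sub-dictionary gives the
        smallest reconstruction residual for x. Least squares replaces the
        sparse coding step here, purely for illustration."""
        residuals = []
        for D in dictionaries:
            coef, *_ = np.linalg.lstsq(D, x, rcond=None)
            residuals.append(np.linalg.norm(x - D @ coef))
        return int(np.argmin(residuals))

    rng = np.random.default_rng(0)
    D0 = rng.normal(size=(20, 5))       # class-0 sub-dictionary (atoms as columns)
    D1 = rng.normal(size=(20, 5))       # class-1 sub-dictionary
    x = D0 @ rng.normal(size=5)         # sample lying in class 0's span
    assert src_classify(x, [D0, D1]) == 0
    ```

    OCPD-SRC additionally learns projection matrices so that this residual-based rule stays discriminative after a domain shift.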

  17. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Barthel, Roland

    2016-04-01

    When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty about structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, in which information is transferred from similar, but well-explored and better understood systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs, and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where the organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework, it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches to grouping hydrographs, mostly based on a similarity measure and previously used only in local-scale studies, can be found in the literature. These are tested alongside different global feature extraction techniques. The resulting classifications are then compared to a visual, expert-assessment-based classification which serves as a reference. A ranking of the classification methods is carried out and differences are shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition.
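    A series-correlation similarity measure of the kind compared in the study can be sketched as a pairwise correlation-distance matrix over hydrographs; the synthetic series below are illustrative stand-ins for real groundwater records:

    ```python
    import numpy as np

    def correlation_distance(series):
        """Pairwise correlation distance (1 - Pearson r) between time series,
        one of the similarity measures usable for grouping hydrographs."""
        X = np.asarray(series, dtype=float)
        X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
        r = (X @ X.T) / X.shape[1]          # Pearson correlation matrix
        return 1.0 - r

    t = np.linspace(0, 4 * np.pi, 100)
    h1, h2, h3 = np.sin(t), np.sin(t) + 0.1, np.cos(t)
    D = correlation_distance([h1, h2, h3])
    assert D[0, 1] < 0.01       # offset copy: nearly identical dynamics
    ```

    A distance matrix like `D` can then be fed to any standard clustering routine to produce candidate hydrograph classes for comparison against the expert reference.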

  18. Robust point cloud classification based on multi-level semantic relationships for urban scenes

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo

    2017-07-01

    The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to counter the decreased discriminative power of features for classification. However, previous approaches to exploiting contextual information are either too restrictive or operate only within a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile, incrementally propagates classification cues from individual points to the object level, and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacent relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of the classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.

  19. Semiotics and agents for integrating and navigating through multimedia representations of concepts

    NASA Astrophysics Data System (ADS)

    Joyce, Dan W.; Lewis, Paul H.; Tansley, Robert H.; Dobie, Mark R.; Hall, Wendy

    1999-12-01

    The purpose of this paper is two-fold. We begin by exploring the emerging trend of viewing multimedia information in terms of low-level and high-level components; the former being feature-based and the latter the 'semantics' intrinsic to what is portrayed by the media object. Traditionally, this has been viewed through analogies with generative linguistics. Recently, a new perspective based on the semiotic tradition has been alluded to in several papers. We believe this to be a more appropriate approach. From this, we propose an approach for tackling the problem which uses an associative data structure expressing authored information together with intelligent agents acting autonomously over this structure. We then show how neural networks can be used to implement such agents. The agents act as 'vehicles' for bridging the gap between multimedia semantics and concrete expressions of high-level knowledge, but we suggest that traditional neural network techniques for classification are not architecturally adequate.

  20. Cancer Pain: A Critical Review of Mechanism-based Classification and Physical Therapy Management in Palliative Care

    PubMed Central

    Kumar, Senthil P

    2011-01-01

    Mechanism-based classification and physical therapy management of pain are essential to effectively manage painful symptoms in patients attending palliative care. The objective of this review is to provide a detailed review of mechanism-based classification and physical therapy management of patients with cancer pain. Cancer pain can be classified based upon pain symptoms, pain mechanisms and pain syndromes. Classification based upon mechanisms not only addresses the underlying pathophysiology but also provides an understanding of the patient's symptoms and treatment responses. Existing evidence suggests that five mechanisms – central sensitization, peripheral sensitization, sympathetically maintained pain, nociceptive and cognitive-affective – operate in patients with cancer pain. A summary of studies showing evidence for physical therapy treatment methods for cancer pain follows, with suggested therapeutic implications. Effective palliative physical therapy care using a mechanism-based classification model should be tailored to suit each patient's findings, using a biopsychosocial model of pain. PMID:21976851

  1. An evidence-based diagnostic classification system for low back pain

    PubMed Central

    Vining, Robert; Potocki, Eric; Seidman, Michael; Morgenthal, A. Paige

    2013-01-01

    Introduction: While clinicians generally accept that musculoskeletal low back pain (LBP) can arise from specific tissues, it remains difficult to confirm specific sources. Methods: Based on evidence supported by diagnostic utility studies, doctors of chiropractic functioning as members of a research clinic created a diagnostic classification system, with a corresponding exam and checklist, based on strength of evidence and in-office efficiency. Results: The diagnostic classification system contains one screening category, two pain categories (nociceptive and neuropathic), one functional evaluation category, and one category for unknown or poorly defined diagnoses. The nociceptive and neuropathic pain categories are each divided into 4 subcategories. Conclusion: This article describes and discusses the strength of evidence surrounding diagnostic categories for an in-office, clinical exam and checklist tool for LBP diagnosis. The use of a standardized tool for diagnosing low back pain in clinical and research settings is encouraged. PMID:23997245

  2. Classification of ligand molecules in PDB with graph match-based structural superposition.

    PubMed

    Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi

    2016-12-01

    The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small-molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, a classification by the previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature of the current classification system is the increased number of singleton clusters, which contain only one ligand molecule each. Inspection of the singletons present in the current classification system but not in the previous one implied that the major factors for their isolation were differences in chirality, cyclic conformations, separation of substructures, and bond length. Comparisons between the current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.
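    The structural-superposition verification step can be sketched with the Kabsch algorithm, which computes the RMSD between two atom sets after optimal rigid alignment; the abstract does not give COMPLIG's exact criteria, so this is a generic stand-in:

    ```python
    import numpy as np

    def superposition_rmsd(P, Q):
        """RMSD between matched atom coordinates P and Q after optimal
        rigid superposition (Kabsch algorithm); a low value supports an
        atom-atom matching, a high value argues against it."""
        P = P - P.mean(axis=0)                  # remove translation
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T # optimal rotation
        return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

    rng = np.random.default_rng(1)
    P = rng.normal(size=(6, 3))                 # hypothetical atom coordinates
    Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])    # rotated, translated copy
    assert superposition_rmsd(P, Q) < 1e-8
    ```

    The reflection guard matters for ligands: it is what lets a superposition-based check distinguish chirality, one of the isolation factors noted above.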

  3. Research on environmental impact of water-based fire extinguishing agents

    NASA Astrophysics Data System (ADS)

    Wang, Shuai

    2018-02-01

    This paper reviews the current status of the application of water-based fire extinguishing agents and the environmental considerations that motivate the study of their toxicity. It also systematically reviews currently available test methods for the toxicity and environmental impact of water-based fire extinguishing agents, illustrates the main requirements and relevant test methods, and offers findings for future research. The paper also discusses the limitations of current studies.

  4. Agent-based modeling: Methods and techniques for simulating human systems

    PubMed Central

    Bonabeau, Eric

    2002-01-01

    Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407

  5. Classification of Regional Ionospheric Disturbances Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil

    2016-07-01

    Ionosphere is an anisotropic, inhomogeneous, time-varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only through indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive range of periods, durations, amplitudes and frequencies of disturbances. Since geomagnetic and solar indices such as Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and W-index provide information about variability on a global scale, identification and classification of regional disturbances poses a challenge. The main aim of this study is to classify the regional effects of global geomagnetic storms according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, one of the major parameters of the ionosphere, is used together with solar and geomagnetic indices to model the regional and local variability that differs from global activity. For the automated classification of regional disturbances, a classification technique based on a robust machine learning method that has found widespread use, the Support Vector Machine (SVM), is proposed. SVM is a supervised learning model for classification, with an associated learning algorithm that analyzes the data and recognizes patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding data into higher-dimensional feature spaces. Performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011.
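    The SVM classifier at the heart of the proposed technique can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method; the two-dimensional features standing in for TEC-derived inputs are hypothetical:

    ```python
    import numpy as np

    def train_linear_svm(X, y, lam=0.01, epochs=500, seed=0):
        """Minimal linear SVM trained with the Pegasos sub-gradient method,
        a stand-in for a full SVM solver (labels y must be +/-1)."""
        Xb = np.hstack([X, np.ones((len(X), 1))])   # bias feature
        rng = np.random.default_rng(seed)
        w = np.zeros(Xb.shape[1]); t = 0
        for _ in range(epochs):
            for i in rng.permutation(len(Xb)):
                t += 1
                eta = 1.0 / (lam * t)
                margin = y[i] * (Xb[i] @ w)
                w *= (1 - eta * lam)                # regularization shrink
                if margin < 1:                      # hinge-loss sub-gradient
                    w += eta * y[i] * Xb[i]
        return w

    def svm_predict(X, w):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return np.sign(Xb @ w)

    # Hypothetical two-class features standing in for quiet vs disturbed TEC.
    X = np.array([[1.0, 1.0], [1.0, 2.0], [5.0, 6.0], [6.0, 7.0]])
    y = np.array([-1, -1, 1, 1])
    w = train_linear_svm(X, y)
    ```

    Nonlinear classification, as mentioned in the abstract, would replace the inner products with a kernel function rather than the explicit features used here.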

  6. Classification and global distribution of ocean precipitation types based on satellite passive microwave signatures

    NASA Astrophysics Data System (ADS)

    Gautam, Nitin

    The main objectives of this thesis are to develop a robust statistical method for the classification of ocean precipitation based on physical properties to which the SSM/I is sensitive and to examine how these properties vary globally and seasonally. A two-step approach is adopted for the classification of oceanic precipitation classes from multispectral SSM/I data: (1) we subjectively define precipitation classes using a priori information about the precipitating system and its possible distinct signature on SSM/I data, such as scattering by ice particles aloft in the precipitating cloud, emission by liquid rain water below the freezing level, and the difference of polarization at 19 GHz (an indirect measure of optical depth); (2) we then develop an objective classification scheme which is found to reproduce the subjective classification with high accuracy. This hybrid strategy allows us to use the characteristics of the data to define and encode classes and helps retain the physical interpretation of classes. Classification methods based on k-nearest neighbors and a neural network are developed to objectively classify six precipitation classes. The neural network method is found to yield high accuracy for all precipitation classes. An inversion method based on a minimum variance approach is used to retrieve gross microphysical properties of these precipitation classes, such as the column-integrated liquid water path, ice water path, and rain water path. The classification method is then applied to 2 years (1991-92) of SSM/I data to examine and document the seasonal and global distribution of precipitation frequency corresponding to each of the six objectively defined classes. The characteristics of the distribution are found to be consistent with the assumptions used in defining these six precipitation classes and with well-known climatological patterns of precipitation regions.
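    The k-nearest-neighbor scheme, one of the two objective classifiers developed in the thesis, can be sketched as a Euclidean-distance majority vote; the brightness-temperature-like features and class labels below are invented for illustration:

    ```python
    import numpy as np

    def knn_classify(X_train, y_train, x, k=3):
        """k-nearest-neighbor classification: majority vote among the k
        training samples closest to x in Euclidean distance."""
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(d)[:k]
        votes = y_train[nearest]
        return int(np.bincount(votes).argmax())

    # Hypothetical two-channel brightness temperatures for two
    # precipitation classes (0 and 1).
    X = np.array([[200.0, 250.0], [205.0, 255.0], [270.0, 280.0], [275.0, 285.0]])
    y = np.array([0, 0, 1, 1])
    assert knn_classify(X, y, np.array([202.0, 252.0]), k=3) == 0
    ```

    The neural network alternative developed in the thesis learns a decision boundary instead of voting over stored samples, which is why it can generalize better across all six classes.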

  7. [Research on the marketing status of antimicrobial products and the use of antimicrobial agents indicated on product labels from 1991 through 2005].

    PubMed

    Nakashima, Harunobu; Miyano, Naoko; Matsunaga, Ichiro; Nakashima, Naomi; Kaniwa, Masa-aki

    2007-05-01

    To clarify the marketing status of antimicrobial products, descriptions on the labels of commercially available antimicrobial products were investigated from 1991 through 2005, and the results were analyzed using a database system on antimicrobial deodorant agents. A classification table of household antimicrobial products was prepared and revised, based on which target products were reviewed for any changes in the product type. The number of antimicrobial products markedly increased over 3 years starting from 1996, among which there were many products apparently not requiring antimicrobial processing. More recently, in the 2002 and 2004 surveys, while sales of kitchenware and daily necessities decreased, chemical products, baby articles, and articles for pets increased; this poses new problems. To clarify the use of antimicrobial agents in the target products, a 3-step (large, intermediate, small) classification table of antimicrobial agents was also prepared, based on which antimicrobial agents indicated on the product labels were checked. The rate of identifying the agents increased. However, this is because of the increase of chemical products and baby articles, both of which more frequently indicated the ingredient agents on the labels, and the decrease of kitchenware and daily necessities, which less frequently indicated them on the labels. Therefore, there has been little change in the actual identification rate. The agents used are characterized by product type: quaternary ammonium salts, metal salts, and organic antimicrobials are commonly used in textiles, plastics, and chemical products, respectively. Since the use of natural organic agents has recently increased, the safety of these agents should be evaluated.

  8. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
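
    The frequency-domain side of the feature extraction can be sketched with a single-level Haar DWT applied recursively; this is a simplification with toy signals, not the paper's exact wavelet, threshold, or kernel-ICA choices:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split an
    even-length signal into approximation and detail coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def dwt_features(signal, levels=3):
    """Use the detail-band energy at each decomposition level as a
    simple frequency-domain feature vector for an ECG beat."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)
        feats.append(sum(d * d for d in detail))
    return feats
```

A flat beat yields zero detail energy at every level, while a rapidly alternating one concentrates energy in the first (highest-frequency) band.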

  9. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  10. Connectionist agent-based learning in bank-run decision making

    NASA Astrophysics Data System (ADS)

    Huang, Weihong; Huang, Qiao

    2018-05-01

    It is of utmost importance for policy makers, bankers, and investors to thoroughly understand the probability of bank run (PBR), which was often neglected in the classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents simulation results on the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network and react to bad news, which might lead to a bank run. Both a rise in the late payoff, R, and in the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing a bank run. This paper is one of the pioneering works linking agent-based computational economics and behavioral economics.
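
    A deliberately crude sketch of the global-games flavor of such a simulation (all parameters and the withdrawal rule are hypothetical; the paper's cognitive-affective network is not modeled here) could be:

```python
import random

def bank_run_probability(n_agents=100, n_trials=2000, theta=0.6,
                         noise=0.2, threshold=0.5, seed=1):
    """Each agent observes the fundamental `theta` plus idiosyncratic
    Gaussian noise (highly correlated but unidentical beliefs) and
    withdraws early when its signal falls below `threshold`.  A run
    occurs when more than half the agents withdraw; the function
    returns the empirical probability of bank run (PBR)."""
    rng = random.Random(seed)
    runs = 0
    for _ in range(n_trials):
        withdrawals = sum(
            1 for _ in range(n_agents)
            if theta + rng.gauss(0, noise) < threshold)
        if withdrawals > n_agents // 2:
            runs += 1
    return runs / n_trials

print(bank_run_probability())           # strong fundamentals: low PBR
print(bank_run_probability(theta=0.4))  # weak fundamentals: high PBR
```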

  11. Classification of EEG Signals Based on Pattern Recognition Approach.

    PubMed

    Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed

    2017-01-01

    Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients and relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset comprising two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detail coefficients (D5), derived from the sub-band range 3.90-7.81 Hz, were 98.57 and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy.
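
    Relative wavelet energy and the zero-mean/unit-variance normalization described above can be sketched as follows (band coefficients are toy values, not EEG data):

```python
from statistics import mean, pstdev

def relative_wavelet_energy(bands):
    """Relative energy of each wavelet sub-band: the band's energy
    divided by the total energy across all bands."""
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies)
    return [e / total for e in energies]

def zscore(features):
    """Normalize features to zero mean and unit variance, as done
    before FDR/PCA feature selection."""
    mu, sigma = mean(features), pstdev(features)
    return [(f - mu) / sigma for f in features]

bands = [[2.0, 2.0], [1.0, 1.0], [1.0, -1.0]]  # A5-like and detail bands
rwe = relative_wavelet_energy(bands)           # relative energies sum to 1
z = zscore(rwe)                                # features ready for FDR/PCA
```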

  12. Classification of EEG Signals Based on Pattern Recognition Approach

    PubMed Central

    Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed

    2017-01-01

    Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a “pattern recognition” approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients and relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset comprising two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detail coefficients (D5), derived from the sub-band range 3.90–7.81 Hz, were 98.57 and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11–89.63% and 91.60–81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy. PMID

  13. A Neural-Network-Based Semi-Automated Geospatial Classification Tool

    NASA Astrophysics Data System (ADS)

    Hale, R. G.; Herzfeld, U. C.

    2014-12-01

    North America's largest glacier system, the Bering Bagley Glacier System (BBGS) in Alaska, surged in 2011-2013, as shown by rapid mass transfer, elevation change, and heavy crevassing. Little is known about the physics controlling surge glaciers' semi-cyclic patterns; therefore, it is crucial to collect and analyze as much data as possible so that predictive models can be made. In addition, physical signs frozen in ice in the form of crevasses may help serve as a warning for future surges. The BBGS surge provided an opportunity to develop an automated classification tool for crevasse classification based on imagery collected from small aircraft. The classification allows one to link image classification to geophysical processes associated with ice deformation. The tool uses an approach that employs geostatistical functions and a feed-forward perceptron with error back-propagation. The connectionist-geostatistical approach uses directional experimental (discrete) variograms to parameterize images into a form that the neural network (NN) can recognize. In an application to airborne videographic data from the surge of the BBGS, an NN was able to distinguish 18 different crevasse classes with 95 percent or higher accuracy for over 3,000 images. Recognizing that each surge wave results in different crevasse types and that environmental conditions affect their appearance in imagery, we designed the tool's semi-automated pre-training algorithm to be adaptable. The tool can be optimized to specific settings and variables of image analysis (airborne and satellite imagery, different camera types, observation altitude, number and types of classes, and resolution). The generalization of the classification tool brings three important advantages: (1) multiple types of problems in geophysics can be studied, (2) the training process is sufficiently formalized to allow non-experts in neural nets to perform the training process, and (3) the time required to
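
    The directional experimental variogram used to parameterize imagery can be illustrated on a 1-D transect of pixel values; the tool's actual 2-D directional computation over image windows is a generalization of this sketch:

```python
def experimental_variogram(values, max_lag):
    """Experimental (discrete) variogram of a 1-D transect of pixel
    values: gamma(h) is half the mean squared difference of value
    pairs lag h apart.  Directional variograms take such transects
    along chosen image directions, producing a fixed-length signature
    a neural network can recognize."""
    gamma = []
    for h in range(1, max_lag + 1):
        pairs = [(values[i] - values[i + h]) ** 2
                 for i in range(len(values) - h)]
        gamma.append(0.5 * sum(pairs) / len(pairs))
    return gamma

# A high-frequency "crevassed" transect vs. a flat one.
print(experimental_variogram([0, 1] * 4, 2))   # → [0.5, 0.0]
print(experimental_variogram([3, 3, 3, 3], 2)) # → [0.0, 0.0]
```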

  14. Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging.

    PubMed

    Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul

    2017-12-01

    There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Due to the lack of source segregation practice, a need for automated segregation of recyclables from MSW exists in developing countries. This paper reports a thermal-imaging-based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using the thermal imaging technique for classification, and a robotic system for sorting of recyclables, in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal-imaging-based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node, 6-strut pyramid structure which is being used by the NASA Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; the tetrahedron "moves" through a sequence of activities: strut extension, shifting of the center of gravity, and falling. Currently, strut extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendable strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.

  16. Agent-Based Modeling of Chronic Diseases: A Narrative Review and Future Research Directions

    PubMed Central

    Lawley, Mark A.; Siscovick, David S.; Zhang, Donglan; Pagán, José A.

    2016-01-01

    The United States is experiencing an epidemic of chronic disease. As the US population ages, health care providers and policy makers urgently need decision models that provide systematic, credible prediction regarding the prevention and treatment of chronic diseases to improve population health management and medical decision-making. Agent-based modeling is a promising systems science approach that can model complex interactions and processes related to chronic health conditions, such as adaptive behaviors, feedback loops, and contextual effects. This article introduces agent-based modeling by providing a narrative review of agent-based models of chronic disease and identifying the characteristics of various chronic health conditions that must be taken into account to build effective clinical- and policy-relevant models. We also identify barriers to adopting agent-based models to study chronic diseases. Finally, we discuss future research directions of agent-based modeling applied to problems related to specific chronic health conditions. PMID:27236380

  17. A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission

    PubMed Central

    Parker, Jon; Epstein, Joshua M.

    2013-01-01

    The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120

  18. Classification of complex networks based on similarity of topological network features

    NASA Astrophysics Data System (ADS)

    Attar, Niousha; Aliakbary, Sadegh

    2017-09-01

    Over the past few decades, networks have been widely used to model real-world phenomena. Real-world networks exhibit nontrivial topological characteristics and therefore, many network models are proposed in the literature for generating graphs that are similar to real networks. Network models reproduce nontrivial properties such as long-tail degree distributions or high clustering coefficients. In this context, we encounter the problem of selecting the network model that best fits a given real-world network. The need for a model selection method reveals the network classification problem, in which a target-network is classified into one of the candidate network models. In this paper, we propose a novel network classification method which is independent of the network size and employs an alignment-free metric of network comparison. The proposed method is based on supervised machine learning algorithms and utilizes the topological similarities of networks for the classification task. The experiments show that the proposed method outperforms state-of-the-art methods with respect to classification accuracy, time efficiency, and robustness to noise.
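
    A minimal sketch of size-independent topological feature extraction from an edge list; the paper's actual feature set and alignment-free comparison metric are considerably richer:

```python
from statistics import mean

def topological_features(edges):
    """Two size-independent topological features of an undirected
    graph: mean degree and average local clustering coefficient.
    Feature vectors like this can feed a supervised classifier that
    picks the best-fitting network model for a target network."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def local_cc(u):
        nb, k = adj[u], len(adj[u])
        if k < 2:
            return 0.0
        links = sum(1 for v in nb for w in nb if v < w and w in adj[v])
        return 2 * links / (k * (k - 1))

    return mean(len(nb) for nb in adj.values()), mean(local_cc(u) for u in adj)

# A triangle: every node has degree 2 and clustering coefficient 1.
print(topological_features([(0, 1), (1, 2), (0, 2)]))
```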

  19. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little, if any, attention has been paid to the useful information contained in responses to adjacent stimuli about the spatial location of target symbols. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed was achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work motivates searching the responses to peripheral stimulation for information that improves the performance of emerging visual ERP-based spellers.
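
    One way adjacent-stimulus evidence could be fused with the standard target score is a weighted sum per candidate symbol. The rule below is a hypothetical illustration of that idea, not the authors' SWLDA combination:

```python
def combined_score(target_scores, adjacency_scores, weight=0.5):
    """Hypothetical fusion rule: add a weighted share of the
    adjacent-stimulus evidence (e.g. 'upper row' / 'left column'
    classifier outputs) to the standard target-vs-non-target score
    for each candidate symbol, then pick the best symbol."""
    combined = {s: target_scores[s] + weight * adjacency_scores.get(s, 0.0)
                for s in target_scores}
    return max(combined, key=combined.get)

scores = {"A": 0.40, "B": 0.42, "C": 0.10}
adjacent = {"A": 0.30, "B": 0.05}   # neighbours of 'A' responded strongly
print(combined_score(scores, adjacent))  # → A
```

Without the adjacency term the ambiguous standard scores would have selected "B"; the extra spatial evidence tips the decision to "A".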

  20. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability by using immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.
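
    The immune-recognition idea can be caricatured as matching message features against a set of memorized spam detectors; the sketch below is a toy with made-up features and threshold, not the paper's AIS algorithm:

```python
def ais_classify(message_features, detectors, threshold=2):
    """Toy immune-recognition step: a message is flagged as spam when
    at least `threshold` of its features are matched by the detector
    set (features memorized from spam training examples).  Requiring
    multiple matches is one simple way to keep false positives down."""
    matches = sum(1 for f in message_features if f in detectors)
    return "spam" if matches >= threshold else "non-spam"

detectors = {"free", "winner", "click", "prize"}
print(ais_classify({"free", "prize", "meeting"}, detectors))   # → spam
print(ais_classify({"meeting", "agenda", "free"}, detectors))  # → non-spam
```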

  1. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  2. From learning taxonomies to phylogenetic learning: integration of 16S rRNA gene data into FAME-based bacterial classification.

    PubMed

    Slabbinck, Bram; Waegeman, Willem; Dawyndt, Peter; De Vos, Paul; De Baets, Bernard

    2010-01-30

    Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data: 16S rRNA gene sequence data is used for phylogenetic tree inference, and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails. Secondly, the hierarchical classification structure allows one to easily evaluate and visualize the resolution of FAME data for the discrimination of bacterial
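
    The 'phylogenetic learning' setup, where binary classifiers sit on the splits of a 16S-derived tree, can be sketched as follows. The threshold "classifiers" and species names are hypothetical placeholders; the paper trains Random Forest models at the splits:

```python
def classify_on_tree(tree, sample):
    """Route a FAME feature vector down a (16S-derived) binary tree.
    Each internal node holds a binary classifier deciding which
    subtree the sample belongs to; leaves are species labels.
    Node format: a label string (leaf) or (classifier, left, right)."""
    while not isinstance(tree, str):
        classifier, left, right = tree
        tree = left if classifier(sample) else right
    return tree

# Hypothetical two-split tree with simple threshold "classifiers".
tree = (lambda x: x[0] < 0.5,
        "species_A",
        (lambda x: x[1] < 0.5, "species_B", "species_C"))
print(classify_on_tree(tree, (0.9, 0.8)))  # → species_C
```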

  3. From learning taxonomies to phylogenetic learning: Integration of 16S rRNA gene data into FAME-based bacterial classification

    PubMed Central

    2010-01-01

    Background Machine learning techniques have been shown to improve bacterial species classification based on fatty acid methyl ester (FAME) data. Nonetheless, FAME analysis has a limited resolution for discrimination of bacteria at the species level. In this paper, we approach the species classification problem from a taxonomic point of view. Such a taxonomy or tree is typically obtained by applying clustering algorithms on FAME data or on 16S rRNA gene data. The knowledge gained from the tree can then be used to evaluate FAME-based classifiers, resulting in a novel framework for bacterial species classification. Results In view of learning in a taxonomic framework, we consider two types of trees. First, a FAME tree is constructed with a supervised divisive clustering algorithm. Subsequently, based on 16S rRNA gene sequence analysis, phylogenetic trees are inferred by the NJ and UPGMA methods. In this second approach, the species classification problem is based on the combination of two different types of data: 16S rRNA gene sequence data is used for phylogenetic tree inference, and the corresponding binary tree splits are learned based on FAME data. We call this learning approach 'phylogenetic learning'. Supervised Random Forest models are developed to train the classification tasks in a stratified cross-validation setting. In this way, better classification results are obtained for species that are typically hard to distinguish by a single or flat multi-class classification model. Conclusions FAME-based bacterial species classification is successfully evaluated in a taxonomic framework. Although the proposed approach does not improve the overall accuracy compared to flat multi-class classification, it has some distinct advantages. First, it has better capabilities for distinguishing species on which flat multi-class classification fails. Secondly, the hierarchical classification structure allows one to easily evaluate and visualize the resolution of FAME data for

  4. A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care

    NASA Astrophysics Data System (ADS)

    Tapia, Dante I.; Rodríguez, Sara; Corchado, Juan M.

    This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing the assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, mobility, etc., all of which are concepts defined by AmI. ALZ-MAS makes use of a service-oriented multi-agent architecture, called flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that an SOA approach is adequate to build distributed and highly dynamic AmI-based multi-agent systems.

  5. Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxiao; Myint, Soe W.; Zhang, Yujia; Galletti, Christopher; Zhang, Xiaoxiang; Turner, Billie L.

    2014-12-01

    Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area through the use of high spatial resolution aerial photography from National Agricultural Imagery Program. It employs an expert knowledge decision rule set and incorporates the cadastral GIS vector layer as auxiliary data. The classification rule was established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land cover types. Image segmentations were initially utilized to separate the aerial photos into parcel sized objects, and were further used for detailed land type identification within the parcels. Characteristics of image objects from contextual and geometrical aspects were used in the decision rule set to reduce the spectral limitation of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scales, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.

  6. Knowledge Management in Role Based Agents

    NASA Astrophysics Data System (ADS)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent system literature, the role concept is increasingly being researched to provide an abstraction that scopes the beliefs, norms, and goals of agents and shapes the relationships of the agents in the organization. In this research, we propose a knowledge base architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledge base architecture has a granular structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledge base architecture is also implemented and incorporated into the SEAGENT multi-agent system development framework.

  7. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  8. Natural Humic-Acid-Based Phototheranostic Agent.

    PubMed

    Miao, Zhao-Hua; Li, Kai; Liu, Pei-Ying; Li, Zhenglin; Yang, Huanjie; Zhao, Qingliang; Chang, Manli; Yang, Qingzhu; Zhen, Liang; Xu, Cheng-Yan

    2018-04-01

    Humic acids, a major constituent of natural organic carbon resources, are formed naturally through the microbial biodegradation of animal and plant residues. Owing to their numerous physiologically active groups (phenol, carboxyl, and quinone), the biomedical applications of humic acid have already been investigated across different cultures for several centuries or even longer. In this work, sodium humate, the sodium salt of humic acid, is explored as a phototheranostic agent for light-induced photoacoustic imaging and photothermal therapy based on its intrinsic absorption in the near-infrared region. The purified colloidal sodium humate exhibits a high photothermal conversion efficiency of up to 76.3%, much higher than that of the majority of state-of-the-art photothermal agents, including gold nanorods, Cu9S5 nanoparticles, antimonene quantum dots, and black phosphorus quantum dots, leading to obvious photoacoustic enhancement in vitro and in vivo. In addition, highly effective photothermal ablation of HeLa tumors is achieved through intratumoral injection. Impressively, sodium humate exhibits ultralow toxicity at both the cellular and animal levels. This work demonstrates the great potential of humic acids as light-mediated theranostic agents, thus expanding the application scope of traditional humic acids in the biomedical field. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    As a large collection of ink and wash paintings (IWP) is being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes, enabling histogram-based feature extraction that captures important cues reflecting the styles of different artists. These features then drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision from the multiple neural network outputs, in which the entropy is used as a pointer to combine the global and local features. Evaluations via experiments show that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.
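    The entropy-balanced fusion idea can be illustrated compactly: networks whose output distributions have lower entropy (i.e., are more confident) receive higher weight in the combined decision. This is a minimal sketch of the general principle, not the paper's exact fusion rule.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def entropy_balanced_fusion(prob_list):
    """Fuse softmax-style outputs of parallel classifiers: low-entropy
    (confident) outputs are weighted more heavily; the fused vector is
    renormalized to a probability distribution."""
    ents = np.array([entropy(p) for p in prob_list])
    w = 1.0 / (1.0 + ents)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused / fused.sum()
```

A confident network voting for one artist thus dominates a near-uniform (uninformative) one in the integrated decision.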

  10. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888
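    The CENTRIST descriptor mentioned above is built on the census transform: each pixel is replaced by an 8-bit code recording how it compares with its eight neighbours, and the histogram of these codes summarizes local structure. A minimal numpy sketch of that building block (not the paper's full hierarchical pipeline):

```python
import numpy as np

def census_transform(img):
    """Census transform: each interior pixel becomes an 8-bit code whose
    bits record which of its 8 neighbours are not greater than it."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out |= (nb <= center).astype(np.uint8) << bit
            bit += 1
    return out

def centrist(img):
    """CENTRIST sketch: 256-bin histogram of census-transform codes."""
    ct = census_transform(np.asarray(img, dtype=float))
    return np.bincount(ct.ravel(), minlength=256)
```

In the full method such histograms are computed over image blocks, max-pooled, and fed to the weighted kernel sparse representation classifier.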

  11. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  12. Interactive classification and content-based retrieval of tissue images

    NASA Astrophysics Data System (ADS)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  13. Peatland classification of West Siberia based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.

    2014-12-01

    Increasing interest in peatlands for the prediction of environmental changes requires an understanding of their geographical distribution. The West Siberian Plain is the biggest peatland area in Eurasia and is situated in the high latitudes, which are experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented toward environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and the 5th channel was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB and then classified in Multispec (Purdue Research Foundation) using the maximum likelihood algorithm of supervised classification. Training sample selection was mostly based on spectral signatures due to limited ancillary and high-resolution image data. As an additional source of information, we applied our field knowledge resulting from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. Accuracy assessment was based on 439 randomly assigned test sites and showed a final map accuracy of 80%. Total peatland area was estimated at 73.0 Mha. Various ridge
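    The threshold-based masking step described above can be sketched directly: compute the Green-Red Vegetation Index, GRVI = (green − red) / (green + red), and combine it with a band-5 water test. The threshold values below are illustrative placeholders, not the scene-specific values used in the study.

```python
import numpy as np

def peatland_water_masks(green, red, band5, grvi_thr=0.0, water_thr=0.05):
    """Masking sketch: GRVI separates vegetated (peatland-candidate) pixels;
    very low band-5 reflectance flags open water, which is excluded from
    the peatland mask."""
    grvi = (green - red) / np.clip(green + red, 1e-12, None)
    water = band5 < water_thr
    peat = (grvi > grvi_thr) & ~water
    return peat, water
```

The resulting peatland mask would then be filtered and passed to the maximum likelihood classifier on a scene-by-scene basis.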

  14. Automatic Building Detection based on Supervised Classification using High Resolution Google Earth Images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, S.; Ghaffarian, S.

    2014-08-01

    This paper presents a novel approach to building detection based on automating the training area collection stage for supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double thresholding technique. Next, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating the statistics of each buffer zone collected from the building areas, the Improved Parallelepiped Supervised Classification is executed to detect the buildings. Standard deviation thresholding is applied to the parallelepiped classification method to improve its accuracy. Finally, simple morphological operations are conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measurements (Precision, Recall and F1-score) to evaluate the pixel-based and object-based performance of the proposed approach. Evaluation of the results shows that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.4% and 85.3% overall pixel-based and object-based precision performance, respectively.
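    A basic parallelepiped classifier with standard deviation thresholding, as referenced above, assigns a pixel to a class only if every band value falls inside the box mean ± k·std built from that class's training buffer zone. A minimal sketch (the `k` value here is illustrative, not the paper's threshold):

```python
import numpy as np

def parallelepiped_classify(pixels, class_means, class_stds, k=2.0):
    """Parallelepiped classification sketch: pixels of shape (n, bands);
    a pixel gets class c if it lies inside that class's box in every band,
    and -1 (unclassified) if it matches no class."""
    labels = np.full(len(pixels), -1)
    for c, (mu, sd) in enumerate(zip(class_means, class_stds)):
        lo, hi = mu - k * sd, mu + k * sd
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == -1)] = c
    return labels
```

Tightening `k` shrinks the boxes, trading fewer false positives for more unclassified pixels, which is the role of the standard deviation threshold in the paper.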

  15. Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling

    NASA Technical Reports Server (NTRS)

    Campbell, Kenyth

    2012-01-01

    The dynamics of group affiliation and group dispersion is a concept most often studied so that political candidates can better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model used for this experiment many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real-world events. More importantly, the model in the study allows the results found in a smaller community to be translated into larger experiments to determine whether the results remain present on a much larger scale.

  16. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture

    PubMed Central

    Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called the connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics, and that models using a combination of different connectivity descriptors outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network. PMID:29089883

  17. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    PubMed

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network-based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called the connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics, and that models using a combination of different connectivity descriptors outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks by varying which connectivity descriptor combinations are used to train the network.

  18. Aesthetics-based classification of geological structures in outcrops for geotourism purposes: a tentative proposal

    NASA Astrophysics Data System (ADS)

    Mikhailenko, Anna V.; Nazarenko, Olesya V.; Ruban, Dmitry A.; Zayats, Pavel P.

    2017-03-01

    The current growth in geotourism requires the urgent development of classifications of geological features based on criteria that are relevant to tourist perceptions. It appears that structure-related patterns are especially attractive to geotourists. Consideration of the main criteria by which tourists judge beauty, together with observations made in the geodiversity hotspot of the Western Caucasus, allows us to propose a tentative aesthetics-based classification of geological structures in outcrops, with two classes and four subclasses. It is possible to distinguish between regular and quasi-regular patterns (i.e., striped and lined, and contorted patterns) and irregular and complex patterns (paysage and sculptured patterns). Typical examples of each case are found both in the study area and on a global scale. The application of the proposed classification makes it possible to emphasise features of interest to a broad range of tourists. Aesthetics-based (i.e., non-geological) classifications are necessary to take into account the visions and attitudes of visitors.

  19. Adaptive video-based vehicle classification technique for monitoring traffic : [executive summary].

    DOT National Transportation Integrated Search

    2015-08-01

    Federal Highway Administration (FHWA) recommends axle-based classification standards to map : passenger vehicles, single unit trucks, and multi-unit trucks, at Automatic Traffic Recorder (ATR) stations : statewide. Many state Departments of Transport...

  20. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required to train the classifier of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
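    The predict-then-fine-tune idea can be sketched for a linear classifier: extrapolate the temporal trend of the two most recent classifiers' parameters to seed the new one, then refine with the small current training set. The sketch below uses plain hinge-loss subgradient descent as the fine-tuning step; the paper's actual SVM training and trend model may differ, and all hyperparameters are illustrative.

```python
import numpy as np

def hinge_sgd(w, b, X, y, lr=0.01, epochs=50, lam=1e-3):
    """Fine-tune a linear classifier (labels in {-1, +1}) by subgradient
    descent on the L2-regularized hinge loss, starting from (w, b)."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (w @ xi + b)
            grad_w = lam * w - (yi * xi if margin < 1 else 0.0)
            grad_b = -yi if margin < 1 else 0.0
            w = w - lr * grad_w
            b = b - lr * grad_b
    return w, b

def sct_predict_then_tune(prev_w, prev_b, X_new, y_new):
    """SCT sketch: linearly extrapolate the trend of the two most recent
    classifiers to predict a rough classifier for the incoming image,
    then fine-tune it with the current (small) training set."""
    w0 = prev_w[-1] + (prev_w[-1] - prev_w[-2])
    b0 = prev_b[-1] + (prev_b[-1] - prev_b[-2])
    return hinge_sgd(w0, b0, X_new, y_new)
```

Because the seed classifier is already near the right position, far fewer labeled samples are needed for the new image than for training from scratch.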

  1. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  2. Evaluation of different classification methods for the diagnosis of schizophrenia based on functional near-infrared spectroscopy.

    PubMed

    Li, Zhaohua; Wang, Yuduo; Quan, Wenxiang; Wu, Tongning; Lv, Bin

    2015-02-15

    Converging evidence from near-infrared spectroscopy (NIRS) studies has shown that patients with schizophrenia exhibit abnormal functional activity in the prefrontal cortex during a verbal fluency task (VFT). Therefore, some studies have attempted to employ NIRS measurements to differentiate schizophrenia patients from healthy controls using different classification methods. However, no systematic evaluation has been conducted to compare their respective classification performances on the same study population. In this study, we evaluated the performance of four classification methods (linear discriminant analysis, k-nearest neighbors, Gaussian process classifier, and support vector machines) for NIRS-aided schizophrenia diagnosis. We recruited a large sample of 120 schizophrenia patients and 120 healthy controls and measured the hemoglobin response in the prefrontal cortex during the VFT using a multichannel NIRS system. Features for classification were extracted from three types of NIRS data in each channel. We subsequently performed a principal component analysis (PCA) for feature selection prior to comparing the different classification methods. We achieved a maximum accuracy of 85.83% and an overall mean accuracy of 83.37% using PCA-based feature selection on oxygenated hemoglobin signals and a support vector machine classifier. This is the first comprehensive evaluation of different classification methods for the diagnosis of schizophrenia based on different types of NIRS signals. Our results suggest that, with the appropriate classification method, NIRS has the potential to provide an effective objective biomarker for the diagnosis of schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
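    The PCA feature-selection step used before classifier comparison reduces the high-dimensional multichannel NIRS features to a few uncorrelated components. A minimal SVD-based sketch (component count and data are illustrative):

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA via SVD: center the (samples x features) matrix and project
    onto the leading principal components. Returns the reduced data and
    the component directions (rows)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]
```

The reduced matrix would then be fed to each of the four classifiers under comparison, keeping the evaluation identical up to the final classification stage.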

  3. Agent-Based Scientific Workflow Composition

    NASA Astrophysics Data System (ADS)

    Barker, A.; Mann, B.

    2006-07-01

    Agents are active, autonomous entities that interact with one another to achieve their objectives. This paper addresses how these active agents are a natural fit for consuming the passive Service-Oriented Architectures found in Internet and Grid systems, in order to compose, coordinate and execute e-Science experiments. A framework is introduced that allows an e-Science experiment to be described as a multi-agent system.

  4. Classification of EEG signals using a genetic-based machine learning classifier.

    PubMed

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which the mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.
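    FFT-based feature vectors of the kind mentioned above are commonly formed from mean spectral power in standard EEG bands. A minimal sketch (band edges are the conventional theta/alpha/beta ranges, not necessarily the paper's):

```python
import numpy as np

def band_power_features(x, fs, bands=((4, 8), (8, 13), (13, 30))):
    """FFT feature sketch: mean power of the signal's periodogram in each
    frequency band forms a compact feature vector for task discrimination."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])
```

Per-channel band powers (or AR coefficients) concatenated across electrodes would then be encoded into the condition strings classified by XCS.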

  5. Multiscale agent-based cancer modeling.

    PubMed

    Zhang, Le; Wang, Zhihui; Sagotsky, Jonathan A; Deisboeck, Thomas S

    2009-04-01

    Agent-based modeling (ABM) is an in silico technique that is being used in a variety of research areas such as in social sciences, economics and increasingly in biomedicine as an interdisciplinary tool to study the dynamics of complex systems. Here, we describe its applicability to integrative tumor biology research by introducing a multi-scale tumor modeling platform that understands brain cancer as a complex dynamic biosystem. We summarize significant findings of this work, and discuss both challenges and future directions for ABM in the field of cancer research.

  6. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points: points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov-regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning and parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  7. Agent-Based Chemical Plume Tracing Using Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zarzhitsky, Dimitri; Spears, Diana; Thayer, David; Spears, William

    2004-01-01

    This paper presents a rigorous evaluation of a novel, distributed chemical plume tracing algorithm. The algorithm is a combination of the best aspects of the two most popular predecessors for this task. Furthermore, it is based on solid, formal principles from the field of fluid mechanics. The algorithm is applied by a network of mobile sensing agents (e.g., robots or micro-air vehicles) that sense the ambient fluid velocity and chemical concentration, and calculate derivatives. The algorithm drives the robotic network to the source of the toxic plume, where measures can be taken to disable the source emitter. This work is part of a much larger effort in research and development of a physics-based approach to developing networks of mobile sensing agents for monitoring, tracking, reporting and responding to hazardous conditions.
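    The agents described above sense local fluid velocity and chemical concentration and compute derivatives to steer toward the emitter. The sketch below is not the paper's fluxotaxis algorithm (which uses the divergence of the chemical mass flux estimated across the robot network); it is a simplified single-agent step combining gradient ascent on concentration with upstream motion, with all names and the step rule illustrative.

```python
import numpy as np

def plume_step(pos, conc_grad, fluid_vel, step=0.1):
    """Simplified plume-tracing step: move along the chemical gradient
    (chemotaxis) and against the local fluid velocity (anemotaxis),
    normalized to a fixed step length."""
    direction = conc_grad - fluid_vel
    n = np.linalg.norm(direction)
    return pos if n == 0 else pos + step * direction / n
```

In the multi-robot setting, neighbors' measurements supply the finite-difference derivative estimates that a single sensor cannot provide.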

  8. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system, including a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  9. Summary of the NICHD-BPCA Pediatric Formulation Initiatives Workshop-Pediatric Biopharmaceutics Classification System (PBCS) Working Group

    PubMed Central

    Abdel-Rahman, Susan; Amidon, Gordon L.; Kaul, Ajay; Lukacova, Viera; Vinks, Alexander A.; Knipp, Gregory

    2012-01-01

    2) an incomplete understanding of age-based changes in GI, liver and kidney physiology; 3) a clear need to better understand age-based intestinal permeability and the fraction absorbed required to develop the PBCS; 4) a clear need for the development and organization of pediatric tissue biobanks to serve as a source for ontogenic research; and 5) a lack of published literature on age-based pediatric pharmacokinetics needed to build Physiologically- and Population-Based Pharmacokinetic (PBPK) databases. Conclusions: To begin the process of establishing a PBPK model, ten pediatric therapeutic agents were selected (based on their adult BCS classifications); those agents should be targeted for additional research in the future. The PBCS working group also identified several areas where a greater emphasis on research is needed to enable the development of a PBCS. PMID:23149009

  10. iCrowd: agent-based behavior modeling and crowd simulator

    NASA Astrophysics Data System (ADS)

    Kountouriotis, Vassilios I.; Paterakis, Manolis; Thomopoulos, Stelios C. A.

    2016-05-01

    Initially designed in the context of the TASS (Total Airport Security System) FP-7 project, the Crowd Simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications at N.C.S.R. Demokritos, has evolved into a complete domain-independent agent-based behavior simulator with an emphasis on crowd behavior and building evacuation simulation. Under continuous development, it reflects an effort to implement a modern, multithreaded, data-oriented simulation engine employing latest state-of-the-art programming technologies and paradigms. It is based on an extensible architecture that separates core services from the individual layers of agent behavior, offering a concrete simulation kernel designed for high-performance and stability. Its primary goal is to deliver an abstract platform to facilitate implementation of several Agent-Based Simulation solutions with applicability in several domains of knowledge, such as: (i) Crowd behavior simulation during [in/out] door evacuation. (ii) Non-Player Character AI for Game-oriented applications and Gamification activities. (iii) Vessel traffic modeling and simulation for Maritime Security and Surveillance applications. (iv) Urban and Highway Traffic and Transportation Simulations. (v) Social Behavior Simulation and Modeling.

  11. Agent-based modeling as a tool for program design and evaluation.

    PubMed

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Using Agent-Based Technologies to Enhance Learning in Educational Games

    ERIC Educational Resources Information Center

    Tumenayu, Ogar Ofut; Shabalina, Olga; Kamaev, Valeriy; Davtyan, Alexander

    2014-01-01

    Recent research has shown that educational games positively motivate learning. However, there is little evidence that they can trigger learning to a large extent if the game-play is supported by additional activities. We aim to support educational games development with an Agent-Based Technology (ABT) by using intelligent pedagogical agents that…

  13. Unsilencing Critical Conversations in Social-Studies Teacher Education Using Agent-Based Modeling

    ERIC Educational Resources Information Center

    Hostetler, Andrew; Sengupta, Pratim; Hollett, Ty

    2018-01-01

    In this article, we argue that when complex sociopolitical issues such as ethnocentrism and racial segregation are represented as complex, emergent systems using agent-based computational models (in short agent-based models or ABMs), discourse about these representations can disrupt social studies teacher candidates' dispositions of teaching…

  14. A fingerprint classification algorithm based on combination of local and global information

    NASA Astrophysics Data System (ADS)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply reduce the number of candidates for fingerprint matching and thus improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection typically considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm that combines the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. A global orientation model is then adopted to measure the reliability of the singular-point group. Finally, the local quality and global reliability are combined with weights to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  15. SoFoCles: feature filtering for microarray classification based on gene ontology.

    PubMed

    Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A

    2010-02-01

    Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

  16. Hydrologic classification of rivers based on cluster analysis of dimensionless hydrologic signatures: Applications for environmental instream flows

    NASA Astrophysics Data System (ADS)

    Praskievicz, S. J.; Luo, C.

    2017-12-01

    Classification of rivers is useful for a variety of purposes, such as generating and testing hypotheses about watershed controls on hydrology, predicting hydrologic variables for ungaged rivers, and setting goals for river management. In this research, we present a bottom-up (based on machine learning) river classification designed to investigate the underlying physical processes governing rivers' hydrologic regimes. The classification was developed for the entire state of Alabama, based on 248 United States Geological Survey (USGS) stream gages that met criteria for length and completeness of records. Five dimensionless hydrologic signatures were derived for each gage: slope of the flow duration curve (indicator of flow variability), baseflow index (ratio of baseflow to average streamflow), rising limb density (number of rising limbs per unit time), runoff ratio (ratio of long-term average streamflow to long-term average precipitation), and streamflow elasticity (sensitivity of streamflow to precipitation). We used a Bayesian clustering algorithm to classify the gages, based on the five hydrologic signatures, into distinct hydrologic regimes. We then used classification and regression trees (CART) to predict each gaged river's membership in different hydrologic regimes based on climatic and watershed variables. Using existing geospatial data, we applied the CART analysis to classify ungaged streams in Alabama, with the National Hydrography Dataset Plus (NHDPlus) catchment (average area 3 km2) as the unit of classification. The results of the classification can be used for meeting management and conservation objectives in Alabama, such as developing statewide standards for environmental instream flows. Such hydrologic classification approaches are promising for contributing to process-based understanding of river systems.
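    As an illustration of the dimensionless signatures listed above, the sketch below computes three of them from a daily streamflow series. This is a hypothetical minimal version: the filter constant and percentile choices are common conventions from the hydrology literature, not necessarily the exact definitions the authors used.

```python
import math

def runoff_ratio(flow, precip):
    """Ratio of long-term average streamflow to long-term average
    precipitation (both as depth per day, e.g. mm/day)."""
    return (sum(flow) / len(flow)) / (sum(precip) / len(precip))

def baseflow_index(flow, alpha=0.925):
    """Ratio of baseflow to total streamflow, separated here with a
    simple one-parameter recursive digital filter (the paper's exact
    separation method may differ)."""
    quick = 0.0
    base_total = 0.0
    for i, q in enumerate(flow):
        if i > 0:
            quick = alpha * quick + 0.5 * (1 + alpha) * (q - flow[i - 1])
        qb = q - max(quick, 0.0)          # baseflow = total minus quickflow
        base_total += min(max(qb, 0.0), q)  # constrain to [0, q]
    return base_total / sum(flow)

def fdc_slope(flow):
    """Slope of the flow duration curve between the 33rd and 66th
    exceedance percentiles (one common formulation)."""
    ranked = sorted(flow, reverse=True)
    q33 = ranked[int(0.33 * len(ranked))]
    q66 = ranked[int(0.66 * len(ranked))]
    return (math.log(q33) - math.log(q66)) / (0.66 - 0.33)
```

    A constant hydrograph gives a baseflow index of 1 and a flow-duration-curve slope of 0, which is a quick sanity check on the definitions.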

  17. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates two variations: clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying a wavelet transform. Three different feature sets are proposed. The first is a spatio-temporal distance set capturing the distances between different parts of the human body (such as the feet, knees, hands, shoulders, and overall height) during one gait cycle. The second and third feature sets are constructed from the approximation and detail (non-approximation) coefficients of the human body, respectively. To extract these two feature sets, we divide the human body into upper and lower parts based on the golden ratio proportion. We adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is then reduced using the Fisher score as a feature-selection method to optimize the discriminating significance of the features. Finally, k-nearest neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
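    The Fisher-score feature-selection step described in this record can be sketched as follows. This is a minimal illustration of the ranking criterion only; the wavelet-based feature extraction itself is not reproduced, and the function names are hypothetical.

```python
def fisher_score(X, y):
    """Fisher score of each feature: between-class scatter of the class
    means divided by the summed within-class variance."""
    n, d = len(X), len(X[0])
    classes = sorted(set(y))
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mu = sum(col) / n
        num = den = 0.0
        for c in classes:
            vals = [X[i][j] for i in range(n) if y[i] == c]
            mc = sum(vals) / len(vals)
            var = sum((v - mc) ** 2 for v in vals) / len(vals)
            num += len(vals) * (mc - mu) ** 2
            den += len(vals) * var
        scores.append(num / den if den > 0 else 0.0)
    return scores

def top_k_features(X, y, k):
    """Indices of the k features with the highest Fisher score."""
    scores = fisher_score(X, y)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
```

    A feature whose class means coincide scores zero and is dropped first, which matches the intuition of keeping only discriminative dimensions before the k-nearest-neighbor step.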

  18. Agent-Based Models in Empirical Social Research

    ERIC Educational Resources Information Center

    Bruch, Elizabeth; Atwell, Jon

    2015-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first…

  19. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins; these methods use binary trees to represent protein classification. In this paper, we propose a new protein classification method, whereby theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835
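    As a toy illustration of classifying proteins "according to their connections" in an undirected network, the sketch below groups nodes into connected components via breadth-first search. The paper's actual grouping criterion is information-based and more elaborate; this only shows the network representation.

```python
from collections import deque

def connected_components(nodes, edges):
    """Group the nodes of an undirected network into connected
    components via breadth-first search."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(sorted(comp))
    return components
```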

  20. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  1. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers is exposed to security threats against its computational processes. The authors have developed a unified agent algorithm for controlling the operation of computing-network nodes, with networked PCs serving as the computing nodes. The proposed multi-agent control system makes it possible to quickly harness the processing power of the computers of any existing network to solve large tasks by forming a distributed computing system. Agents deployed across the network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the system according to the computing power of the machines on the network. The number of computers can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects erroneous results.

  2. GIS/RS-based Rapid Reassessment for Slope Land Capability Classification

    NASA Astrophysics Data System (ADS)

    Chang, T. Y.; Chompuchan, C.

    2014-12-01

    Farmland resources in Taiwan are limited because about 73% of the island is mountainous and slope land. Moreover, rapid urbanization and a dense population have led to highly developed flat areas, so the utilization of slope land for agriculture is increasingly needed. In 1976, the "Slope Land Conservation and Utilization Act" was promulgated to regulate slope land utilization. Accordingly, slope land capability is categorized into Classes I-VI according to four criteria: average land slope, effective soil depth, degree of soil erosion, and parent rock. Classes I-IV are suitable for cultivation and pasture, whereas Class V should be used for forestry and Class VI should be conservation land requiring intensive conservation practices. Field surveys are conducted to categorize each land unit under this scheme, and landowners may not use land beyond its capability limitation. In the last decade, typhoons and landslides have frequently devastated Taiwan, making rapid post-disaster reassessment of the slope land capability classification necessary; however, the large scale of disasters on slope land constrains field investigation. This study focused on using satellite remote sensing and GIS as a rapid re-evaluation method. The Chenyulan watershed in Nantou County, Taiwan was selected as the case study area. Grid-based slope derivation, the topographic wetness index (TWI), and USLE soil loss calculation were used to classify slope land capability. The results show that GIS-based classification gives an overall accuracy of 68.32%. In addition, the post-disaster areas of Typhoon Morakot in 2009, interpreted from SPOT satellite imagery, were suggested for reclassification as conservation land. These tools perform better for large-coverage post-disaster updates of the slope land capability classification and reduce the time, manpower, and material resources required for field investigation.
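    The USLE soil-loss step mentioned above multiplies six empirical factors. The sketch below shows that product plus a toy capability-class rule; the thresholds and the class-assignment logic are purely illustrative, not the statutory Taiwanese criteria.

```python
def usle_soil_loss(R, K, L, S, C, P):
    """Universal Soil Loss Equation: A = R * K * L * S * C * P
    (average annual soil loss; units depend on the factor system used)."""
    return R * K * L * S * C * P

def capability_class(slope_pct, soil_loss, loss_limit=10.0):
    """Toy classification: steep slopes or excessive predicted soil loss
    push a land unit toward forestry (Class V) or conservation (Class VI).
    Thresholds here are illustrative, not the statutory ones."""
    if slope_pct > 55 or soil_loss > 3 * loss_limit:
        return "VI"
    if slope_pct > 40 or soil_loss > loss_limit:
        return "V"
    return "I-IV"
```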

  3. Probabilistic multiple sclerosis lesion classification based on modeling regional intensity variability and local neighborhood information.

    PubMed

    Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal

    2015-05-01

    In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods, against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements are shown in Dice values, in voxel-based and lesion-based sensitivity values, and in positive predictive rates when the proposed method is compared to the method without regional information and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.
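    The three comparison metrics named above have simple set-based definitions when voxels are treated as label sets; a minimal sketch, with predicted and reference voxels given as collections of voxel identifiers:

```python
def dice(pred, truth):
    """Dice overlap between predicted and reference voxel sets:
    2|A n B| / (|A| + |B|)."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def sensitivity(pred, truth):
    """Fraction of reference voxels recovered: |A n B| / |B|."""
    pred, truth = set(pred), set(truth)
    return len(pred & truth) / len(truth)

def positive_predictive_rate(pred, truth):
    """Fraction of predicted voxels that are correct: |A n B| / |A|."""
    pred, truth = set(pred), set(truth)
    return len(pred & truth) / len(pred)
```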

  4. The Agent-based Approach: A New Direction for Computational Models of Development.

    ERIC Educational Resources Information Center

    Schlesinger, Matthew; Parisi, Domenico

    2001-01-01

    Introduces the concepts of online and offline sampling and highlights the role of online sampling in agent-based models of learning and development. Compares the strengths of each approach for modeling particular developmental phenomena and research questions. Describes a recent agent-based model of infant causal perception. Discusses limitations…

  5. Sedaxicenes: potential new antifungal ferrocene-based agents?

    PubMed

    Rubbiani, R; Blacque, O; Gasser, G

    2016-04-21

    Fungal infections are a group of diseases spread all over the world with an extremely high morbidity. Worryingly, although several pathogenic fungi were found to develop resistance towards traditional therapy, research towards the discovery of novel antimycotic agents is very limited. Considering the promising results obtained with the ferrocene-based drug candidates Ferroquine and Ferrocifen as antimalarial and anticancer drug candidates, respectively, we envisaged derivatizing the organic scaffold of a new broad-spectrum fungicide, namely sedaxane, with a ferrocenyl moiety in order to obtain new metal-based antifungal agents. The new ferrocenyl sedaxane derivatives called herein Sedaxicenes (, and ) were characterized using different analytical techniques and the structures were confirmed by X-ray crystallography. As expected for antimycotic agents, , and were found to have a low or even no toxicity towards human cells (IC50 > 100 μM). Interestingly, while the parent drug did not display any mycotoxicity (EC50 > 100 μM), complex was found to have some antifungal activity with an IC50 value of 43 μM under the same experimental conditions. In order to investigate the possible redox-mediated mode of action of , we synthesized the ruthenocene analogue of , namely . Ruthenocene is known to have a completely different electrochemical behaviour from ferrocene although both the compounds are isostructural. As anticipated, complex was found to induce an increase of the reactive oxygen species level in S. cerevisiae, contrary to its analogue and to the parent compound sedaxane.

  6. Characterization and classification of lupus patients based on plasma thermograms

    PubMed Central

    Chaires, Jonathan B.; Mekmaysy, Chongkham S.; DeLeeuw, Lynn; Sivils, Kathy L.; Harley, John B.; Rovin, Brad H.; Kulasekera, K. B.; Jarjour, Wael N.

    2017-01-01

    Objective Plasma thermograms (thermal stability profiles of blood plasma) are being utilized as a new diagnostic approach for clinical assessment. In this study, we investigated the ability of plasma thermograms to classify systemic lupus erythematosus (SLE) patients versus non SLE controls using a sample of 300 SLE and 300 control subjects from the Lupus Family Registry and Repository. Additionally, we evaluated the heterogeneity of thermograms along age, sex, ethnicity, concurrent health conditions and SLE diagnostic criteria. Methods Thermograms were visualized graphically for important differences between covariates and summarized using various measures. A modified linear discriminant analysis was used to segregate SLE versus control subjects on the basis of the thermograms. Classification accuracy was measured based on multiple training/test splits of the data and compared to classification based on SLE serological markers. Results Median sensitivity, specificity, and overall accuracy based on classification using plasma thermograms was 86%, 83%, and 84% compared to 78%, 95%, and 86% based on a combination of five antibody tests. Combining thermogram and serology information together improved sensitivity from 78% to 86% and overall accuracy from 86% to 89% relative to serology alone. Predictive accuracy of thermograms for distinguishing SLE and osteoarthritis / rheumatoid arthritis patients was comparable. Both gender and anemia significantly interacted with disease status for plasma thermograms (p<0.001), with greater separation between SLE and control thermograms for females relative to males and for patients with anemia relative to patients without anemia. Conclusion Plasma thermograms constitute an additional biomarker which may help improve diagnosis of SLE patients, particularly when coupled with standard diagnostic testing. 
Differences in thermograms according to patient sex, ethnicity, clinical and environmental factors are important considerations for

  7. A drone detection with aircraft classification based on a camera array

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong

    2018-03-01

    In recent years, the rapid rise in popularity of drones has brought a range of security issues to sensitive areas such as airports and military installations. Fine-grained classification, providing fast and accurate detection of different drone models, is one of the important ways to address these problems. The main challenges of fine-grained classification are that (1) there are many types of drones, with increasingly complex and diverse models, and (2) recognition must be fast and accurate, yet existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately perform fine-grained drone recognition from the HD cameras.

  8. Study on the E-commerce platform based on the agent

    NASA Astrophysics Data System (ADS)

    Fu, Ruixue; Qin, Lishuan; Gao, Yinmin

    2011-10-01

    To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform system based on agents and ontologies is introduced, comprising three major types of agents, an ontology, and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and the agility of the electronic commerce platform. To illustrate the architecture, a simulation was carried out, and the results imply that the architecture provides an efficient method for designing and implementing a flexible, distributed, open, and intelligent electronic commerce platform system that solves the problem of dynamic integration in e-commerce. The objective of this paper is to illustrate the architecture of the electronic commerce platform system and how agents and ontologies support it.

  9. Agent based modeling of the coevolution of hostility and pacifism

    NASA Astrophysics Data System (ADS)

    Dalmagro, Fermin; Jimenez, Juan

    2015-01-01

    We propose a model based on a population of agents whose states represent either hostile or peaceful behavior. Randomly selected pairs of agents interact according to a variation of the Prisoner's Dilemma game, and the probabilities that the agents behave aggressively or not are constantly updated by the model, so that the agents that remain in the game are those with the highest fitness. We show that the population of agents oscillates between generalized conflict and global peace, without either state becoming stable. We then use this model to explain some of the emergent behaviors in collective conflicts, by comparing the simulated results with empirical data obtained from social systems. In particular, using public data reports we show how the model precisely reproduces interesting quantitative characteristics of diverse types of armed conflicts, public protests, riots and strikes.
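    The interaction loop described above can be sketched as follows, with standard Prisoner's Dilemma payoffs and a simple imitate-the-fittest update. The payoff values, pairing scheme, and update rule here are illustrative assumptions, not the authors' exact model.

```python
import random

# Standard Prisoner's Dilemma payoffs for the row player (T > R > P > S).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def simulate(n_agents=50, generations=200, seed=1):
    """Each generation, agents are paired at random and play one round;
    each agent defects ('hostile') with its own probability p. The
    least-fit agent then imitates the fittest agent's hostility
    probability, with noise. Returns mean hostility per generation."""
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]
    history = []
    for _ in range(generations):
        fitness = [0.0] * n_agents
        order = list(range(n_agents))
        rng.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):
            move_a = "D" if rng.random() < p[a] else "C"
            move_b = "D" if rng.random() < p[b] else "C"
            fitness[a] += PAYOFF[(move_a, move_b)]
            fitness[b] += PAYOFF[(move_b, move_a)]
        worst = min(range(n_agents), key=fitness.__getitem__)
        best = max(range(n_agents), key=fitness.__getitem__)
        p[worst] = min(1.0, max(0.0, p[best] + rng.uniform(-0.1, 0.1)))
        history.append(sum(p) / n_agents)
    return history
```

    Plotting the returned history for different seeds is one way to look for the oscillation between conflict and peace that the abstract reports.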

  10. Surface impact on nanoparticle-based magnetic resonance imaging contrast agents

    PubMed Central

    Zhang, Weizhong; Liu, Lin; Chen, Hongmin; Hu, Kai; Delahunty, Ian; Gao, Shi; Xie, Jin

    2018-01-01

    Magnetic resonance imaging (MRI) is one of the most widely used diagnostic tools in the clinic. To improve imaging quality, MRI contrast agents, which can modulate local T1 and T2 relaxation times, are often injected prior to or during MRI scans. However, clinically used contrast agents, including Gd3+-based chelates and iron oxide nanoparticles (IONPs), afford mediocre contrast abilities. To address this issue, there has been extensive research on developing alternative MRI contrast agents with superior r1 and r2 relaxivities. These efforts are facilitated by the fast progress in nanotechnology, which allows for preparation of magnetic nanoparticles (NPs) with varied size, shape, crystallinity, and composition. Studies suggest that surface coatings can also largely affect T1 and T2 relaxations and can be tailored in favor of a high r1 or r2. However, the surface impact of NPs has been less emphasized. Herein, we review recent progress on developing NP-based T1 and T2 contrast agents, with a focus on the surface impact. PMID:29721097
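    The r1 and r2 relaxivities discussed above quantify how strongly an agent shortens relaxation times through the standard linear relation 1/T_obs = 1/T_dia + r·[CA]; a small sketch of that relation (function names are illustrative):

```python
def observed_rate(T_dia, relaxivity, conc):
    """Observed relaxation rate (s^-1): the diamagnetic rate plus
    relaxivity (mM^-1 s^-1) times agent concentration (mM):
    1/T_obs = 1/T_dia + r * [CA]."""
    return 1.0 / T_dia + relaxivity * conc

def observed_time(T_dia, relaxivity, conc):
    """Shortened relaxation time in the presence of the agent."""
    return 1.0 / observed_rate(T_dia, relaxivity, conc)
```

    A higher r1 or r2 at the same dose thus gives a larger change in relaxation rate, which is why the surface engineering reviewed here targets the relaxivities directly.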

  11. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has caused a huge amount of information to be included on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods. PMID:25136678

  13. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused a huge amount of information to be included on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
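    A toy version of the ACO feature-selection loop described in this record might look as follows. The `score` callback stands in for the classifier-accuracy evaluation the paper performs; all parameter values and the single-pheromone design are illustrative simplifications.

```python
import random

def aco_select(n_features, k, score, n_ants=10, n_iter=20,
               evaporation=0.1, seed=0):
    """Toy ant colony feature selection: each ant samples k features with
    probability proportional to pheromone; the best subset found so far
    is reinforced after each iteration. `score(subset) -> float` is a
    user-supplied evaluator (e.g. cross-validated accuracy)."""
    rng = random.Random(seed)
    tau = [1.0] * n_features          # one pheromone value per feature
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            feats = set()
            while len(feats) < k:     # roulette-wheel pick, one at a time
                r = rng.uniform(0, sum(tau))
                acc = 0.0
                for j, t in enumerate(tau):
                    acc += t
                    if r <= acc:
                        feats.add(j)
                        break
            s = score(feats)
            if s > best_score:
                best_subset, best_score = set(feats), s
        tau = [(1 - evaporation) * t for t in tau]   # evaporation
        for j in best_subset:                        # reinforcement
            tau[j] += 1.0
    return best_subset, best_score
```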

  14. A spatial web/agent-based model to support stakeholders' negotiation regarding land development.

    PubMed

    Pooyandeh, Majeed; Marceau, Danielle J

    2013-11-15

    Decision making in land management can be greatly enhanced if the perspectives of concerned stakeholders are taken into consideration. This often implies negotiation in order to reach an agreement based on the examination of multiple alternatives. This paper describes a spatial web/agent-based modeling system that was developed to support the negotiation process of stakeholders regarding land development in southern Alberta, Canada. This system integrates a fuzzy analytic hierarchy procedure within an agent-based model in an interactive visualization environment provided through a web interface to facilitate the learning and negotiation of the stakeholders. In the pre-negotiation phase, the stakeholders compare their evaluation criteria using linguistic expressions. Due to the uncertainty and fuzzy nature of such comparisons, a fuzzy Analytic Hierarchy Process is then used to prioritize the criteria. The negotiation starts by a development plan being submitted by a user (stakeholder) through the web interface. An agent called the proposer, which represents the proposer of the plan, receives this plan and starts negotiating with all other agents. The negotiation is conducted in a step-wise manner where the agents change their attitudes by assigning a new set of weights to their criteria. If an agreement is not achieved, a new location for development is proposed by the proposer agent. This process is repeated until a location is found that satisfies all agents to a certain predefined degree. To evaluate the performance of the model, the negotiation was simulated with four agents, one of which being the proposer agent, using two hypothetical development plans. The first plan was selected randomly; the other one was chosen in an area that is of high importance to one of the agents. 
While the agents managed to achieve an agreement about the location of the land development after three rounds of negotiation in the first scenario, seven rounds were required in the second
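    The criteria-prioritization step of the AHP used in this system can be illustrated with the crisp row geometric-mean approximation below. The paper's fuzzy AHP operates on linguistic/triangular comparisons rather than crisp ratios, so this is only a sketch of the underlying prioritization idea.

```python
import math

def ahp_priorities(M):
    """Approximate AHP priority vector via the row geometric-mean method.
    M[i][j] states how much more important criterion i is than j, with
    M a positive reciprocal matrix (M[j][i] = 1 / M[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]
```

    For a 2x2 matrix saying criterion 0 is three times as important as criterion 1, this yields weights of 0.75 and 0.25.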

  15. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
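As a rough illustration of the random-forest idea behind the object-based classifier, here is a dependency-free toy ensemble of depth-1 trees trained on bootstrap samples with random feature subsets. The actual study used full RF trees on per-object WV2 band statistics; every name and parameter below is illustrative, and the sketch handles binary labels only.

```python
import random

def best_stump(X, y, feats):
    """Exhaustively pick the best depth-1 split over a random feature subset."""
    best = None
    for f in feats:
        for t in sorted(set(x[f] for x in X)):
            for pos in (0, 1):   # which side of the threshold gets class `pos`
                acc = sum((pos if x[f] <= t else 1 - pos) == c
                          for x, c in zip(X, y))
                if best is None or acc > best[0]:
                    best = (acc, f, t, pos)
    return best[1:]

def fit_forest(X, y, n_trees=15, seed=0):
    """Random-forest-style ensemble: bootstrap sampling plus random subspaces."""
    rng = random.Random(seed)
    d = len(X[0])
    trees = []
    for _ in range(n_trees):
        boot = [rng.randrange(len(X)) for _ in X]              # bootstrap sample
        feats = rng.sample(range(d), max(1, int(d ** 0.5)))    # random subspace
        trees.append(best_stump([X[i] for i in boot],
                                [y[i] for i in boot], feats))
    return trees

def forest_predict(trees, x):
    votes = [pos if x[f] <= t else 1 - pos for f, t, pos in trees]
    return round(sum(votes) / len(votes))                      # majority vote
```

In an object-based setting each row of `X` would be a segment's aggregated band statistics rather than a single pixel's values.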

  16. Drunk driving detection based on classification of multivariate time series.

    PubMed

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent the multivariate time series, and a bottom-up algorithm was then employed to segment them. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%, indicating that drunk driving detection based on the analysis of multivariate time series is feasible and effective. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
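The segmentation step can be sketched as follows: a minimal, single-channel bottom-up piecewise linear representation that repeatedly merges the cheapest pair of adjacent segments and then emits the paper's (slope, time interval) feature pairs. The interpolation-based error measure and the threshold value are assumptions; the paper applies the procedure to multivariate series (lateral position and steering angle).

```python
def fit_error(ts, i, j):
    """Squared error of the straight line through (i, ts[i]) and (j, ts[j])."""
    n = j - i
    slope = (ts[j] - ts[i]) / n
    return sum((ts[i] + slope * k - ts[i + k]) ** 2 for k in range(n + 1))

def bottom_up(ts, max_error):
    """Greedily merge the cheapest adjacent segments until no merge is cheap enough."""
    bounds = list(range(len(ts)))            # finest segmentation: every point
    while len(bounds) > 2:
        costs = [fit_error(ts, bounds[k], bounds[k + 2])
                 for k in range(len(bounds) - 2)]
        k = min(range(len(costs)), key=costs.__getitem__)
        if costs[k] > max_error:
            break
        del bounds[k + 1]                    # merge the two segments around k
    return bounds

def segment_features(ts, bounds):
    """(slope, duration) of each linear piece: the paper's feature pair."""
    return [((ts[b] - ts[a]) / (b - a), b - a)
            for a, b in zip(bounds, bounds[1:])]
```

A ramp followed by a plateau, for example, collapses to two segments, one with slope 1 and one with slope 0.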

  17. Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications

    PubMed Central

    Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has been a hot issue in recent years. IoT users accumulate large amounts of data, and mining useful knowledge from them is a great challenge. Classification is an effective strategy for predicting the needs of users in IoT. However, many traditional rule-based classifiers cannot guarantee that every instance is covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B. Every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible but can also achieve high accuracy. PMID:24511304

  18. Classification based on pruning and double covered rule sets for the internet of things applications.

    PubMed

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has been a hot issue in recent years. IoT users accumulate large amounts of data, and mining useful knowledge from them is a great challenge. Classification is an effective strategy for predicting the needs of users in IoT. However, many traditional rule-based classifiers cannot guarantee that every instance is covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification method, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B. Every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible but can also achieve high accuracy.
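The double-coverage property at the heart of CDCR-P can be illustrated with a small sketch. The rule representation (attribute-value conditions) and the voting combination below are assumptions for illustration, not the paper's exact algorithm.

```python
def covers(rule, instance):
    """A rule fires when all of its attribute-value conditions hold."""
    conds, _label = rule
    return all(instance.get(a) == v for a, v in conds.items())

def double_covered(instances, set_a, set_b):
    """CDCR-P's guarantee: every instance matches >= 1 rule in A and >= 1 in B."""
    return all(any(covers(r, x) for r in set_a) and
               any(covers(r, x) for r in set_b) for x in instances)

def classify(x, set_a, set_b):
    """Vote with the labels of all matching rules from both rule sets."""
    votes = [lab for conds, lab in set_a + set_b if covers((conds, lab), x)]
    if not votes:
        return None
    return max(set(votes), key=votes.count)
```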

  19. Multi-class SVM model for fMRI-based classification and grading of liver fibrosis

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.

    2010-03-01

    We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a binary one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded 98.2% separation accuracy between healthy and low-grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than those of existing image-based methods, which can only discriminate between healthy and high-grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
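The one-against-all decomposition can be sketched as follows. To keep the example dependency-free, a simple perceptron stands in for the SVM base learner, so this illustrates only the decomposition (one binary model per fibrosis grade, prediction by maximum score), not the paper's classifier.

```python
def train_perceptron(X, y, epochs=50):
    """Binary linear classifier; y in {-1, +1}; bias folded in as a last weight."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for x, t in zip(X, y):
            s = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
            if t * s <= 0:                      # mistake-driven update
                w = [wi + t * xi for wi, xi in zip(w, x + [1.0])]
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x + [1.0]))

def train_one_vs_all(X, y):
    """One binary model per class: class c against the rest."""
    return {c: train_perceptron(X, [1 if t == c else -1 for t in y])
            for c in set(y)}

def ova_predict(models, x):
    """Assign the class whose one-vs-rest model scores highest."""
    return max(models, key=lambda c: score(models[c], x))
```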

  20. Hypercompetitive Environments: An Agent-based model approach

    NASA Astrophysics Data System (ADS)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows business strategies to be modeled from a bottom-up perspective (understood as resulting from the repeated and local interaction of economic agents) without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are at the leading edge of this attempt.

  1. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process.
The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification stages.

  2. A Partial Least Squares Based Procedure for Upstream Sequence Classification in Prokaryotes.

    PubMed

    Mehmood, Tahir; Bohlin, Jon; Snipen, Lars

    2015-01-01

    The upstream region of coding genes is important for several reasons, for instance for locating transcription factor binding sites and start site initiation in genomic DNA. Motivated by a recently conducted study, where a multivariate approach was successfully applied to coding sequence modeling, we have introduced a partial least squares (PLS) based procedure for the classification of true upstream prokaryotic sequences from background upstream sequences. The upstream sequences of coding genes conserved across genomes were considered in the analysis, where the conserved coding genes were found by applying the pan-genome concept to each considered prokaryotic species. PLS uses a position specific scoring matrix (PSSM) to study the characteristics of the upstream region. Results obtained by the PLS-based method were compared with the Gini importance of random forest (RF) and with support vector machine (SVM), a widely used method for sequence classification. The upstream sequence classification performance was evaluated using cross validation; the suggested approach identifies the prokaryotic upstream region significantly better than RF (p-value < 0.01) and SVM (p-value < 0.01). Further, the proposed method also produced results that concurred with known biological characteristics of the upstream region.
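A minimal sketch of PLS-based classification in the spirit of the paper: NIPALS-style latent components regress a 0/1 class indicator (true upstream vs background) on feature rows, which in the paper would be PSSM-derived summaries. The two-component default, the early-stopping guard, and all names are assumptions.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1: X rows are feature vectors, y a 0/1 class indicator."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                     # weight vector from X-y covariance
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            break                         # y residual already fully explained
        w = w / nw
        t = Xr @ w                        # scores for this latent component
        p = Xr.T @ t / (t @ t)
        q = (yr @ t) / (t @ t)
        Xr = Xr - np.outer(t, p)          # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.inv(P.T @ W) @ np.array(Q)   # regression coefficients
    return B, Xm, ym

def pls1_predict(model, X):
    B, Xm, ym = model
    return (X - Xm) @ B + ym              # threshold at 0.5 for classification
```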

  3. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from a need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented from the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features, and Genetic Algorithm based feature selection is performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and the selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to better performance, with a classification accuracy of 96% versus 94% for the genetic algorithm. Copyright © 2015 Elsevier B.V. All rights reserved.
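The genetic-algorithm feature selection can be sketched as an elitist search over boolean feature masks. In the paper the fitness of a mask would be the classification accuracy of the neural network trained on the selected features; the toy fitness used in the usage example below is a stand-in for that, and the population and mutation settings are assumptions.

```python
import random

def ga_select(n_feats, fitness, pop_size=20, gens=30, seed=1):
    """Elitist GA over boolean feature masks; `fitness` scores a mask."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_feats)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:pop_size // 2]           # keep the better half unchanged
        pop = list(elites)
        while len(pop) < pop_size:
            a, b = rng.sample(elites, 2)
            cut = rng.randrange(1, n_feats)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(n_feats)
            child[i] = not child[i]            # point mutation
            pop.append(child)
    return max(pop, key=fitness)
```

For example, a fitness that rewards two informative features while penalizing mask size steers the search toward small, discriminative subsets.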

  4. Case-based statistical learning applied to SPECT image classification

    NASA Astrophysics Data System (ADS)

    Górriz, Juan M.; Ramírez, Javier; Illán, I. A.; Martínez-Murcia, Francisco J.; Segovia, Fermín.; Salas-Gonzalez, Diego; Ortiz, A.

    2017-03-01

    Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications, and biomedical applications for diagnosis and prognosis. This paper deals with the topic of learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model or learner, and a testing set to which the learner is applied in order to predict the outcome for new unseen patterns. Both processes are usually kept completely separate to avoid over-fitting and because, in practice, the unseen new objects (testing set) have unknown outcomes. However, the outcome takes one of a discrete set of values, as in the binary diagnosis problem. Thus, assumptions on these outcome values can be established to obtain the most likely prediction model at the training stage, which could improve the overall classification accuracy on the testing set, or at least keep its performance at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed which combines hypothesis testing from a discrete set of expected outcomes and a cross-validated classification stage.

  5. Tongue Images Classification Based on Constrained High Dispersal Network.

    PubMed

    Meng, Dan; Cao, Guitao; Duan, Ye; Zhu, Minghua; Tu, Liping; Xu, Dong; Xu, Jiatuo

    2017-01-01

    Computer-aided tongue diagnosis has great potential to play an important role in traditional Chinese medicine (TCM). However, the majority of existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  6. Agent-Based Modeling in Systems Pharmacology.

    PubMed

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogeneous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.

  7. Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.

    PubMed

    Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko

    2016-01-01

    Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
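A lattice-based cancer stem cell ABM of the kind described can be sketched in a few lines: stem cells divide symmetrically with a small probability, progenitors carry a limited division capacity and die when it is exhausted, and daughters occupy free lattice neighbors (contact inhibition). All parameter values and names below are illustrative, not the chapter's calibrated settings.

```python
import random

def neighbors(pos, size):
    """Von Neumann neighborhood on a torus."""
    x, y = pos
    return [((x + dx) % size, (y + dy) % size)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def simulate(steps=50, size=40, p_sym=0.1, p_div=0.6, rho=8, seed=0):
    """Grow a tumor from a single stem cell; returns the occupied lattice."""
    rng = random.Random(seed)
    grid = {(size // 2, size // 2): ('S', None)}   # one stem cell seeds the tumor
    for _ in range(steps):
        for pos in list(grid):
            if pos not in grid or rng.random() > p_div:
                continue
            kind, cap = grid[pos]
            free = [n for n in neighbors(pos, size) if n not in grid]
            if not free:
                continue                            # contact inhibition: no room
            if kind == 'S':
                # symmetric division yields a new stem cell, otherwise a progenitor
                child = ('S', None) if rng.random() < p_sym else ('P', rho)
            else:
                if cap == 0:
                    del grid[pos]                   # exhausted progenitor dies
                    continue
                grid[pos] = ('P', cap - 1)
                child = ('P', cap - 1)
            grid[rng.choice(free)] = child
    return grid
```

Sweeping `p_sym` and `rho` reproduces the qualitative stem-fraction and growth-rate behaviors such models are typically used to study.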

  8. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machines (SVM). The goal is to benefit from the advantages of both: the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm. Together they construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated from the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors present a high-quality source of discriminative knowledge that can substantially impact the prediction power of SVM and associative classification techniques, and that they provide users with greater understandability and interpretability. We used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data, and we describe how this combination provides biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model.
From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM algorithm.

  9. [Evaluation of traditional pathological classification at molecular classification era for gastric cancer].

    PubMed

    Yu, Yingyan

    2014-01-01

    Histopathological classification occupies a pivotal position in both basic research and the clinical diagnosis and treatment of gastric cancer. Currently, different classification systems are used in basic science and in clinical application. In the medical literature, different classifications are used, including the Lauren and WHO systems, which has confused many researchers. The Lauren classification was proposed half a century ago but is still used worldwide; it has the advantages of being simple, easy to apply, and prognostically significant. The WHO classification scheme is better than the Lauren classification in that it is continuously revised according to progress in gastric cancer research, and it is routinely used in the clinical and pathological diagnosis of common scenarios. Along with progress in genomics, transcriptomics, proteomics, and metabolomics research, the molecular classification of gastric cancer has become a current hot topic. The traditional therapeutic approach based on the phenotypic characteristics of gastric cancer will most likely be replaced by one based on gene variation. Gene-targeted therapy against the same molecular variation seems more reasonable than traditional chemical treatment based on the same morphological change.

  10. Semantic Classification of Diseases in Discharge Summaries Using a Context-aware Rule-based Classifier

    PubMed Central

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T.

    2009-01-01

    Objective Automated and disease-specific classification of textual clinical discharge summaries is of great importance in human life science, as it helps physicians to conduct medical studies by providing statistically relevant data for analysis. This can be further facilitated if, at the labeling of discharge summaries, semantic labels are also extracted from text, such as whether a given disease is present, absent, questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves the semantic classification task. Design The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in subsequent steps. First, some misleading parts are removed from the text; then the text is partitioned into positive, negative, and uncertain context segments; finally, a sequence of binary classifiers is applied to assign the appropriate semantic labels. Measurement For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures, F1-macro and F1-micro. Results On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, achieving an F1-macro of 0.80 for the textual task and 0.67 for the intuitive task, and obtained second place on the textual and first place on the intuitive subtask of the challenge. Conclusions The authors show in the paper that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques if the training data are limited and some semantic labels are very sparse. PMID:19390101
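The context-partitioning idea can be sketched as a tiny rule-based labeler: sentences mentioning the disease are checked against negation and uncertainty cue lists to produce one of the four semantic labels. The cue lists and precedence below are illustrative, not the authors' actual rules.

```python
import re

NEGATION = ("no ", "denies ", "negative for ", "without ", "free of ")
UNCERTAIN = ("possible ", "questionable ", "rule out ", "suspected ")

def classify_disease(text, disease):
    """Assign 'present', 'absent', 'questionable', or 'unmentioned'."""
    label = "unmentioned"
    for sentence in re.split(r"[.;]\s*", text.lower()):
        if disease not in sentence:
            continue
        if any(cue in sentence for cue in NEGATION):
            label = "absent"
        elif any(cue in sentence for cue in UNCERTAIN):
            if label != "absent":
                label = "questionable"
        else:
            return "present"   # an unhedged mention wins outright
    return label
```

A full system would also strip misleading sections (e.g., family history) before this step, as the abstract notes.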

  11. Classification of Land Cover and Land Use Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Rottensteiner, Franz; Heipke, Christian

    2018-04-01

    Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7 % and 77.4 % can be achieved for land cover and land use, respectively. The land cover classification also makes a positive contribution to the land use classification.

  12. Quad-polarized synthetic aperture radar and multispectral data classification using classification and regression tree and support vector machine-based data fusion system

    NASA Astrophysics Data System (ADS)

    Bigdeli, Behnaz; Pahlavani, Parham

    2017-01-01

    Interpretation of synthetic aperture radar (SAR) data is difficult because the geometry and spectral range of SAR differ from those of optical imagery. At the same time, SAR imaging can complement multispectral (MS) optical remote sensing techniques because it does not depend on solar illumination and weather conditions. This study presents a multisensor fusion of SAR and MS data based on the use of classification and regression tree (CART) and support vector machine (SVM) classifiers in a decision fusion system. First, different feature extraction strategies were applied to the SAR and MS data to produce additional spectral and textural information. To overcome redundancy and correlation between features, an intrinsic dimension estimation method based on the noise-whitened Harsanyi, Farrand, and Chang method determines the proper dimension of the features. Then, principal component analysis and independent component analysis were applied to the stacked feature space of the two datasets. Afterward, SVM and CART classified each reduced feature space. Finally, a fusion strategy was utilized to fuse the classification results. To show the effectiveness of the proposed methodology, single classifications of each dataset were compared with the obtained results. A coregistered Radarsat-2 and WorldView-2 dataset from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that combining SAR data with optical data using the proposed methodology improves the classification results for most of the classes. The proposed fusion method provided overall accuracies of approximately 93.24% and 95.44% for two different areas of the data.
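The final decision-fusion step can be sketched as a per-pixel weighted vote over the label maps produced by the individual classifiers (here, the maps would come from the CART and SVM runs). The weighting scheme is an assumption; the paper does not specify its exact combination rule.

```python
from collections import Counter

def decision_fusion(maps, weights=None):
    """Per-pixel weighted majority vote over class maps from several classifiers."""
    weights = weights or [1.0] * len(maps)
    fused = []
    for labels in zip(*maps):            # labels of one pixel across classifiers
        votes = Counter()
        for lab, w in zip(labels, weights):
            votes[lab] += w
        fused.append(votes.most_common(1)[0][0])
    return fused
```

Weights could, for instance, reflect each classifier's validation accuracy, so a more reliable classifier dominates disagreements.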

  13. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real time simulation of system failure processes, so that the system level reliability modeling would constitute inferences from checking the status of component level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of the PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  14. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    NASA Astrophysics Data System (ADS)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprises continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of the HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of linear and parametric dependency measures to take care of both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created corresponding to the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of three of the most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
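The MI-based spectral segmentation can be sketched as follows: estimate histogram mutual information between adjacent bands and start a new contiguous band group whenever the dependency drops below a threshold. The bin count and threshold values are assumptions; the paper's grouping criterion may differ in detail.

```python
from collections import Counter
from math import log2

def mutual_info(a, b, bins=8):
    """Histogram-based mutual information between two bands (lists of pixel values)."""
    def quantize(v):
        lo, hi = min(v), max(v)
        span = (hi - lo) or 1.0
        return [min(bins - 1, int((x - lo) / span * bins)) for x in v]
    qa, qb = quantize(a), quantize(b)
    n = len(a)
    joint, pa, pb = Counter(zip(qa, qb)), Counter(qa), Counter(qb)
    return sum(c / n * log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in joint.items())

def segment_bands(bands, threshold=0.5):
    """Start a new contiguous group whenever adjacent-band MI falls below threshold."""
    groups = [[0]]
    for k in range(1, len(bands)):
        if mutual_info(bands[k - 1], bands[k]) >= threshold:
            groups[-1].append(k)
        else:
            groups.append([k])
    return groups
```

In the S-SAE pipeline, one autoencoder would then be trained per band group instead of one over the full spectrum.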

  15. The ITE Land classification: Providing an environmental stratification of Great Britain.

    PubMed

    Bunce, R G; Barr, C J; Gillespie, M K; Howard, D C

    1996-01-01

    The surface of Great Britain (GB) varies continuously in land cover from one area to another. The objective of any environmentally based land classification is to produce classes that match the patterns that are present by helping to define clear boundaries. The more appropriate the analysis and data used, the better the classes will fit the natural patterns. The observation of inter-correlations between ecological factors is the basis for interpreting ecological patterns in the field, and the Institute of Terrestrial Ecology (ITE) Land Classification formalises such subjective ideas. The data inevitably comprise a large number of factors in order to describe the environment adequately. Single factors, such as altitude, would only be useful on a national basis if they were the only dominant causative agent of ecological variation. The ITE Land Classification has defined 32 environmental categories called 'land classes', initially based on a sample of 1-km squares in Great Britain but subsequently extended to all 240 000 1-km squares. The original classification was produced using multivariate analysis of 75 environmental variables. The extension to all squares in GB was performed using a combination of logistic discrimination and discriminant functions. The classes have provided a stratification for successive ecological surveys, the results of which have characterised the classes in terms of botanical, zoological and landscape features. The classification has also been applied to integrate diverse datasets including satellite imagery, soils and socio-economic information. A variety of models have used the structure of the classification, for example to show potential land use change under different economic conditions. The principal data sets relevant for planning purposes have been incorporated into a user-friendly computer package, called the 'Countryside Information System'.

  16. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    PubMed

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for the differential classification of tissue thin sections to identify tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity of this label-free method as compared to histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high-quality diagnostic images, which is in the same time range as frozen sections prepared by pathologists.

  17. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
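
    The core idea, an LSTM consuming one feature vector per DCE time point and mapping the final hidden state to a benign-vs-malignant score, can be sketched as below. This is a minimal illustration with random weights and made-up dimensions, not the authors' trained VGGNet/LSTM pipeline.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(seq, Wx, Wh, b, w_out, b_out):
    """Run one LSTM layer over a sequence of feature vectors and map the
    final hidden state to a probability with a logistic readout."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:                          # one feature vector per time point
        z = Wx @ x + Wh @ h + b            # stacked gate pre-activations (4H,)
        i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
        g = np.tanh(z[3 * H:])             # candidate cell update
        c = f * c + i * g                  # cell state
        h = o * np.tanh(c)                 # hidden state
    return sigmoid(w_out @ h + b_out)      # benign-vs-malignant score

rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                          # feature dim, hidden size, time points (all hypothetical)
seq = rng.normal(size=(T, D))              # stand-in for per-time-point CNN features
Wx = rng.normal(scale=0.1, size=(4 * H, D))
Wh = rng.normal(scale=0.1, size=(4 * H, H))
w_out = rng.normal(scale=0.1, size=H)
score = lstm_classify(seq, Wx, Wh, np.zeros(4 * H), w_out, 0.0)
```

    In practice the weights would be learned by backpropagation through time; only the forward pass is shown here.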

  18. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and a simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities, respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as the gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 ± 8.44% and 91 ± 4.13%, respectively (t = -0.64, p = 0.56, DF = 4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.

  19. Developing framework for agent-based diabetes disease management system: user perspective.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza; Rahimi, Azin

    2014-02-01

    One of the characteristics of agents is mobility, which makes them very suitable for remote electronic health and telemedicine. The aim of this study is to develop a framework for agent-based diabetes information management at the national level by identifying the required agents. The main tool is a questionnaire designed in three sections based on a study of library resources, the performance of major organizations in the field of diabetes inside and outside the country, and interviews with experts in the medical, health information management and software fields. Questionnaires based on Delphi methods were distributed among 20 experts. In order to design and identify the agents required in health information management for the prevention and appropriate, rapid treatment of diabetes, the results were analyzed using SPSS 17 and plotted with FREEPLANE mind map software. Access to data technology in the proposed framework, in order of priority, is: mobile (mean 1.80), SMS and email (mean 2.80), internet and web (mean 3.30), phone (mean 3.60), and WiFi (mean 4.60). In delivering health care to diabetic patients, considering social and human aspects is essential. A systematic view of the implementation of agent systems is necessary, with attention to all aspects such as feedback, user acceptance, budget, motivation, hierarchy, useful standards, affordability for individuals, and the identification of barriers and opportunities.

  20. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
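
    A simple Bayesian per-pixel classifier of the kind described can be sketched with Gaussian class-conditional densities, one per class, applied independently per band. The two-band "clear"/"cloud" setup and all numbers below are illustrative assumptions, not the GES DAAC implementation.

```python
import numpy as np

def fit_nb(X, y):
    """Gaussian naive Bayes: per-class, per-band mean/variance plus class prior."""
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-6, np.mean(y == c))
            for c in np.unique(y)}

def predict_nb(X, params):
    """Assign each pixel the class with the highest log-posterior."""
    classes = sorted(params)
    def log_post(c):
        mu, var, prior = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        return ll + np.log(prior)
    scores = np.stack([log_post(c) for c in classes], axis=1)
    return np.array(classes)[scores.argmax(1)]

rng = np.random.default_rng(1)
# synthetic 2-band radiances: class 0 = "clear", class 1 = "cloud"
clear = rng.normal([0.1, 0.2], 0.05, size=(200, 2))
cloud = rng.normal([0.8, 0.9], 0.05, size=(200, 2))
X = np.vstack([clear, cloud])
y = np.repeat([0, 1], 200)
acc = (predict_nb(X, fit_nb(X, y)) == y).mean()
```

    The per-pixel independence is what keeps this fast enough for near-real-time scene classification.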

  1. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

    Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821

  2. DNA methylation-based classification and grading system for meningioma: a multicentre, retrospective analysis.

    PubMed

    Sahm, Felix; Schrimpf, Daniel; Stichel, Damian; Jones, David T W; Hielscher, Thomas; Schefzyk, Sebastian; Okonechnikov, Konstantin; Koelsche, Christian; Reuss, David E; Capper, David; Sturm, Dominik; Wirsching, Hans-Georg; Berghoff, Anna Sophie; Baumgarten, Peter; Kratz, Annekathrin; Huang, Kristin; Wefers, Annika K; Hovestadt, Volker; Sill, Martin; Ellis, Hayley P; Kurian, Kathreena M; Okuducu, Ali Fuat; Jungk, Christine; Drueschler, Katharina; Schick, Matthias; Bewerunge-Hudler, Melanie; Mawrin, Christian; Seiz-Rosenhagen, Marcel; Ketter, Ralf; Simon, Matthias; Westphal, Manfred; Lamszus, Katrin; Becker, Albert; Koch, Arend; Schittenhelm, Jens; Rushing, Elisabeth J; Collins, V Peter; Brehmer, Stefanie; Chavez, Lukas; Platten, Michael; Hänggi, Daniel; Unterberg, Andreas; Paulus, Werner; Wick, Wolfgang; Pfister, Stefan M; Mittelbronn, Michel; Preusser, Matthias; Herold-Mende, Christel; Weller, Michael; von Deimling, Andreas

    2017-05-01

    The WHO classification of brain tumours describes 15 subtypes of meningioma. Nine of these subtypes are allotted to WHO grade I, and three each to grade II and grade III. Grading is based solely on histology, with an absence of molecular markers. Although the existing classification and grading approach is of prognostic value, it harbours shortcomings such as ill-defined parameters for subtypes and grading criteria prone to arbitrary judgment. In this study, we aimed for a comprehensive characterisation of the entire molecular genetic landscape of meningioma to identify biologically and clinically relevant subgroups. In this multicentre, retrospective analysis, we investigated genome-wide DNA methylation patterns of meningiomas from ten European academic neuro-oncology centres to identify distinct methylation classes of meningiomas. The methylation classes were further characterised by DNA copy number analysis, mutational profiling, and RNA sequencing. Methylation classes were analysed for progression-free survival outcomes by the Kaplan-Meier method. The DNA methylation-based and WHO classification schema were compared using the Brier prediction score, analysed in an independent cohort with WHO grading, progression-free survival, and disease-specific survival data available, collected at the Medical University Vienna (Vienna, Austria), assessing methylation patterns with an alternative methylation chip. We retrospectively collected 497 meningiomas along with 309 samples of other extra-axial skull tumours that might histologically mimic meningioma variants. Unsupervised clustering of DNA methylation data clearly segregated all meningiomas from other skull tumours. We generated genome-wide DNA methylation profiles from all 497 meningioma samples. DNA methylation profiling distinguished six distinct clinically relevant methylation classes associated with typical mutational, cytogenetic, and gene expression patterns. Compared with WHO grading, classification by

  3. Prediction of hot regions in protein-protein interaction by combining density-based incremental clustering with feature-based classification.

    PubMed

    Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan

    2015-06-01

    Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
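
    The density-based clustering step can be illustrated with a minimal DBSCAN-style routine over residue coordinates. This is a generic sketch on synthetic 3D points, not the authors' incremental algorithm or their feature-based filtering stage.

```python
import numpy as np

def dbscan(X, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns a cluster label per point (-1 = noise)."""
    n = len(X)
    labels = np.full(n, -1)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already assigned, or not a core point
        stack = [i]
        while stack:                       # grow the cluster from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(k for k in neighbors[j] if labels[k] == -1)
        cluster += 1
    return labels

rng = np.random.default_rng(2)
# two dense patches of "residues" plus a few scattered noise points
X = np.vstack([rng.normal(0, 0.3, (20, 3)),
               rng.normal(5, 0.3, (20, 3)),
               rng.uniform(-3, 8, (5, 3))])
labels = dbscan(X, eps=1.0, min_pts=3)
n_clusters = labels.max() + 1
```

    In the paper's pipeline, each rough cluster found this way would then be pruned by a feature-based classifier that removes non-hot-spot residues.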

  4. Patient prognosis based on feature extraction, selection and classification of EEG periodic activity.

    PubMed

    Sánchez-González, Alain; García-Zapirain, Begoña; Maestro Saiz, Iratxe; Yurrebaso Santamaría, Izaskun

    2015-01-01

    Periodic activity in electroencephalography (PA-EEG) comprises a series of repetitive wave patterns that may appear in different cerebral regions and are caused by many different pathologies. Diagnosis based on PA-EEG is an arduous task for experts in Clinical Neurophysiology, and relies mainly on other clinical features of the patient. Given this difficulty, it is also very complicated to establish the prognosis of patients who present PA-EEG. The goal of this paper is to propose a method capable of determining patient prognosis based on characteristics of the PA-EEG activity. The approach, based on a parallel classification architecture and a majority-vote system, proved successful, achieving a success rate of 81.94% in classifying patient prognosis on our database.
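
    The majority-vote fusion over a parallel bank of classifiers reduces to a few lines; the classifiers and labels below are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse one label per parallel classifier into a single decision."""
    return Counter(predictions).most_common(1)[0][0]

# three hypothetical classifiers, each trained on a different feature subset,
# voting on a patient's prognosis
votes = ["favourable", "unfavourable", "favourable"]
decision = majority_vote(votes)
```

    With an odd number of voters on a binary label, ties cannot occur, which is one reason parallel architectures often use an odd classifier count.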

  5. Creating a three level building classification using topographic and address-based data for Manchester

    NASA Astrophysics Data System (ADS)

    Hussain, M.; Chen, D.

    2014-11-01

    Buildings, the basic unit of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters, which can provide insight into the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties, such as area and perimeter, and layout) and spatial relationships from a large topographic and address-based GIS database. Machine learning algorithms along with exploratory spatial analysis techniques are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.

  6. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    NASA Astrophysics Data System (ADS)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle down to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological and characteristic similarities with porous saturated single-phase structures, where each agent of the 'structure' carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.
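
    The underlying formalism, a 1D cellular automaton whose successive rows form a 2D pattern, can be sketched with a standard elementary CA; the paper's hybrid agents and intra-agent rules are not reproduced here.

```python
import numpy as np

def elementary_ca(rule, width=64, steps=32):
    """Evolve a 1D binary cellular automaton from a single seed cell;
    stacking the successive rows yields a 2D pattern, analogous to the
    paper's 1D-CA-in-a-2D-domain setup."""
    table = [(rule >> i) & 1 for i in range(8)]   # Wolfram rule lookup table
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1                           # single seed in the middle
    grid = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[(l << 2) | (c << 1) | r]
                        for l, c, r in zip(left, row, right)])
        grid.append(row)
    return np.array(grid)

pattern = elementary_ca(rule=90)   # rule 90 produces a Sierpinski-like texture
```

    A hybrid-agent variant would additionally let each cell update an internal state (e.g. a fluid/solid fraction) alongside this neighbour-driven rule.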

  7. 14 CFR 1203.412 - Classification guides.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Classification guides. 1203.412 Section... PROGRAM Guides for Original Classification § 1203.412 Classification guides. (a) General. A classification guide, based upon classification determinations made by appropriate program and classification...

  8. 14 CFR 1203.412 - Classification guides.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Classification guides. 1203.412 Section... PROGRAM Guides for Original Classification § 1203.412 Classification guides. (a) General. A classification guide, based upon classification determinations made by appropriate program and classification...

  9. 14 CFR 1203.412 - Classification guides.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Classification guides. 1203.412 Section 1203... Guides for Original Classification § 1203.412 Classification guides. (a) General. A classification guide, based upon classification determinations made by appropriate program and classification authorities...

  10. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification handles a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy when a classification method is applied; it is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP), and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
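
    The wavelet-thresholding idea can be sketched with a single-level 2D Haar DWT and soft thresholding of the detail sub-bands. The actual method applies recursive separable 2D DWTs to EMP components on GPUs; this plain-NumPy illustration, with synthetic data and an arbitrary threshold, only shows the threshold-and-reconstruct core.

```python
import numpy as np

def haar2d(x):
    """One level of an orthonormal separable 2D Haar DWT (rows then columns)."""
    a, d = (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)
    split = lambda m: ((m[:, 0::2] + m[:, 1::2]) / np.sqrt(2),
                       (m[:, 0::2] - m[:, 1::2]) / np.sqrt(2))
    (LL, LH), (HL, HH) = split(a), split(d)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    def merge_cols(lo, hi):
        m = np.empty((lo.shape[0], 2 * lo.shape[1]))
        m[:, 0::2], m[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
        return m
    a, d = merge_cols(LL, LH), merge_cols(HL, HH)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(img, thresh):
    """Soft-threshold the detail sub-bands, keep the approximation band."""
    LL, LH, HL, HH = haar2d(img)
    soft = lambda w: np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
    return ihaar2d(LL, soft(LH), soft(HL), soft(HH))

rng = np.random.default_rng(3)
clean = np.outer(np.sin(np.linspace(0, np.pi, 32)), np.ones(32))  # smooth "component"
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = wavelet_denoise(noisy, thresh=0.3)
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
```

    Because the smooth signal concentrates in the approximation band while noise spreads evenly, thresholding the detail bands suppresses noise with little signal loss.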

  11. The Comprehensive AOCMF Classification: Skull Base and Cranial Vault Fractures – Level 2 and 3 Tutorial

    PubMed Central

    Ieva, Antonio Di; Audigé, Laurent; Kellman, Robert M.; Shumrick, Kevin A.; Ringl, Helmut; Prein, Joachim; Matula, Christian

    2014-01-01

    The AOCMF Classification Group developed a hierarchical three-level craniomaxillofacial classification system with increasing level of complexity and details. The highest level 1 system distinguish four major anatomical units, including the mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). This tutorial presents the level 2 and more detailed level 3 systems for the skull base and cranial vault units. The level 2 system describes fracture location outlining the topographic boundaries of the anatomic regions, considering in particular the endocranial and exocranial skull base surfaces. The endocranial skull base is divided into nine regions; a central skull base adjoining a left and right side are divided into the anterior, middle, and posterior skull base. The exocranial skull base surface and cranial vault are divided in regions defined by the names of the bones involved: frontal, parietal, temporal, sphenoid, and occipital bones. The level 3 system allows assessing fracture morphology described by the presence of fracture fragmentation, displacement, and bone loss. A documentation of associated intracranial diagnostic features is proposed. This tutorial is organized in a sequence of sections dealing with the description of the classification system with illustrations of the topographical skull base and cranial vault regions along with rules for fracture location and coding, a series of case examples with clinical imaging and a general discussion on the design of this classification. PMID:25489394

  12. A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update

    NASA Astrophysics Data System (ADS)

    Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F.

    2018-06-01

    Objective. Most current electroencephalography (EEG)-based brain–computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types used in this field, as described in our 2007 review paper. Now, approximately ten years after that review was published, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs and what the outcomes were, and to identify their pros and cons. Main results. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performance on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training sample settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges
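
    Shrinkage linear discriminant analysis, singled out above for small training samples, regularises the covariance estimate toward a scaled identity before inverting it. A minimal binary version might look like the following (with an arbitrary fixed shrinkage weight rather than the usual analytic Ledoit-Wolf estimate, and synthetic stand-ins for EEG features):

```python
import numpy as np

def shrinkage_lda_fit(X, y, lam=0.5):
    """Binary LDA with a shrinkage-regularised covariance,
    Sigma_hat = (1 - lam) * S + lam * nu * I, a common small-sample remedy."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)   # pooled sample covariance
    nu = np.trace(S) / S.shape[0]                  # average eigenvalue
    Sigma = (1 - lam) * S + lam * nu * np.eye(S.shape[0])
    w = np.linalg.solve(Sigma, m1 - m0)            # projection direction
    b = -w @ (m0 + m1) / 2                         # threshold at the midpoint
    return w, b

rng = np.random.default_rng(4)
d, n = 30, 20                      # more feature dimensions than samples per class
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(0.6, 1.0, (n, d))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], n)
w, b = shrinkage_lda_fit(X, y, lam=0.5)
acc = ((X @ w + b > 0).astype(int) == y).mean()
```

    The shrinkage term keeps Sigma well conditioned when the sample covariance is singular or nearly so, which is exactly the small-training-set regime of most BCI calibration sessions.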

  13. Agent-based modelling of consumer energy choices

    NASA Astrophysics Data System (ADS)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent the behaviours of energy consumers, such as individual households, using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.

  14. Interannual rainfall variability and SOM-based circulation classification

    NASA Astrophysics Data System (ADS)

    Wolski, Piotr; Jack, Christopher; Tadross, Mark; van Aardenne, Lisa; Lennard, Christopher

    2018-01-01

    Self-Organizing Map (SOM)-based classifications of synoptic circulation patterns are increasingly being used to interpret large-scale drivers of local climate variability, and as part of statistical downscaling methodologies. These applications rely on a basic premise of synoptic climatology, i.e. that local weather is conditioned by the large-scale circulation. While it is clear that this relationship holds in principle, the implications of its implementation through SOM-based classification, particularly at interannual and longer time scales, are not well recognized. Here we use a SOM to understand the interannual synoptic drivers of climate variability at two locations in the winter and summer rainfall regimes of South Africa. We quantify the portion of variance in seasonal rainfall totals that is explained by year-to-year differences in the synoptic circulation, as schematized by a SOM. We furthermore test how different spatial domain sizes and synoptic variables affect the ability of the SOM to capture the dominant synoptic drivers of interannual rainfall variability. Additionally, we identify systematic synoptic forcing that is not captured by the SOM classification. The results indicate that the frequency of synoptic states, as schematized by a relatively disaggregated SOM (7 × 9) of prognostic atmospheric variables, including specific humidity, air temperature and geostrophic winds, captures only 20-45% of interannual local rainfall variability, and that the residual variance contains a strong systematic component. A multivariate linear regression framework demonstrates that this residual variance can largely be explained using synoptic variables over a particular location; even though these variables are used in the development of the SOM, their influence diminishes with the size of the SOM spatial domain. The influence of the SOM domain size, the choice of SOM atmospheric variables and grid-point explanatory variables on the levels of explained
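
    A minimal SOM of the kind used for such circulation classifications can be sketched as follows; the grid size, decay schedules, and two-regime toy data are illustrative assumptions, not the 7 × 9 SOM of the study.

```python
import numpy as np

def train_som(X, rows=3, cols=3, iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny SOM: each node holds a prototype 'synoptic state';
    classifying a sample means finding its best-matching node."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(rows * cols, X.shape[1]))          # prototype vectors
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        winner = np.argmin(((W - x) ** 2).sum(1))           # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                               # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3                  # shrinking neighbourhood
        h = np.exp(-((grid - grid[winner]) ** 2).sum(1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                      # pull neighbourhood toward x
    return W

def best_matching_unit(W, x):
    return int(np.argmin(((W - x) ** 2).sum(1)))

rng = np.random.default_rng(5)
# two synthetic "circulation regimes" in a 4-variable state space
X = np.vstack([rng.normal(-2, 0.3, (100, 4)), rng.normal(2, 0.3, (100, 4))])
W = train_som(X)
qe = np.mean([np.sqrt(((W - x) ** 2).sum(1).min()) for x in X])  # quantization error
```

    Interannual analysis then reduces to counting, per season, how often each node is the best-matching unit and regressing rainfall totals on those frequencies.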

  15. A new classification scheme of plastic wastes based upon recycling labels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özkan, Kemal, E-mail: kozkan@ogu.edu.tr; Ergin, Semih, E-mail: sergin@ogu.edu.tr; Işık, Şahin, E-mail: sahini@ogu.edu.tr

    Highlights: • PET, HDPE or PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented: edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can generally be defined in terms of combinations of erosion and dilation. The effects of bottle color as well as label are eliminated using these operations. Secondly, the pixel-wise intensity values of the plastic bottle images have been used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastics are considered, due to their higher prevalence relative to other plastic types worldwide. The decision mechanism consists of five different feature extraction methods, including Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple

  16. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purposes of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments were evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rate of both the GMM- and SVM-based methods on the FES test is more than 99.3% and 96.8% for gender and age group classification, respectively.
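
    The generative strategy, score each class-conditional model and pick the most likely class, can be illustrated with one full-covariance Gaussian per class, a one-component stand-in for the per-class GMMs; the feature names and numbers below are synthetic, not spirometry data.

```python
import numpy as np

def fit_class_gaussians(X, y):
    """One full-covariance Gaussian per class (equal priors assumed)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = (Xc.mean(0), np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1]))
    return models

def log_likelihood(x, mu, Sigma):
    """Gaussian log-density up to a constant (enough for comparing classes)."""
    d = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + d @ np.linalg.solve(Sigma, d))

def predict(models, x):
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

rng = np.random.default_rng(6)
# synthetic stand-ins for spirometry-style features (e.g. FEV1, FVC, PEF)
male = rng.normal([4.0, 5.0, 9.0], 0.4, (150, 3))
female = rng.normal([3.0, 3.8, 7.0], 0.4, (150, 3))
X = np.vstack([male, female])
y = np.array(["M"] * 150 + ["F"] * 150)
models = fit_class_gaussians(X, y)
acc = np.mean([predict(models, x) == t for x, t in zip(X, y)])
```

    A proper GMM would replace each single Gaussian with a weighted mixture fitted by EM, but the likelihood-comparison decision rule is identical.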

  17. A CLASSIFICATION OF U.S. ESTUARIES BASED ON PHYSICAL, HYDROLOGIC ATTRIBUTES

    EPA Science Inventory

    A classification of U.S. estuaries is presented based on estuarine characteristics that have been identified as important for quantifying stressor-response relationships in coastal systems. Estuaries within a class have similar physical/hydrologic and land use characteris...

  18. Evidence-based provisional clinical classification criteria for autoinflammatory periodic fevers.

    PubMed

    Federici, Silvia; Sormani, Maria Pia; Ozen, Seza; Lachmann, Helen J; Amaryan, Gayane; Woo, Patricia; Koné-Paut, Isabelle; Dewarrat, Natacha; Cantarini, Luca; Insalaco, Antonella; Uziel, Yosef; Rigante, Donato; Quartier, Pierre; Demirkaya, Erkan; Herlin, Troels; Meini, Antonella; Fabio, Giovanna; Kallinich, Tilmann; Martino, Silvana; Butbul, Aviel Yonatan; Olivieri, Alma; Kuemmerle-Deschner, Jasmin; Neven, Benedicte; Simon, Anna; Ozdogan, Huri; Touitou, Isabelle; Frenkel, Joost; Hofer, Michael; Martini, Alberto; Ruperto, Nicolino; Gattorno, Marco

    2015-05-01

    The objective of this work was to develop and validate a set of clinical criteria for the classification of patients affected by periodic fevers. Patients with inherited periodic fevers (familial Mediterranean fever (FMF); mevalonate kinase deficiency (MKD); tumour necrosis factor receptor-associated periodic fever syndrome (TRAPS); cryopyrin-associated periodic syndromes (CAPS)) enrolled in the Eurofever Registry up until March 2013 were evaluated. Patients with periodic fever, aphthosis, pharyngitis and adenitis (PFAPA) syndrome were used as negative controls. For each genetic disease, patients were considered to be 'gold standard' on the basis of the presence of a confirmatory genetic analysis. Clinical criteria were formulated on the basis of univariate and multivariate analysis in an initial group of patients (training set) and validated in an independent set of patients (validation set). A total of 1215 consecutive patients with periodic fevers were identified, and 518 gold standard patients (291 FMF, 74 MKD, 86 TRAPS, 67 CAPS) and 199 patients with PFAPA as disease controls were evaluated. The univariate and multivariate analyses identified a number of clinical variables that correlated independently with each disease, and four provisional classification scores were created. Cut-off values of the classification scores were chosen using receiver operating characteristic curve analysis as those giving the highest sensitivity and specificity. The classification scores were then tested in an independent set of patients (validation set) with an area under the curve of 0.98 for FMF, 0.95 for TRAPS, 0.96 for MKD, and 0.99 for CAPS. In conclusion, evidence-based provisional clinical criteria with high sensitivity and specificity for the clinical classification of patients with inherited periodic fevers have been developed. Published by the BMJ Publishing Group Limited. 
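
The cutoff-selection step above — choosing the classification-score threshold that gives the highest sensitivity and specificity on a ROC curve — can be sketched on synthetic scores. Youden's J is one common criterion for that choice; the paper does not state its exact rule, so treat this as an assumption:

```python
# Pick the score threshold maximizing sensitivity + specificity (Youden's J)
# from a ROC curve. Scores and labels are fabricated stand-ins for the
# registry data, with cases scoring higher than controls on average.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = np.r_[np.ones(100), np.zeros(100)]        # 1 = gold standard, 0 = control
scores = np.r_[rng.normal(3.0, 1.0, 100), rng.normal(0.0, 1.0, 100)]

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                    # Youden's J = sensitivity + specificity - 1
best = j.argmax()
cutoff = thresholds[best]
sens, spec = tpr[best], 1 - fpr[best]
```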

  19. Large-scale optimization-based classification models in medicine and biology.

    PubMed

    Lee, Eva K

    2007-06-01

    We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. This provides motivation for pursuing its use as a

  20. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
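
A histogram curve matching classifier of the kind described above can be sketched as follows. Each class keeps a reference digital-number histogram, and an image object is assigned to the class its histogram matches best. Histogram intersection is used here as the matching measure; the paper's actual measures and class set may differ:

```python
# Classify an image object by matching its DN histogram against per-class
# reference histograms (histogram intersection). All data are synthetic.
import numpy as np

bins = np.linspace(0, 255, 33)   # 32 DN bins

def hist(pixels):
    h, _ = np.histogram(pixels, bins=bins)
    return h / h.sum()

rng = np.random.default_rng(3)
refs = {"vegetation": hist(rng.normal(60, 15, 5000)),   # hypothetical classes
        "pavement": hist(rng.normal(170, 20, 5000))}

def classify_object(pixels):
    h = hist(pixels)
    # Larger histogram intersection = better curve match
    return max(refs, key=lambda c: np.minimum(h, refs[c]).sum())

obj = rng.normal(65, 15, 400)    # pixels of an unknown image object
label = classify_object(obj)
```

Unlike a nearest-neighbor-to-mean rule, this comparison uses the whole distribution of the object's pixel values, which is exactly the property the study exploits.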

  1. Amino acid–based surfactants: New antimicrobial agents.

    PubMed

    Pinazo, A; Manresa, M A; Marques, A M; Bustelo, M; Espuny, M J; Pérez, L

    2016-02-01

    The rapid increase of drug-resistant bacteria makes the development of new antimicrobial agents necessary. Synthetic amino acid-based surfactants constitute a promising alternative to conventional antimicrobial compounds given that they can be prepared from renewable raw materials. In this review, we discuss the structural features that promote antimicrobial activity of amino acid-based surfactants. Monocatenary, dicatenary and gemini surfactants that contain different amino acids on the polar head and show activity against bacteria are reviewed. The synthesis and basic physico-chemical properties have also been included.

  2. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification in a transmission line using Programmable Automation and Control technology based on National Instruments Compact Reconfigurable I/O (cRIO) devices. The LabVIEW software combined with cRIO can perform real-time data acquisition on the transmission line. When a fault occurs in the system, current waveforms are distorted by transients, and their pattern changes according to the type of fault. The three-phase alternating current, zero-sequence and positive-sequence current data generated by LabVIEW through the cRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and classification of the fault type at high speed, and can therefore be employed in practical applications.
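
The fuzzy-logic idea can be illustrated with a toy rule base: fuzzify per-phase current magnitudes and use the zero-sequence current to decide whether the fault involves ground. Membership shapes, thresholds and per-unit values below are invented for demonstration; a real relay would be tuned to the protected line:

```python
# Toy fuzzy fault classifier: per-phase "high current" membership plus a
# zero-sequence ground indicator names the fault type (e.g. "A-G fault").
def high(x, lo=1.5, hi=3.0):
    # Ramp membership for "abnormally high current" (per-unit), clipped to [0, 1]
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def classify_fault(ia, ib, ic, i0):
    mu = [high(ia), high(ib), high(ic)]
    ground = i0 > 0.1                    # zero-sequence current implies ground
    phases = "".join(p for p, m in zip("ABC", mu) if m > 0.5)
    if not phases:
        return "no fault"
    return phases + ("-G" if ground else "") + " fault"

result = classify_fault(ia=2.8, ib=1.0, ic=1.0, i0=0.4)
```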

  3. Application of GIS-based Procedure on Slopeland Use Classification and Identification

    NASA Astrophysics Data System (ADS)

    KU, L. C.; LI, M. C.

    2016-12-01

    In Taiwan, the "Slopeland Conservation and Utilization Act" regulates the management of slopelands. It categorizes slopeland into land suitable for agriculture or animal husbandry, land suitable for forestry, and land for enhanced conservation, according to the environmental factors of average slope, effective soil depth, soil erosion and parent rock. Traditionally, investigation of these environmental factors requires extensive field work. It has been confronted with many practical issues, such as non-evaluated cadastral parcels, evaluation results that depend on expert opinion, difficulties in field measurement and judgment, and time consumption. This study aimed to develop a GIS-based procedure to accelerate slopeland use classification and improve its quality. First, the environmental factors of slopelands were analyzed with GIS and SPSS software. The analysis used the digital elevation model (DEM), soil depth map, land use map and satellite images. Second, 5% of the analyzed slopelands were selected for site investigations to correct the classification results. Finally, a second examination randomly selected 2% of the analyzed slopelands for accuracy evaluation. The results showed that the developed procedure is effective for slopeland use classification and identification. Keywords: Slopeland Use Classification, GIS, Management

  4. Spectral-spatial hyperspectral image classification using super-pixel-based spatial pyramid representation

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian

    2016-10-01

    Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by gradually decreasing the superpixel number, enlarging the spatial regions around labelled samples. Using each superpixel map, a sparse representation of the pixels within each spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and used to train a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets: the Indian Pines image, containing 16 agricultural scene categories at 20 m resolution acquired by AVIRIS, and the University of Pavia image, containing 9 land-use categories at 1.3 m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art methods. The major contributions of this technique are (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used for the spatial correlation study, and (3) evaluation on two public hyperspectral image datasets with superior classification performance.

  5. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  6. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
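
The efficiency idea behind such fast VQ searches can be sketched with partial-distance elimination: abandon a candidate as soon as its running squared distance exceeds the best found so far. This is one of the standard VQ fast-search tricks; the paper's pyramid-based algorithms are more elaborate, and the "fingercode templates" below are random stand-ins:

```python
# Partial-distance nearest-neighbour search over stand-in fingercode vectors.
# A candidate is rejected early once its partial distance exceeds the best.
import numpy as np

def nearest_partial(query, codebook):
    best_i, best_d = -1, float("inf")
    for i, vec in enumerate(codebook):
        d = 0.0
        for q, v in zip(query, vec):
            d += (q - v) ** 2
            if d >= best_d:          # early termination: cannot beat best
                break
        else:
            best_i, best_d = i, d    # loop completed, so d < best_d
    return best_i, best_d

rng = np.random.default_rng(4)
codebook = rng.normal(size=(50, 16))              # hypothetical templates
query = codebook[7] + rng.normal(scale=0.01, size=16)
idx, dist = nearest_partial(query, codebook)
```

The early exit never changes the answer, only the work done, which is why such tricks preserve accuracy while cutting computation.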

  7. [Surgical treatment of chronic pancreatitis based on classification of M. Buchler and coworkers].

    PubMed

    Krivoruchko, I A; Boĭko, V V; Goncharova, N N; Andreeshchev, S A

    2011-08-01

    The results of surgical treatment of 452 patients suffering from chronic pancreatitis (CHP) were analyzed. The CHP classification elaborated by M. Buchler and coworkers (2009), based on clinical signs, morphological peculiarities and analysis of pancreatic function, contains scientifically substantiated recommendations for the choice of diagnostic methods and complex treatment of the disease. The proposed classification is simple to apply and constitutes an instrument for studying and comparing the severity of the CHP course, the patients' prognosis and treatment.

  8. Elements of decisional dynamics: An agent-based approach applied to artificial financial market

    NASA Astrophysics Data System (ADS)

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2018-02-01

    This paper introduces an original mathematical description for describing agents' decision-making process in the case of problems affected by both individual and collective behaviors in systems characterized by nonlinear, path dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and volatility clustering phenomenon).
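
The deterministic matching step of a central limit order book, mentioned above as the source of the price dynamics, can be sketched minimally. Agents' fuzzy decision rules are out of scope here; orders are just price/quantity pairs, and the class below is an illustrative toy, not the paper's system:

```python
# Minimal central limit order book: a new order crosses against the best
# opposite-side quotes at compatible prices; any remainder rests in the book.
import heapq

class OrderBook:
    def __init__(self):
        self.bids, self.asks = [], []   # bids as a max-heap (negated prices)

    def submit(self, side, price, qty):
        trades = []
        if side == "buy":
            # Cross against asks priced at or below the bid
            while qty and self.asks and self.asks[0][0] <= price:
                ap, aq = heapq.heappop(self.asks)
                take = min(qty, aq)
                trades.append((ap, take))
                qty -= take
                if aq > take:
                    heapq.heappush(self.asks, (ap, aq - take))
            if qty:
                heapq.heappush(self.bids, (-price, qty))
        else:
            # Cross against bids priced at or above the ask
            while qty and self.bids and -self.bids[0][0] >= price:
                bp, bq = heapq.heappop(self.bids)
                take = min(qty, bq)
                trades.append((-bp, take))
                qty -= take
                if bq > take:
                    heapq.heappush(self.bids, (-bp, bq - take))
            if qty:
                heapq.heappush(self.asks, (price, qty))
        return trades

book = OrderBook()
book.submit("sell", 101.0, 5)
book.submit("sell", 100.0, 5)
trades = book.submit("buy", 100.5, 7)   # lifts only the 100.0 ask
```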

  10. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    Toolkits surveyed include REPAST (Java, Python, C#, open source) and MASON (Multi-Agent Modeling Language, a Swarm extension). Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations. Repast makes it easy for inexperienced users to build models by including a built-in simple model and providing interfaces through which menus and Python

  11. Object-based vegetation classification with high resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Yu, Qian

    Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. This evaluation analysis answered several interesting scientific questions

  12. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

    Recent advances in remote sensing have witnessed a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for processing, analysing and classifying them effectively due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are geared toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correctness and incorrectness on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, which captures the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to fully automatic and effective VHR image classification.

  13. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    PubMed Central

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. The purpose of this study was to develop quantification tools, including MR-based AC, for combined MR/PET brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered-subsets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR
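
The fuzzy C-means step used for the gray matter / white matter / CSF classification can be sketched compactly on 1-D toy intensities. The paper uses a modified variant on real MR volumes; this is only the standard algorithm with invented intensity values:

```python
# Standard fuzzy C-means on a 1-D intensity signal: alternate membership
# updates (inverse-distance weighting with fuzzifier m) and center updates.
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=50):
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))   # spread-out init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))          # membership update
        u /= u.sum(axis=1, keepdims=True)     # rows sum to 1
        centers = ((u ** m).T @ x) / (u ** m).sum(axis=0)
    return centers, u

rng = np.random.default_rng(5)
# Toy intensities for three "tissue" classes around 40, 90 and 140
x = np.r_[rng.normal(40, 3, 100), rng.normal(90, 3, 100), rng.normal(140, 3, 100)]
centers, u = fuzzy_cmeans(x, c=3)
```

Each voxel then inherits the attenuation coefficient of the class in which it has the highest membership, which is the bridge to the AC map described above.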

  14. A novel fruit shape classification method based on multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns in automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we proposed the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and the energy distribution of its boundary at multiple scales was explored for shape extraction. MSED offers not only the main energy, which represents primary shape information at lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We addressed the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcame the inconvenience of traditional methods and provided rotation invariance. The method separates normal citrus from seriously abnormal fruit with a classification rate superior to 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
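
The multi-scale energy idea can be sketched as follows: resample the boundary to 256 points, decompose the radius signal with a wavelet, and use per-scale detail energies as a shape feature. A Haar decomposition is hand-rolled here for self-containment; the paper's wavelet, boundary signal and BP-network classifier are not reproduced:

```python
# Per-scale Haar detail energies of a resampled boundary-radius signal,
# as a stand-in for the MSED shape descriptor. Boundaries are synthetic.
import numpy as np

def haar_energies(signal, levels=4):
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation for next level
        energies.append(float((d ** 2).sum()))
    return energies

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
round_fruit = 1.0 + 0.01 * np.sin(3 * t)   # near-circular boundary radii
lumpy_fruit = 1.0 + 0.25 * np.sin(3 * t)   # strongly deformed boundary

e_round = haar_energies(round_fruit)
e_lumpy = haar_energies(lumpy_fruit)
```

A deformed boundary concentrates more energy in the detail bands than a near-circular one, which is what makes these energies usable as classification features.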

  15. Motif-Based Text Mining of Microbial Metagenome Redundancy Profiling Data for Disease Classification.

    PubMed

    Wang, Yin; Li, Rudong; Zhou, Yuhua; Ling, Zongxin; Guo, Xiaokui; Xie, Lu; Liu, Lei

    2016-01-01

    Text data of 16S rRNA are informative for classifications of microbiota-associated diseases. However, the raw text data need to be systematically processed so that features for classification can be defined/extracted; moreover, the high-dimension feature spaces generated by the text data also pose an additional difficulty. Here we present a Phylogenetic Tree-Based Motif Finding algorithm (PMF) to analyze 16S rRNA text data. By integrating phylogenetic rules and other statistical indexes for classification, we can effectively reduce the dimension of the large feature spaces generated by the text datasets. Using the retrieved motifs in combination with common classification methods, we can discriminate different samples of both pneumonia and dental caries better than other existing methods. We extend the phylogenetic approaches to perform supervised learning on microbiota text data to discriminate the pathological states for pneumonia and dental caries. The results have shown that PMF may enhance the efficiency and reliability in analyzing high-dimension text data.

  16. Integration agent-based models and GIS as a virtual urban dynamic laboratory

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Liu, Miaolong

    2007-06-01

    Based on agent-based models and a spatial data model, a tight-coupling method for integrating GIS and agent-based models (ABM) is discussed in this paper. The use of object orientation for both spatial data and spatial process models facilitates their integration, which allows exploration and explanation of spatio-temporal phenomena such as urban dynamics. In order to better understand how tight coupling might proceed, and to evaluate the possible functional and efficiency gains from such a coupling, the agent-based model and the spatial data model are discussed, followed by the relationships governing the interaction between spatial data models and agent-based process models. After that, a realistic crowd flow simulation experiment is presented. Using tools provided by general GIS systems and a few specific programming languages, a new software system integrating GIS and MAS as a virtual laboratory for simulating pedestrian flows in a crowd activity centre has been developed successfully. Under the environment supported by this software system, as an application case, the dynamic evolution of pedestrian flows (the dispersal process for spectators) in a crowd activity centre, the Shanghai Stadium, has been simulated successfully. At the end of the paper, some new research problems are pointed out for future work.
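
The pedestrian-dispersal idea can be reduced to a toy agent-based sketch: agents on a grid step toward their nearest exit each tick. This is a crude stand-in for the stadium simulation coupled to GIS layers in the paper; grid size, exits and movement rule are all invented:

```python
# Toy evacuation ABM: each tick, every agent moves one cell toward its
# nearest exit (Manhattan distance); agents reaching an exit leave the grid.
import random

def simulate(agents, exits, steps=200, seed=0):
    rng = random.Random(seed)
    out = 0
    agents = list(agents)
    for _ in range(steps):
        remaining = []
        for (x, y) in agents:
            ex, ey = min(exits, key=lambda e: abs(e[0] - x) + abs(e[1] - y))
            # Move one cell toward the chosen exit (axis ties broken randomly)
            if x != ex and (y == ey or rng.random() < 0.5):
                x += 1 if ex > x else -1
            elif y != ey:
                y += 1 if ey > y else -1
            if (x, y) in exits:
                out += 1
            else:
                remaining.append((x, y))
        agents = remaining
    return out, agents

evacuated, left = simulate(agents=[(i % 10, i // 10) for i in range(100)],
                           exits=[(0, 0), (9, 9)], steps=60)
```

Even this toy version shows the ABM pattern the paper builds on: global flow dynamics emerge from purely local per-agent rules over a spatial substrate.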

  17. Comparing the MRI-based Goutallier Classification to an experimental quantitative MR spectroscopic fat measurement of the supraspinatus muscle.

    PubMed

    Gilbert, Fabian; Böhm, Dirk; Eden, Lars; Schmalzl, Jonas; Meffert, Rainer H; Köstler, Herbert; Weng, Andreas M; Ziegler, Dirk

    2016-08-22

    The Goutallier Classification is a semi-quantitative classification system to determine the amount of fatty degeneration in rotator cuff muscles. Although initially proposed for axial computed tomography scans, it is currently applied to magnetic resonance imaging (MRI) scans. The role for its clinical use is controversial, as the reliability of the classification has been shown to be inconsistent. The purpose of this study was to compare the semi-quantitative MRI-based Goutallier Classification applied by 5 different raters to experimental MR spectroscopic quantitative fat measurement in order to determine the correlation between this classification system and the true extent of fatty degeneration shown by spectroscopy. MRI scans of 42 patients with rotator cuff tears were examined by 5 shoulder surgeons and graded according to the MRI-based Goutallier Classification proposed by Fuchs et al. Additionally, the fat/water ratio was measured with MR spectroscopy using the experimental SPLASH technique. The semi-quantitative grading according to the Goutallier Classification was statistically correlated with the quantitatively measured fat/water ratio using Spearman's rank correlation. Statistical analysis of the data revealed only fair correlation of the Goutallier Classification system and the quantitative fat/water ratio, with R = 0.35 (p < 0.05). Dichotomizing the scale raised the correlation to 0.72. The interobserver and intraobserver reliabilities were substantial, with R = 0.62 and R = 0.74 (p < 0.01). The correlation between the semi-quantitative MRI-based Goutallier Classification system and MR spectroscopic fat measurement is weak. As an adequate estimation of fatty degeneration based on standard MRI may not be possible, quantitative methods need to be considered in order to increase diagnostic safety and thus provide patients with ideal care in regard to the amount of fatty degeneration. Spectroscopic MR measurement may increase the accuracy of
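
The statistics used above — Spearman rank correlation between ordinal grades and a continuous fat/water ratio, with and without dichotomization — can be sketched on fabricated values (the numbers below are illustrative, not the study's data):

```python
# Spearman correlation of ordinal Goutallier-style grades against a
# continuous fat/water ratio, then the same after dichotomizing both scales.
import numpy as np
from scipy.stats import spearmanr

grades = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 4])        # one rater's grades
fat_ratio = np.array([.05, .12, .09, .20, .15, .18, .35, .30, .50, .60])

rho, p = spearmanr(grades, fat_ratio)

# Dichotomize: grade <= 1 vs > 1; ratio below vs above its median
g2 = (grades > 1).astype(int)
f2 = (fat_ratio > np.median(fat_ratio)).astype(int)
rho2, p2 = spearmanr(g2, f2)
```

Collapsing a 5-point ordinal scale to two levels discards the disagreements between adjacent grades, which is why a dichotomized correlation can look much stronger than the full-scale one.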

  18. Training sample selection based on self-training for liver cirrhosis classification using ultrasound images

    NASA Astrophysics Data System (ADS)

    Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao

    2017-03-01

    Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease that can advance to liver cancer, so early detection and appropriate treatment are crucial. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs), which directly affects classification accuracy, is very important. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous work, training ROI selection by MILBoost and multiple-ROI classification based on the product rule were proposed to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Evaluation experiments were performed to assess the effect of self-training, using both manually and automatically selected ROIs. Experimental results show that self-training on manually selected ROIs achieved higher classification performance than the other approaches, including our conventional methods. Manual ROI definition and sample selection are important for improving classification accuracy in cirrhosis detection using ultrasound images.
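    The self-training idea above can be sketched generically: train a base classifier on labeled ROIs, then iteratively absorb its most confident predictions on unlabeled ROIs into the training set. The nearest-centroid base learner, 1-D "texture feature" values, and the confidence threshold below are all invented for illustration; they are not the authors' pipeline.

    ```python
    # Generic self-training sketch (hypothetical features and base learner).

    def train_centroid(labeled):
        """Fit a two-class nearest-centroid model from (feature, label) pairs."""
        pos = [x for x, y in labeled if y == 1]
        neg = [x for x, y in labeled if y == 0]
        return sum(pos) / len(pos), sum(neg) / len(neg)

    def predict_with_confidence(model, x):
        """Label by nearest centroid; confidence is the normalized margin."""
        c1, c0 = model
        d1, d0 = abs(x - c1), abs(x - c0)
        label = 1 if d1 < d0 else 0
        conf = abs(d1 - d0) / (d1 + d0 + 1e-9)   # in [0, 1)
        return label, conf

    def self_train(labeled, unlabeled, threshold=0.5, max_rounds=10):
        labeled, pool = list(labeled), list(unlabeled)
        for _ in range(max_rounds):
            model = train_centroid(labeled)
            scored = [(x, *predict_with_confidence(model, x)) for x in pool]
            added = [(x, y) for x, y, c in scored if c >= threshold]
            if not added:
                break                            # no confident samples left
            labeled.extend(added)                # pseudo-label and absorb
            pool = [x for x, y, c in scored if c < threshold]
        return train_centroid(labeled)

    # Toy example: cirrhosis-like (1) vs. normal (0) texture feature values.
    seed = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]
    model = self_train(seed, [0.85, 0.15, 0.95, 0.05])
    print(predict_with_confidence(model, 0.9)[0])  # -> 1
    ```

    The design point carried over from the abstract is the stopping rule: only predictions above the confidence threshold are promoted to training samples, so a noisy pool does not immediately corrupt the classifier.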

  19. Agent-based modeling in ecological economics.

    PubMed

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects of human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand the dynamics of social-environmental systems.

  20. Bulk Magnetization Effects in EMI-Based Classification and Discrimination

    DTIC Science & Technology

    2012-04-01

    response adds to classification performance and (2) develop a comprehensive understanding of the engineering challenges of primary field cancellation that can support a

  1. Children's Agentive Orientations in Play-Based and Academically Focused Preschools in Hong Kong

    ERIC Educational Resources Information Center

    Cheng Pui-Wah, Doris; Reunamo, Jyrki; Cooper, Paul; Liu, Karen; Vong, Keang-ieng Peggy

    2015-01-01

    The article describes a comparative case study of children's agentive orientations in two Hong Kong preschools, one play-based and the other academically focused. Agentive orientations were measured using Reunamo's interview tool, which focuses on children's uses of accommodative and agentive orientations in everyday situations. The findings…

  2. Statistical Analysis of Q-matrix Based Diagnostic Classification Models

    PubMed Central

    Chen, Yunxiao; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2014-01-01

    Diagnostic classification models have recently gained prominence in educational assessment, psychiatric evaluation, and many other disciplines. Central to the model specification is the so-called Q-matrix, which provides a qualitative specification of the item-attribute relationship. In this paper, we develop theories on the identifiability of the Q-matrix under the DINA and the DINO models. We further propose an estimation procedure for the Q-matrix through regularized maximum likelihood. The applicability of this procedure is not limited to the DINA or the DINO model; it can be applied to essentially all Q-matrix based diagnostic classification models. Simulation studies are conducted to illustrate its performance. Furthermore, two case studies are presented. The first is a data set on fraction subtraction (educational application) and the second is a subsample of the National Epidemiological Survey on Alcohol and Related Conditions concerning social anxiety disorder (psychiatric application). PMID:26294801
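    The DINA model mentioned above has a compact item response function: an examinee answers item j correctly with probability (1 - s_j) if they master every attribute the Q-matrix requires for that item, and with guessing probability g_j otherwise. The sketch below uses the standard formulation; the Q-matrix, slip, and guess values are made up for illustration.

    ```python
    # DINA item response function sketch (standard model; toy parameters).

    def ideal_response(alpha, q_row):
        """eta = 1 iff the examinee masters every attribute the item requires."""
        return int(all(a >= q for a, q in zip(alpha, q_row)))

    def dina_prob(alpha, q_row, slip, guess):
        """P(correct) = (1 - slip)^eta * guess^(1 - eta)."""
        return (1 - slip) if ideal_response(alpha, q_row) else guess

    Q = [[1, 0], [0, 1], [1, 1]]    # 3 items x 2 attributes
    alpha = [1, 0]                  # examinee masters attribute 1 only
    probs = [dina_prob(alpha, row, slip=0.1, guess=0.2) for row in Q]
    print(probs)  # -> [0.9, 0.2, 0.2]
    ```

    Identifiability, the paper's subject, concerns when Q (and the slip/guess parameters) can be recovered from response data alone; the response function itself is this simple conjunctive rule.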

  3. Accuracy and efficiency of area classifications based on tree tally

    Treesearch

    Michael S. Williams; Hans T. Schreuder; Raymond L. Czaplewski

    2001-01-01

    Inventory data are often used to estimate the area of the land base that is classified as a specific condition class. Examples include areas classified as old-growth forest, private ownership, or suitable habitat for a given species. Many inventory programs rely on classification algorithms of varying complexity to determine condition class. These algorithms can be...

  4. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high-spatial-resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high-resolution imagery, ground objects display rich texture, structure, shape, and hierarchical semantic characteristics, and more landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high-resolution image processing. This paper presents a classification method based on geo-ontology and conditional random fields, made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing data (GeoEye) is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both efficiency and accuracy, which implies it is suitable for the classification of high-resolution remote sensing imagery.

  5. Agent-based power sharing scheme for active hybrid power sources

    NASA Astrophysics Data System (ADS)

    Jiang, Zhenhua

    The active hybridization technique provides an effective approach to combining the best properties of a heterogeneous set of power sources to achieve higher energy density, power density and fuel efficiency. Active hybrid power sources can be used to power hybrid electric vehicles with selected combinations of internal combustion engines, fuel cells, batteries, and/or supercapacitors. They can be deployed in all-electric ships to build a distributed electric power system. They can also be used in a bulk power system to construct an autonomous distributed energy system. An important aspect in designing an active hybrid power source is to find a suitable control strategy that can manage the active power sharing and take advantage of the inherent scalability and robustness benefits of the hybrid system. This paper presents an agent-based power sharing scheme for active hybrid power sources. To demonstrate the effectiveness of the proposed agent-based power sharing scheme, simulation studies are performed for a hybrid power source that can be used in a solar car as the main propulsion power module. Simulation results clearly indicate that the agent-based control framework effectively coordinates the various energy sources and manages the power/voltage profiles.

  6. Proceedings 3rd NASA/IEEE Workshop on Formal Approaches to Agent-Based Systems (FAABS-III)

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael (Editor); Rash, James (Editor); Truszkowski, Walt (Editor); Rouff, Christopher (Editor)

    2004-01-01

    These proceedings contain 18 papers and 4 poster presentations, covering topics such as multi-agent systems, agent-based control, formalisms, and norms, as well as physical and biological models of agent-based systems. Applications presented in the proceedings include systems analysis, software engineering, computer networks and robot control.

  7. Web based listing of agents associated with new onset work-related asthma.

    PubMed

    Rosenman, K D; Beckett, W S

    2015-05-01

    Work-related asthma is common and yet remains a challenge to diagnose. Access to a listing of agents associated with work-related asthma has been suggested as useful in assisting the diagnosis. The Association of Occupational and Environmental Clinics (AOEC) developed criteria that were used to review the peer-reviewed medical literature published in English. Based on this review, substances were designated either as sensitizing agents or as irritants. The reviews were conducted by a board-certified internist/pulmonologist/occupational medicine specialist from 2002 to 2007 and by a board-certified internist/occupational medicine physician from 2008 to date. All reviews were then reviewed by the nine-member AOEC board of directors. The original list of agents associated with new onset work-related asthma was derived from the tables of a textbook on work-related asthma. After 13 years of review, there are 327 substances designated as asthma agents on the AOEC list: 173 (52.9%) coded as sensitizers, 35 (10.7%) as generally recognized as asthma-causing agents, four (1.2%) as irritants, two (0.6%) as both sensitizer and irritant, and 113 (34.6%) agents that still need to be reviewed. The AOEC has developed a readily available web-based listing of agents associated with new onset work-related asthma in adults. The listing is based on peer-reviewed criteria and is updated twice a year. Regular review of the peer-reviewed medical literature is conducted to determine whether new substances should be added to the list. Clinicians should find the list useful when considering the diagnosis of work-related asthma.

  8. X-ray spatial frequency heterodyne imaging of protein-based nanobubble contrast agents

    PubMed Central

    Rand, Danielle; Uchida, Masaki; Douglas, Trevor; Rose-Petruck, Christoph

    2014-01-01

    Spatial Frequency Heterodyne Imaging (SFHI) is a novel x-ray scatter imaging technique that utilizes nanoparticle contrast agents. The enhanced sensitivity of this new technique relative to traditional absorption-based x-ray radiography makes it promising for applications in biomedical and materials imaging. Although previous studies on SFHI have utilized only metal nanoparticle contrast agents, we show that nanomaterials with a much lower electron density are also suitable. We prepared protein-based “nanobubble” contrast agents composed of protein cage architectures filled with gas. Results show that these nanobubbles provide contrast in SFHI comparable to that of gold nanoparticles of similar size. PMID:25321797

  9. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used in magnetic resonance imaging (MRI) due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modality molecular imaging cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, owing to the inherent limitations of any single imaging technique. To compensate for the deficiencies of single-function MRI contrast agents, the combination of multiple imaging modalities has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their applications in biomedicine.

  10. Agent Based Modeling of Collaboration and Work Practices Onboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Acquisti, Alessandro; Sierhuis, Maarten; Clancey, William J.; Bradshaw, Jeffrey M.; Shaffo, Mike (Technical Monitor)

    2002-01-01

    The International Space Station is one of the most complex projects ever, with numerous interdependent constraints affecting productivity and crew safety. This requires planning years before crew expeditions and the use of sophisticated scheduling tools. Human work practices, however, are difficult to study and represent within traditional planning tools. We present an agent-based model and simulation of the activities and work practices of astronauts onboard the ISS. The model represents 'a day in the life' of the ISS crew and is developed in Brahms, an agent-oriented, activity-based language used to model knowledge in situated action and learning in human activities.

  11. Agent-based model to rural urban migration analysis

    NASA Astrophysics Data System (ADS)

    Silveira, Jaylson J.; Espíndola, Aquino L.; Penna, T. J. P.

    2006-05-01

    In this paper, we analyze the rural-urban migration phenomenon as it is usually observed in economies in the early stages of industrialization. The analysis is conducted by means of a statistical mechanics approach that builds a computational agent-based model. Agents are placed on a lattice and the connections among them are described via an Ising-like model. Simulations on this computational model show some emergent properties that are common in developing economies, such as a transitional dynamics characterized by continuous growth of urban population, followed by the equalization of expected wages between rural and urban sectors (Harris-Todaro equilibrium condition), urban concentration, and increasing per capita income.
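    An Ising-like migration model of the kind described can be sketched in a few lines: each agent holds a sector choice (urban +1, rural -1), feels its lattice neighbours through a coupling J, and feels an external "field" given by the urban-rural expected-wage differential, which shrinks as the urban sector fills up (the Harris-Todaro mechanism). All parameter values below are invented for illustration, not the paper's calibration, and the lattice is reduced to a 1-D ring for brevity.

    ```python
    # Toy Ising-like rural-urban migration sketch (hypothetical parameters).
    import math
    import random

    def step(spins, field, J=0.5, beta=2.0):
        """Heat-bath update of one randomly chosen agent on a ring lattice."""
        n = len(spins)
        i = random.randrange(n)
        neigh = spins[(i - 1) % n] + spins[(i + 1) % n]
        h = J * neigh + field                    # neighbours' pull + wage gap
        p_urban = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
        spins[i] = 1 if random.random() < p_urban else -1

    random.seed(0)
    spins = [-1] * 100                           # everyone starts rural
    for _ in range(20000):
        urban_share = spins.count(1) / len(spins)
        field = 1.0 - 2.0 * urban_share          # wage gap closes as city grows
        step(spins, field)

    print(spins.count(1) / len(spins))           # settles near wage equalization
    ```

    The restoring field is the key design choice: it drives the urban share toward the point where expected wages equalize, while the neighbour coupling produces the clustering (urban concentration) the abstract mentions.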

  12. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    DOT National Transportation Integrated Search

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, with details on aggregate and test methods employed, along with agency and co...

  13. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to abstract information on suspected illegal land use in urban areas. Because the cases used for imagery classification are discrete, the proposed method handles the oscillation of spectrum or backscatter within the same land-use category; it not only overcomes a deficiency of maximum likelihood classification (that the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation, and can therefore classify better. The object-oriented technique was then used for shadow removal in densely built city zones. With multi-temporal SPOT 5 images at 2.5 × 2.5 m resolution, the researchers found that the method can abstract suspected illegal land-use information in urban areas using a post-classification comparison technique.

  14. Rapid Catalyst Capture Enables Metal-Free para-Hydrogen-Based Hyperpolarized Contrast Agents.

    PubMed

    Barskiy, Danila A; Ke, Lucia A; Li, Xingyang; Stevenson, Vincent; Widarman, Nevin; Zhang, Hao; Truxal, Ashley; Pines, Alexander

    2018-05-10

    Hyperpolarization techniques based on the use of para-hydrogen provide orders of magnitude signal enhancement for magnetic resonance spectroscopy and imaging. The main drawback limiting widespread applicability of para-hydrogen-based techniques in biomedicine is the presence of organometallic compounds (the polarization transfer catalysts) in solution with hyperpolarized contrast agents. These catalysts are typically complexes of platinum-group metals, and their administration in vivo should be avoided. Herein, we show how extraction of a hyperpolarized compound from an organic phase to an aqueous phase combined with a rapid (less than 10 s) Ir-based catalyst capture by metal scavenging agents can produce pure para-hydrogen-based hyperpolarized contrast agents, as demonstrated by high-resolution nuclear magnetic resonance (NMR) spectroscopy and inductively coupled plasma atomic emission spectroscopy (ICP-AES). The presented methodology enables fast and efficient means of producing pure hyperpolarized aqueous solutions for biomedical and other uses.

  15. School-Based Decision Making: A Principal-Agent Perspective.

    ERIC Educational Resources Information Center

    Ferris, James M.

    1992-01-01

    A principal-agent framework is used to examine potential gains in educational performance and potential threats to public accountability that school-based decision-making proposals pose. Analysis underscores the need to tailor the design of decentralized decision making to the sources of poor educational performance and threats to school…

  16. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  17. Classification of the micro and nanoparticles and biological agents by neural network analysis of the parameters of optical resonance of whispering gallery mode in dielectric microspheres

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Schweiger, Gustav; Ostendorf, Andreas

    2011-07-01

    A novel technique for the label-free analysis of micro- and nanoparticles, including biomolecules, using optical microcavity resonance of whispering-gallery-type modes is being developed. Various schemes of the method, using both standard and specially produced microspheres, have been investigated for further development toward microbial applications. It was demonstrated that optical resonance under optimal geometry could be detected at a laser power of less than 1 microwatt. The sensitivity of the developed schemes has been tested by monitoring the spectral shift of the whispering gallery modes. Water solutions of ethanol, ascorbic acid, blood phantoms including albumin and HCl, glucose, biotin, biomarkers such as C-reactive protein, as well as bacteria and virus phantoms (gels of silica micro- and nanoparticles), have been used. The structure of the resonance spectra of the solutions was a specific subject of investigation. A probabilistic neural network classifier for biological agents and micro/nanoparticles has been developed; several parameters of the resonance spectra, such as spectral shift, broadening, and diffuseness, serve as its inputs. A classification probability of approximately 98% for the probes under investigation has been achieved. The developed approach has been demonstrated to be a promising technology platform for a sensitive, lab-on-chip type sensor that can be used to develop diagnostic tools for different biological molecules, e.g. proteins, oligonucleotides, oligosaccharides, lipids, small molecules, viral particles, and cells, as well as in different experimental contexts, e.g. proteomics, genomics, drug discovery, and membrane studies.

  18. Agent-based models of financial markets

    NASA Astrophysics Data System (ADS)

    Samanidou, E.; Zschischang, E.; Stauffer, D.; Lux, T.

    2007-03-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in the economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and thereby provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  19. Robust feature detection and local classification for surfaces based on moment analysis.

    PubMed

    Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru

    2004-01-01

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions is quite difficult, yet indispensable for many surface processing applications. Usually, feature detection is done via a local curvature analysis. On large triangular and irregular grids, e.g. those generated via a marching cubes algorithm, such detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zeroth and first moments on the discrete surface. The corresponding integral quantities are stable to compute and give less noisy results than discrete curvature quantities; the stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications include segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
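    The moment idea can be illustrated on a simplified 2-D polyline analogue of the paper's surface moments (the real method works on triangulated surfaces): the zeroth moment is the mass of a local stencil, the first moment its barycenter, and the offset of the barycenter from the center vertex flags edges and corners, since on flat regions the barycenter coincides with the vertex. The stencil size and the data below are made up for illustration.

    ```python
    # Moment-based feature sketch on a 2-D polyline (toy analogue, not the
    # paper's 3-D surface integrals; stencil radius is a free scale parameter).

    def moment_feature(points, i, radius=2):
        """Barycenter offset of the stencil around vertex i.

        The stencil point count plays the role of the zeroth moment, the
        coordinate sums the first moment; their ratio is the barycenter.
        """
        stencil = points[max(0, i - radius): i + radius + 1]
        n = len(stencil)
        bx = sum(p[0] for p in stencil) / n
        by = sum(p[1] for p in stencil) / n
        x, y = points[i]
        return ((bx - x) ** 2 + (by - y) ** 2) ** 0.5

    # Flat run with one sharp corner at index 4.
    pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
    flat = moment_feature(pts, 2)
    corner = moment_feature(pts, 4)
    print(corner > flat)  # -> True: the corner's barycenter offset dominates
    ```

    Note how the feature needs no derivative estimates at all, which is the paper's argument for its stability on noisy, irregular grids; enlarging `radius` reproduces the built-in multiscale behavior.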

  20. Rule-based land use/land cover classification in coastal areas using seasonal remote sensing imagery: a case study from Lianyungang City, China.

    PubMed

    Yang, Xiaoyan; Chen, Longgao; Li, Yingkui; Xi, Wenjia; Chen, Longqian

    2015-07-01

    Land use/land cover (LULC) inventory provides an important dataset in regional planning and environmental assessment. To efficiently obtain the LULC inventory, we compared LULC classifications based on single satellite images with a rule-based classification based on multi-seasonal imagery in Lianyungang City, a coastal city in China, using CBERS-02 (the 2nd China-Brazil Environmental Resource Satellite) images. The overall accuracies of the classifications based on single images are 78.9, 82.8, and 82.0% in winter, early summer, and autumn, respectively. The rule-based classification improves the accuracy to 87.9% (kappa 0.85), suggesting that combining multi-seasonal images can considerably improve the classification accuracy over any single image-based classification. This method could also be used to analyze seasonal changes of LULC types, especially those associated with tidal changes in coastal areas. The distribution and inventory of LULC types, with an overall accuracy of 87.9% and a spatial resolution of 19.5 m, can efficiently assist regional planning and environmental assessment in Lianyungang City. This rule-based classification provides guidance for improving accuracy in coastal areas with distinct seasonal LULC spectral features.
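    The shape of such a rule-based fusion of per-season labels can be sketched as below. The class names and the rules are invented for illustration; the paper's actual rule set is not reproduced here, though the tidal-flat rule mirrors its observation that seasonal water/land disagreement is informative in coastal areas.

    ```python
    # Hypothetical rule-based fusion of per-season pixel labels.

    def fuse(winter, summer, autumn):
        """Combine three seasonal labels for one pixel into a final class."""
        votes = [winter, summer, autumn]
        if votes.count("water") == 3:
            return "water"                       # permanently inundated
        # seasonal disagreement between water and bare land -> tidal flat
        if "water" in votes and "bare" in votes:
            return "tidal_flat"
        # otherwise majority vote, falling back to the summer label
        for c in set(votes):
            if votes.count(c) >= 2:
                return c
        return summer

    print(fuse("water", "bare", "water"))   # -> tidal_flat
    print(fuse("veg", "veg", "bare"))       # -> veg
    ```

    The point of encoding rules over seasons, rather than classifying one image, is that temporally varying classes like tidal flats are invisible to any single-date classifier.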